/*
 *  fmaf.c
 *
 *      by Ian Ollmann
 *
 *  Copyright (c) 2007, Apple Inc. All Rights Reserved.
 *
 *  C implementation of C99 fmaf() function.
 */

#include <math.h>
#include <stdint.h>

float fmaf( float a, float b, float c )
{
    double product = (double) a * (double) b;   // exact
    double dc = (double) c;                     // exact

#warning fmaf not completely correct
    // Simply adding c here is incorrect about 1 in a billion times.
    // While the double precision add here is correctly rounded,
    // we take a second rounding on conversion to float on return,
    // which may cause us to be off by very slightly over half an ulp
    // in round to nearest.
    double sum = product + dc;

    // Ideally, we should test here and patch up the result.
    // I think the problem only occurs in round to nearest for
    // exact halfway cases in product with a non-zero c.
    // Presumably, we could check to see if the difference between
    // (float) sum and sum is a power of two (the right exact power
    // of two) and c is non-zero, and it rounded the wrong way; then
    // we might tweak the answer by an ulp using something like nextafter.
    // Happily, denormals are not a problem during this check.
    //
    // Alternatively, if we figure out the problem of correctly rounded
    // 3-way adds, the product could be broken into 2 floats, and we
    // could do a 3-way add of prodHi, prodLo and c. Crlibm has a function
    // that might do the job (DoRenormalize3), but I'm thinking that it doesn't.
    //
    // Finally, to be completely right, we'd have to detect rounding mode.
    // The halfway cases are different in other rounding modes.
    return (float) sum;
}

http://opensource.apple.com/source/Libm/Libm-315/Source/ARM/fmaf.c
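The double rounding Ollmann warns about can be demonstrated concretely. The sketch below is my own illustration in Java (so that Math.fma, available since Java 9, can serve as the correctly rounded reference); the inputs are hand-constructed, not taken from the Apple source, so that the exact result 1 + 2^-24 + 2^-60 sits just above a float tie: the intermediate double add discards the deciding 2^-60 bit, and the cast back to float then rounds the tie to even, which is the wrong direction.

```java
public class FmaDemo {
    // Naive fmaf in the style of the listing above: the double product is
    // exact (at most 48 significant bits), the double add rounds once, and
    // the cast back to float rounds a second time.
    static float naiveFmaf(float a, float b, float c) {
        double product = (double) a * (double) b; // exact
        return (float) (product + (double) c);    // two roundings
    }

    public static void main(String[] args) {
        float a = 0x1.001p0f;      // 1 + 2^-12
        float b = 0x1.ffe002p-25f; // 2^-24 - 4095 * 2^-48
        float c = 1.0f;
        // Exact a*b = 2^-24 + 2^-60, so a*b + c = 1 + 2^-24 + 2^-60:
        // just above the halfway point between 1 and 1 + 2^-23.
        float naive = naiveFmaf(a, b, c);  // 1.0f: the double add drops 2^-60,
                                           // then the tie rounds to even
        float fused = Math.fma(a, b, c);   // 1.0f + 2^-23: correctly rounded
        System.out.println(naive + " vs " + fused);
    }
}
```

With these inputs the naive version returns 1.0f while the fused version returns the next float up, exactly the kind of one-ulp, one-in-a-billion error the comment describes.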
Constantly.
--Eric S. Raymond
Read the rest in Silence Is Golden
Companies like Cisco and Websense are exporting censorship technologies to China that make them money, but create a democratic deficit behind the Great Firewall. These same companies argue that they have no control over how their products are used, and regardless, Chinese economic development will lead to political development.
This rationale is naive at best and self-serving at worst. Censorship not only stifles the free flow of information, it also creates international instability. And while Western companies continue to focus on short-term profit, the long-term results are clearly to our disadvantage.
There is not a single technology that doesn't go to the People's Liberation Army for testing and reverse engineering. And while we continue to reward China for bad behavior, they're laughing at us and looking for the next group of suckers eager to make a nickel. Move over Enron, you've just been eclipsed in the greed and gutless department.
--Oxblood Ruffin, executive director and founder of Hacktivismo
Read the rest in Wired News: The Fantasy and Reality of 2004
Of course, the "Java" Desktop has absolutely nothing to do with Java. It's really.
--Norman Richards
Read the rest in My impressions of the "Java" Desktop System demo
C# has a disadvantage of letting programmers write error-prone code, with the potential to wreak havoc with a program's address space. You have to tag the code as "unsafe." One might consider such tags like a restaurant sign that says, "We failed a health inspection last month."
--Tom Adelstein
Read the rest in How to Misunderstand Sun's Linux Desktop Strategy
We will agree to match any offer Microsoft puts on the table for desktop software — at 50 per cent of Microsoft's quoted offer. No matter what their offer, we'll agree to provide the software for half their price. If they offer Windows and Office for $200 per desktop, we'll offer them for $100. If they offer $50, our offer will be $25
--Jonathan Schwartz, Sun vice-president
Read the rest in The Telegraph - Calcutta : Business
How prisoners at Guantánamo Bay have been treated we do not know. But what we do know is not reassuring. At Camp Delta the minute cells measure 1.8 meters by 2.4 meters (6 feet by 8 feet). Detainees are held in these cells for up to 24 hours a day. Photographs of prisoners being returned to their cells on stretchers after interrogation have been published. The Red Cross described the camp as principally a center of interrogation rather than detention.
--Johan Steyn
Read the rest in IHT: Search
Now maybe, just maybe, Saddam's capture will start a virtuous circle in Iraq. Maybe the insurgency will evaporate; maybe the cost to America, in blood, dollars and national security, will start to decline. But consider last week's polls: despite the complete absence of evidence, 53 percent of Americans believe that Saddam had something to do with 9/11, up from 43 percent before his capture. The administration's long campaign of guilt by innuendo, it seems, is still working.
--Paul Krugman
Read the rest in Op-Ed Columnist: Telling It Right
Without actual numbers, work on performance is basically just blind guessing. There is also a large danger of placebo effects... if you tell someone that something is faster now, they'll see it as faster.
But if we can document that the time between pressing the mouse button on a menu item and the time for the submenu to pop up and fully paint is 20ms, then we can start looking at exactly what is being done in the 20ms, and when we make changes, we can verify that they actually improved the situation.
--Owen Taylor
Read the rest in Interview: Red Hat's Owen Taylor on GTK+ - OSNews.com.
--Robin Gross
Read the rest in O'Reilly Network: Robin Gross Seeks International IP Justice [Feb. 20, 2003]
Disks will replace tapes, and disks will have infinite capacity. Period. This will dramatically change the way we architect our file systems. There are many more questions opened by this than resolved. Will we start using an empty part of the disk for our tape storage, our archive storage, or versions? Just exactly how does that work? And how do I get things back? I don't think there is much controversy about that, especially if you set the time limit far enough out: I would say three years; others would say 10 years.
--Jim Gray
Read the rest in ACM Queue - Content
'Free trade', like the 'free market', is a myth. Adam Smith's 'invisible hand' is shackled and always has been. Tariffs are only ever part of the story. Subsidies (such as in agriculture), tax breaks, immigration restrictions, labor laws, consumer protection laws, 'blue laws' restricting the business hours of certain industries (car dealers in IL for example) etc. in the U.S. and every other country distort pretty much any market you care to name.
That's why it drives me nuts hearing all the libertarian fairy tales about market capitalism every time there's a discussion regarding business in the U.S. News flash: business is already heavily regulated. The question isn't 'whether', but 'what', 'why' and 'how.'
--Chris Kaminski on the WWWAC mailing list, Wednesday, 26 Nov 2003.
--Bill Venners
Read the rest in The Philosophy of Ruby
If I were to be God at this point, and many people are probably glad I am not, I would say deprecate Cloneable and have a Copyable, because Cloneable has problems. Besides the fact that it's misspelled, Cloneable doesn't contain the clone method. That means you can't test if something is an instance of Cloneable, cast it to Cloneable, and invoke clone. You have to use reflection again, which is awful.
--Ken Arnold
Read the rest in Java Design Issues
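A sketch of the Copyable interface Arnold is asking for (the interface and classes below are hypothetical, not from any JDK proposal): because the interface actually declares the copying method, an instanceof test plus a cast is enough, with none of the reflection that Cloneable forces.

```java
// Hypothetical replacement for Cloneable: the method lives in the interface.
interface Copyable<T> {
    T copy();
}

class Point implements Copyable<Point> {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    public Point copy() { return new Point(x, y); }
}

public class CopyableDemo {
    public static void main(String[] args) {
        Object o = new Point(1, 2);
        if (o instanceof Copyable) {               // testable, unlike Cloneable
            Object dup = ((Copyable<?>) o).copy(); // no reflection needed
            System.out.println(dup instanceof Point);
        }
    }
}
```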
The losers are the cities that don't let these people organize and be themselves and express their.
--Richard Florida
Read the rest in On a Hunt for Ways to Put Sex in the City.
--Paul Graham
Read the rest in The Hundred-Year Language.
--Anders Hejlsberg
Read the rest in The C# Design Process
when you're building a library, it's not enough to just accumulate good components. Take a data structure library as an example. You might have excellent classes for lists, stacks, files, and btrees, but taken together they don't make an excellent library if they are inconsistent. If they use different conventions, they aren't part of a single design. For example, when you're putting an element into an array, you might have an insert operation that takes x and i, where x is the element and i is the index. For the hash table class you might have an insert operation that takes key and x, where key is the key and x is the element. The order of arguments is reversed. The order of arguments might make perfect sense within each class, but when you start approaching the library as a whole, you're in new territory each time you look at a new class. You don't get a feeling of consistency. Instead you get a feeling of a mess—something that is a collection of pieces rather than a real engineering design. What we found many years ago when we started focusing seriously on libraries is that just as much attention has to be devoted to the construction of the library as a whole as to the construction of the individual elements.
--Bertrand Meyer
Read the rest in Design by Contract
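Meyer's argument-order example can be made concrete. Below is a minimal Java sketch (the Containers class and its convention are invented for illustration): both insert overloads take (container, where, element) in the same order, so moving from lists to tables stays familiar instead of reversing the arguments.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One convention across containers: insert(container, where, element).
final class Containers {
    static <E> void insert(List<E> list, int index, E element) {
        list.add(index, element);   // (where, element)
    }
    static <K, V> void insert(Map<K, V> map, K key, V element) {
        map.put(key, element);      // (where, element) again, not reversed
    }
}

public class ConsistencyDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        Containers.insert(names, 0, "ada");
        Map<String, Integer> ages = new HashMap<>();
        Containers.insert(ages, "ada", 36);
        System.out.println(names + " " + ages);
    }
}
```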
Expressing basic methods like algorithms for sorting and searching in machine language makes it possible to carry out meaningful studies of the effects of cache and RAM size and other hardware characteristics (memory speed, pipelining, multiple issue, lookaside buffers, the size of cache blocks, etc.) when comparing different schemes.
Therefore I will continue to use English as the high-level language in TAOCP, and I will continue to use a low-level language to indicate how machines actually compute. Readers who only want to see algorithms that are already packaged in a plug-in way, using a trendy language, should buy other people's books.
--Donald Knuth
Read the rest in Amazon.com: Books: The Art of Computer Programming, Volumes 1-3 Boxed Set
You can't get 100% Mac-like behavior using Swing; there are too many places where the Mac interface is the square peg to Swing's Windows-based round hole.
--Glen Fisher on the java-dev mailing list, Friday, 05 Dec 2003.
--Rob Gingell, Sun Microsystems fellow and chief engineer
Read the rest in Standards and Innovation?
--Robert X. Cringely
Read the rest in PBS | I, Cringely . Archived Column
The last straw seems to have been the working lunch session on Wednesday. I presented about J2EE and web services during the first slot (about 20 minutes), explaining about the way web services has been adopted by the non-Microsoft world and how this now provides an integration bridge. In the second slot I spoke for 15 minutes or so to explain the Linux desktop world that's rapidly evolving, epitomised by Sun's 'Mad Hatter' project. The Java session was scrupulously non-partisan (at least in intent & in my opinion), the second was more Sun-oriented as the case-in-point was Mad Hatter & there was no way to generalise it.
During the afternoon, Neil came over to me and said that some of the other speakers (no names) had been incensed that I covered Java in my talk and said they had asked that I not participate in the evening Q & A. We reached an accommodation. End of history.
Now, what's interesting here is the dimension it illuminates for me of the outlook of Microsoft insiders. This is the first time I have ever had other speakers approach the event organiser and ask for me to be removed from the agenda, and naturally my first reaction was to feel hurt, shamed and insulted (in roughly that order). I have gone out of my way, being aware this is billed as '.Net Nirvana', to be non-partisan and inclusive and to avoid at all costs criticising either .Net or Microsoft - only one slide out of everything I have presented has even attempted a comparison.
But the more I think about it, the more it resonates with what I have read in books like 'Hard Drive' about Microsoft's ethos being one of 'Win at all costs, and they are all out to get us'. It seems the automatic assumption of some of the other speakers was that I was in some way 'out to get' Microsoft, that my agenda was attack, so despite that being absent from my intent it was read in as a sub-text to what I said. Considering that the people involved represent the attitudes of the largest, most aggressive company in my industry, immune from almost every attack and even able to shrug off conviction under the Sherman Act like a speeding ticket from a small-town cop, they showed a vulnerability and insecurity which speaks volumes of the way Microsoft likes its people to feel and act.
--Simon Phipps
Read the rest in Webmink: the blog.
--Greg Ross, Go-Kart Records
Read the rest in Downhill Battle - Go-Kart Records Interview.
--James Gosling
Read the rest in Failure and Exceptions
One of the prosecutors told me that they think 30% of the people in Guantanamo Bay were nothing to do with anything. They were just in the wrong place at the wrong time.
--Clive Stafford-Smith
Read the rest in Guardian Unlimited | Special reports | People the law forgot (part two).
--Javva the Hutt
Read the rest in Javva The Hutt November 2003
public static void main(String[] args) {
    int I = 0;
    int S = 0;
    int N[] = null;
    while (jump != -1) {
        try {
            switch (jump) {
                case 10: N = DIM(10);
                case 20: I = 0;
                case 30: N[I] = INT(1000 * RND());
                case 40: I = I + 1; if (I < 10) GOTO(30);
                case 50: S = 1; I = 0;
                case 60: if (N[I] < N[I + 1]) GOTO(80);
                case 70: int T = N[I]; N[I] = N[I + 1]; N[I + 1] = T; S = 0;
                case 80: I = I + 1; if (I < 9) GOTO(60);
                case 90: if (S == 0) GOTO(50);
                case 100: I = 0;
                case 110: PRINT(N[I]);
                case 120: I = I + 1; if (I < 10) GOTO(110);
                case 130: STOP();
            }
            // if there was no GOTO then we want to end the program
            STOP();
        } catch (GotoException ex) {
            // GOTO was called, and a GotoException has caused the
            // control to pass outside of the switch statement
        }
    }
}
--Dr. Heinz M. Kabutz
Read the rest in 2003-03-31 The Java Specialists' Newsletter [Issue 067] - BASIC Java
Calling all cars: be advised of an all-points bulletin for individuals wanted for questioning in connection with crimes against consumerism perpetrated in the New York region yesterday.
The authorities could not say whether the incidents documented in reports around the city were connected, but it was clear that the wanted individuals could be considered derelict in the duties inherent to living in one of the most privileged societies in history. It was the day after Thanksgiving, and they were not shopping.
--Michael Brick
Read the rest in Some People Didn’t Spend the Day Shopping. Maybe Even on Purpose.
I’ve given this problem a whole lot of thought. I think that the way that we deal with performance is pretty much fundamentally flawed. Right now, we do one of two things: we try to make everything super-fast the first time, or we wait until someone screams in production. Both are dangerous and expensive. The first problem is flawed, because developer intuition sucks. We simply guess wrong more than we guess right. Smart people have not been immune, either: the initial models for CORBA and EJB entity beans were fundamentally flawed, because they injected too much communication costs for typical usage models. If you’re guessing, then you’re either building in too much performance (which is incredibly expensive), or you’re missing on your performance goals. And we all know what waiting until production does to our future schedules and well-intentioned designs.
Ideally, we should measure our fundamental performance requirements using JUnit test cases (JUnitPerf, from clarkware.com, is a fantastic start.) But we simply don’t have enough tools to do so today. The ideal tool would be ant-integrated, automated, and require as few code changes as possible.
--Bruce Tate
Read the rest in The Interview: Bruce Tate, Bitter Java
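Tate's point about measuring instead of guessing can be sketched without JUnitPerf at all. The class below is a hypothetical stand-in (the operation, the 500 ms budget, and all names are invented for illustration): it times an operation and fails an assertion when the budget is exceeded, which is the essential shape of a performance requirement expressed as a test.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class PerfCheck {
    // Time one concrete operation: sorting n reverse-ordered integers.
    static long timeSortMillis(int n) {
        List<Integer> data = new ArrayList<>();
        for (int i = n; i > 0; i--) data.add(i);
        long start = System.nanoTime();
        Collections.sort(data);
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeSortMillis(100_000);
        if (elapsed > 500) {  // the "requirement": an invented budget
            throw new AssertionError("sort took " + elapsed + " ms");
        }
        System.out.println("sorted within budget: " + elapsed + " ms");
    }
}
```

Run regularly (say, from an Ant target), a check like this turns a vague performance goal into something that fails loudly before production does.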
The international aid policy should apply the medical principle, "first do no harm" and cancel Iraq's debts. In addition, Iraq should not have to endure an IMF structural adjustment programme. We must not require Iraq to privatise its society and allow its natural resources to be asset stripped. In the second world war, allied soldiers used the term "liberate" as a euphemism for looting and the same is proving to be true today on a grand scale.
More generally, foreign aid is not a high priority. Iraq is a country with rich agricultural land, huge oil resources and a highly skilled population that kept the country functioning through three wars and the UN blockade. In too many cases, aid to developing countries means rich countries subsidising their own businesses and third-rate consultants. Iraqis should not be prevented from owning and controlling the reconstruction effort.
--Dan Plesch
Read the rest in New York Times: NYT HomePage.
--Huey Freeman, Thanksgiving Prayer
Read the rest in Huey Freeman: American Hero
The constitutional scholar Raul Berger once told me that the main purpose of one party is to keep the other party honest. We didn't have that. And the burden on journalism was overwhelming to what too few are equipped to do -- go to original material, provide plenty of airtime to dissenting opinions. We wound up with far more airtime going to official spokesmen than to skeptics. I've gone back and reviewed transcripts of many of the interview programs conducted in the build-up to the invasion. Hawks like Richard Perle were thrown softball after softball, and their assertions for invasion basically went unchallenged. Our mandate at NOW is to provide alternative voices and views and when we started fulfilling that mandate, the hawks wouldn't come on. They didn't want to be challenged. Colin Powell's now largely-discredited speech to the U.N. was hailed at the time as if it were an oration by Pericles; there was no one with the evidence to challenge him until some time had passed.
I guess I was most astonished at the imbalance of the Washington Post -- something like three-to-one pro-war columns on the op-ed page. The press seemed to throw to the wind Ben Bradlee's Watergate requirement of two sources for every allegation. Or some sense that people other than the establishment should have been heard on war and peace.
--Bill Moyers
Read the rest in Bill Moyers is Insightful, Erudite, Impassioned, Brilliant and the Host of PBS' "NOW" - A BuzzFlash Interview
the designer's job is not only to create something that will work correctly and efficiently, but something that is also easy for the client to understand and use.
--Bill Venners
Read the rest in Analyze this!
The F.B.I. is dangerously targeting Americans who are engaged in nothing more than lawful protest and dissent. The line between terrorism and legitimate civil disobedience is blurred, and I have a serious concern about whether we're going back to the days of Hoover.
-- Anthony Romero, executive director of the American Civil Liberties Union
Read the rest in F.B.I. Scrutinizes Antiwar Rallies.
--David Pogue
Read the rest in Apple's Latest 0.1 Adds a Lot
Every five to 10 years, Silicon Valley goes broke. This began in the 1950s and maybe long before, but the 1950s is as early as I care to write about. The Valley then was filled with apricot and cherry orchards only to see agriculture driven out first by the military and aerospace, and then by semiconductor companies. It is fitting that Shockley Semiconductor -- the first of many transistor companies -- was started in a shed previously used for drying apricots. Transistors begat Integrated Circuits, which begat memory chips, which begat microprocessors, which begat personal computers, which begat consumer software, which begat networks, which begat the Internet, which begat the day before yesterday and the day after tomorrow. And each of those transitions was accompanied by a seismic shudder going through the Valley as companies went under and home prices slowed, for just a moment, their inexorable rise before continuing to climb again. A few familiar names survived from each era, but most of the companies went out of business because that's the way it is. We burn our fields in Silicon Valley, then plow the ashes under and start anew. It is perfectly natural, then, for companies to die here, but that doesn't mean there is no room for regret and nostalgia. So today I look with nostalgia on Sun Microsystems and hope -- probably in vain -- that the company doesn't die.
Sun did not invent the engineering workstation, but they certainly perfected it. But where are workstations today? Gone, for the most part. Sun's workstation business is about the same size as SGI's, which is to say small. Sun is now a server company, but that won't last long either under the onslaught of Linux. Cheap Intel and AMD hardware running Linux is going to kill Sun unless the company does something so stop it, which they aren't.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
--Dave Thomas
Read the rest in Orthogonality and the DRY Principle
Java doesn't run everywhere. Sun kaboshed that by keeping it closed to ownership but open to ideas, then suing Microsoft and forcing it off the distribution. Java's problems are still development tools and performance.
-- Claude L (Len) Bullard on the xml-dev mailing list, Tuesday, 18 Nov 2003
If you can come up with more than say 25 or 30 member functions, that strongly suggests you have probably merged more than one concept into a single class. You should probably think about splitting that class into pieces.
--Scott Meyers
Read the rest in Designing Contracts and Interfaces
despite what the users say, it's very hard to judge what's actually important to them, because they themselves may not know. You may collect requirements and interview users. You may be certain that a particular feature is the most important. You put all your work into that important feature and ignore another minor feature that the user didn't seem to care much about. But later, you find out that in practice the users use this important feature only once every six months. The minor feature that you kind of ignored, they use six times a day. Now that's a huge problem.
What features are most important is not always clear up front. It's not even always clear to users. You need to be prepared to rock and roll and be flexible a bit. There's a kind of Heisenberg effect as you put a system into production and real users start using it. The act of introducing the system changes how the users work. It's almost impossible up front to be sure you know what the user wants, and then implement that perfectly. The very act of introducing your software into the user's world changes the game.
--Andy Hunt
Read the rest in Good Enough Software
And however grim the Cuban crackdown, it beggars belief that the denunciations have been led by the US and its closest European allies in the "war on terror". Not only has the US sentenced five Cubans to between 15 years and life for trying to track anti-Cuban, Miami-based terrorist groups and carried out over 70 executions of its own in the past year, but (along with Britain) supports other states, in the Middle East and Central Asia for example, which have thousands of political prisoners and carry out routine torture and executions. And, of course, the worst human rights abuses on the island of Cuba are not carried under Castro's aegis at all, but in the Guantanamo base occupied against Cuba's will, where the US has interned 600 prisoners without charge for 18 months, who it now plans to try in secret and possibly execute - without even the legal rights afforded to Cuba's jailed oppositionists.
Which only goes to reinforce what has long been obvious: that US hostility to Cuba does not stem from the regime's human rights failings, but its social and political successes and the challenge its unyielding independence offers to other US and western satellite states.-backed (currently there are 1,000 working in Venezuela's slums) and given a free university education to 1,000 third world students a year. How much of that would survive a takeover by the Miami-backed opposition?
--Seumas Milne
Read the rest in Guardian Unlimited | Special reports | Seumas Milne: Why the US fears Cuba?
--Joshua Marinacci
Read the rest in Swing has failed. What can we do?
Microsoft is a large corporation that has fallen prey to the sort of dysfunctional world view that other large companies, like IBM, GM, LockheedMartin and others fall prey to as well; and this dysfunctional world view almost ensures that Microsoft will find an open standards approach to technology development threatening and abhorrent.
As Microsoft has come to dominate the market, their view of their business has changed from seeing themselves as meeting customers needs to viewing the marketplace as a consumer of their products. The corresponding change in corporate strategy is to stop changing the products to give the customers what they want but instead to start manipulating the market to be sure it consumes what they sell and only what they sell.
--Rod Davison on the xml-dev mailing list, Wednesday, 12 Nov 2003
A more useful comparison is obtained by looking at normalized benchmarks. Here, the G5 benchmarks at 0.127 MFLOPS/MHz, the two G4 machines benchmark at 0.103-0.105 MFLOPS/MHz, and the two P4 machines come in at 0.096 MFLOPS/MHz.
--Craig A. Hunter
Read the rest in NASA G5 Study: Part 1
Some development cultures use Singleton all over the place, but it's just a global variable. We used to know that global variables are bad, but that's somehow been lost. So, we could do this with a Singleton or we can rearrange things so we needn't use a Singleton -- and the code will be more valuable as a result.
--Kent Beck
Read the rest in Working smarter, not harder: An interview with Kent Beck
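Beck's rearrangement can be sketched in a few lines of Java (all names below are invented for illustration): instead of reaching for a Config.getInstance() global, the collaborator is passed in, so callers and tests can substitute their own.

```java
// The dependency a Singleton would have hidden behind a global accessor.
class Config {
    private final String dbUrl;
    Config(String dbUrl) { this.dbUrl = dbUrl; }
    String dbUrl() { return dbUrl; }
}

class ReportService {
    private final Config config;  // injected, not Config.getInstance()
    ReportService(Config config) { this.config = config; }
    String describe() { return "reports via " + config.dbUrl(); }
}

public class NoSingletonDemo {
    public static void main(String[] args) {
        // A test can hand in whatever Config it likes; nothing is global.
        Config testConfig = new Config("jdbc:h2:mem:test");
        System.out.println(new ReportService(testConfig).describe());
    }
}
```

The rearranged code is more valuable precisely because nothing in it depends on hidden shared state.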
C# is nothing to sneeze at (precisely what I did when I first saw it, primarily based on Microsoft's past performance implementing the C++ standard, which was nothing short of abominable). C# is the best thing Microsoft has done in the language arena -- very well thought-out and architected. I'd have to say, strictly from a business standpoint, if I were creating a product that I was basing the company on, I would seriously consider writing it in C# (at least, when the next MS Windows looks more imminent, since in theory it will have the .NET runtime built in).
There's no question that C# is the first real competition Java has seen, and already Sun has started to respond to this. JDK 1.5 will have features that are clearly inspired directly by C#. At Java One they announced some kind of new desktop development system where they want to make the creation of GUI desktop apps easier than VB. Based on this response, I can say that C# will do nothing but good for Java.
--Bruce Eckel
Read the rest in Bruce Eckel's MindView, Inc: 7-02-03 Java vs. .NET
I'm interested in figuring out how we can build a Net that is a lot less prone to viruses and spam, and not just by putting in filters and setting up caches to test things before they get into your computer. That doesn't really solve anything. We need an evolutionary step of some sort, or we need to look at the problem in a different way.
I'm not convinced there's not something modest we can do that would make a big difference. You have to find a way to structure your systems in a safer way. Writing everything in Java will help, because stuff written in antique programming languages like C is full of holes. Those languages weren't designed for writing distributed programs to be used over a network. Yet that's what Microsoft still uses. But even Java doesn't prevent people from making stupid mistakes.
My own biggest mistake in the last 20 years was that sometimes I designed solutions for problems that people didn't yet know they had. That's why some of the things that could've made a difference couldn't find a market. When people get hit between the eyes with a two-by-four by these viruses, they know they have a problem. Still, the right time to address it would have been a while ago. The hardest part isn't inventing the solution but figuring out how to get people to adopt it.
--Bill Joy
Read the rest in Fortune.com - Technology - Joy After Sun.
--Thomas Goetz
Read the rest in Wired 11.11: Open Source Everywhere.
--Bruce Schneier
Read the rest in Newsday.com - Terror Profiles By Computers Are Ineffective
From the time I woke up in that hospital, no one beat me, no one slapped me, no one, nothing. I'm so thankful for those people, because that's why I'm alive today.
--Jessica Lynch
Read the rest in Jessica Lynch Criticizes U.S. Accounts of Her Ordeal
So much of what we want to do is all tied up in somebody's intellectual property. It's a complete sclerotic mess, where nobody has any freedom of movement. Everything that open source has been fighting in software is exactly where we find ourselves now with biotechnology.
--Richard Jefferson
Read the rest in Wired 11.11: Open Source Everywhere
We've had the same good experience with Dell, but the trick seems to be to always buy through the small business division, not the consumer division. As editor of the WinXPNews, I get complaints from readers all the time about Dell's consumer tech support, which has been outsourced to India. However, the tech support we get, purchasing as a business, is always excellent.
--Deb Shinder on the cbp mailing list, Friday, 31 Oct 2003
File sharing is a reality, and it would seem that the labels would do well to learn how to incorporate it into their business models somehow. Record companies suing 12-year-old girls for file sharing is kind of like horse-and-buggy operators suing Henry Ford.
--Moby
Read the rest in Artists blast record companies over lawsuits against downloaders
Here's the great disconnect between most technical users and the people who just want to use computers as a tool. Most people look at a PC the same way they look at a piece of stereo equipment or the TV - they plug it in, and they want it to just work. If there's any user interaction at all, they want it on - at most - the level of interaction they have with a ReplayTV or VCR. You simply cannot expect average users to deal with firewalls, security updates, etc. I know I've posted before that having Windows Update on by default would drive me nuts - but I think it's probably the right answer (so long as it could be disabled manually - most people wouldn't bother).
It's worse than that though. For way too many years now, Windows has been shipping with the defaults set to wide open. Maybe that was excusable through Windows 95 - but by Win 98, ME, and 2000? And XP? This is why there are so many zombies out there sending spam and viruses - because these systems have been shipped in what amounts to a broken state, and the unsurprising has happened - they've been compromised. In most cases, the infections won't clear until those systems are junked and replaced with new systems (presuming that the security defaults for those new systems are reasonable).
Next time one of your non-tech friends asks for system advice, suggest a Mac. You'll be doing the entire world a favor.
--James A. Robertson
Read the rest in Cincom Smalltalk Blog - Smalltalk with Rants: View
Our society and our democracy is better served by open voting systems.
--Cindy Cohn, Electronic Frontier Foundation
Read the rest in Wired News: E-Vote Software Leaked Online
There's a price for this, and democracy pays it. Somewhere around here I've got a copy of a study by The Project for Excellence in Journalism that examined the front pages of The New York Times and The Los Angeles Times, looked at the nightly news programs of ABC, CBS and NBC, read Time and Newsweek, and found that between 1977 and 1997 the number of stories about government dropped from one in three to one in five, while the number of stories about celebrities rose from one in every 50 stories to one in every 14. More recently the nightly newscasts gave four times the coverage to Arnold Schwarzenegger's campaign in California than to all gubernatorial campaigns in the country throughout 2002.
Does it matter? Well, governments can send us to war, pick our pockets, slap us in jail, run a highway through our back yard, look the other way as polluters do their dirty work, slip tax breaks and subsidies to the privileged at the expense of those who can't afford lawyers, lobbyists, or time to be vigilant. Right now, as we speak, House Republicans are trying to sneak into the energy bill a plan that would prohibit water pollution lawsuits against oil and chemical companies. Millions of consumers and their water utilities in 25 states will be forced to pay billions of dollars to remove the toxic gasoline additive MTBE from drinking water if the House gives the polluters what they want. I can't find this story in the mainstream press, only on niche websites. You see, it matters who's pulling the strings, and I don't know how we hold governments accountable if journalism doesn't tell us who that is.
On the other hand, remember during the invasion of Iraq a big radio-consulting firm sent out a memo to its client stations advising them on how to use the war to their best advantage -- they actually called it "a war manual." Stations were advised to "go for the emotion" -- broadcast patriotic music "that makes you cry, salute, get cold chills…." I'm not making this up. All of this mixture of propaganda and entertainment adds up to what? You get what James Squires, the long-time editor of the Chicago Tribune, calls "the death of journalism." We're getting so little coverage of the stories that matter to our lives and our democracy: government secrecy, the environment, health care, the state of working America, the hollowing out of the middle class, what it means to be poor in America. It's not that the censorship is overt. It's more that the national agenda is being hijacked. They're deciding what we know and talk about, and it's not often the truth behind the news.
--Bill Moyers
Read the rest in Bill Moyers is Insightful, Erudite, Impassioned, Brilliant and the Host of PBS' "NOW" - A BuzzFlash Interview
Amazing what folks will do for a t-shirt. We must have all witnessed the violence at trade shows... (JavaOne tickets: $1400; travel expenses: $2200; JavaOne James Gosling Edition T-Shirt three sizes too large: priceless).
--Kathy Sierra on the cbp mailing list, Saturday, 25 Oct 2003
It's very simple. If you are making something to give away to the world, something that represents to millions of users your philosophy of computing, you will always make it the very best product you can make. That's the reason why Linux is a success.
--Linus Torvalds
Read the rest in PBS | I, Cringely . Archived Column.
--Bruce Schneier
Read the rest in Newsday.com - Terror Profiles By Computers Are Ineffective
What's most annoying, though, is the Apple Attitude: Any problems with your Mac are Your Fault. Any perceived shortcomings are Your Bad Attitude that Needs Changing.
- Flimsy power adapter? You bent it, so warranty won't cover it. (Apple Store staff person)
- Tiny keys? If you must have larger keys, plug in an external keyboard! (Another Apple Store staff person)
- No keyboard shortcuts? Install Emacs! (An Apple trade show rep). I actually did that. OS X renders its window decorations so beautifully!
- Slow VM startup times compared to Linux? You must be wrong. (Another Apple trade show rep)
- Swing flakiness? Maybe the early betas, but now OS X is the best platform for the Mac.(Yet another Apple trade show rep)
It looks as if Steve Jobs' reality distortion field is really working, at least inside Apple stores and show booths..
--Cay Horstmann
Read the rest in Is Apple's OS X The Best (or even A Good) Platform for Java Development?
Microsoft used to dismiss Linux as 1980s technology, which pretty much describes both Linux and Windows, it seems to me. Now they talk about "total cost of ownership" and find some way to make it look like using free software is more expensive in the long run than using software from Microsoft. Linux is certainly not free, but it is Microsoft's tech support that has been compared to the Psychic Friends Network, not Red Hat's or SuSE's. Just because Microsoft has a big support operation doesn't mean you'll actually get a solution to your problem.
--Robert X. Cringley
Read the rest in PBS | I, Cringely . Archived Column
The.
--David Pogue
Read the rest in Apple's Latest 0.1 Adds a Lot
Lawsuits on 12-year-old kids for downloading music, duping a mother into paying a $2,000 settlement for her kid?. Those scare tactics are pure Gestapo.
--Chuck D, Public Enemy
Read the rest in Artists blast record companies over lawsuits against downloaders
“Exceptions change the default behaviour on an error from being unpredictable, to being fail-fast”. If an exception occurs that you were not expecting or that your code was not set up to handle, the exception will cause the operation to fail immediately. It will fail without causing any further damage, and without moving the observed error any further from its root cause.
This is valuable. Failing fast is the only valid response to an unexpected error. There’s no way forward, because the system is no longer in a predictable state. There’s no way back, because without having anticipated the problem, you don’t have any way to fix it. So you just have to stop.
This is one reason that returning null from a method is generally a bad idea. null is usually a disguised error code. It means “you expected something to be here, but it really isn’t”. Worse, in Java a null is a time bomb, waiting to be dereferenced and blow up the code far from the original problem. If there being nothing to return is unexpected, consider throwing an exception. If it is expected, consider a null object refactoring, or changing the method to return an array or collection that can be empty.
--Charles Miller
Read the rest in The Fishbowl: Return to the Planet of the Exceptions
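Miller's fail-fast and null-object points can be sketched in Java. Everything below (the class, the method names, the sample data) is invented for illustration; it is not code from the original article:

```java
import java.util.Collections;
import java.util.List;

public class FailFast {

    // Disguised error code: a caller can forget the null check, and the
    // eventual NullPointerException blows up far from the root cause.
    static String findNicknameOrNull(String user) {
        return "alice".equals(user) ? "Al" : null;
    }

    // Fail-fast alternative: an unexpected absence stops the operation
    // immediately, right where the problem actually is.
    static String findNickname(String user) {
        String nickname = findNicknameOrNull(user);
        if (nickname == null) {
            throw new IllegalArgumentException("no nickname for " + user);
        }
        return nickname;
    }

    // Empty-collection refactoring: when "nothing there" is an expected,
    // normal outcome, return an empty list the caller can iterate safely.
    static List<String> findAliases(String user) {
        return "alice".equals(user)
                ? List.of("Al", "Ally")
                : Collections.emptyList();
    }
}
```

The point of the third method is that an empty collection is not an error signal at all, so no caller ever needs a special case for it.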
Silicon Valley is not America. It is not a mirror of the country. It is a very international place.
--Alex Vieux
Read the rest in An Optimist Aims to Revive Red Herring
In a wider context, the debate is not so much about homosexuality as such, but authority within the church and about the Bible's place in Christian belief. Increasingly at present it is being used as a symbol of orthodoxy, wielded as a clinching argument about who is right and so deserves control of worship. With all this going on, it is perhaps not surprising therefore that there is much less engagement and debate about what the Bible actually says.
There are essentially six main passages, three in the Old Testament, three in the New, which deal with the issue - in itself possibly a sign that it was not a central preoccupation of the original authors, whose writings spanned several hundred years and accordingly different cultural norms. It has been rightly pointed out in the current argument that homosexuality is not the only human practice which is condemned and that others the Biblical writers thought were wicked have now been accepted. This leaves a question mark over what it is about homosexuality that is unchangeably bad when practices such as divorce, lending money, eating shellfish, wearing a mixture of fabrics, cross-breeding livestock and sowing mixed seed in fields have long since become acceptable and tolerated.
--Stephen Bates
Read the rest in Guardian Unlimited | Special reports | Mixed messages
Java has been a technology success, a so-so branding effort, and a financial failure.
--Steven Milunovich, Merrill Lynch
Read the rest in Merrill to Sun: 'Cut and Focus' or Be Acquired.
--Greg Papadopoulos, Chief Technology Officer, Sun Microsystems
Read the rest in On the hot seat at Sun | CNET.
--Thomas Goetz
Read the rest in Wired 11.11: Open Source Everywhere
Java has more implementations of specs you don't need, Perl has more implementations of things you need that aren't specs
--Robin Berjon on xml-dev mailing list, Wednesday, 15 Oct 2003
More interesting and less organized is the bootleg or pirated software scene. In much of Kuala Lumpur, everything you'd ever want is available for $1 a disc. Some elaborate discs cost around $3. The products you can get include Windows, Office XP, all the Adobe products, and more. The locals will tell you flat out that they cannot afford expensive software, and then they tend to go off on anti-Microsoft rants. I've thought about this and am totally convinced that the piracy is tolerated because it keeps users on the Microsoft teat even though the illegal copies generate no income for legitimate publishers. The approach is like fighting a forest fire with a backfire. In this case, the forest fire is Linux. As long as Southeast Asia and China can get Microsoft Office XP for $1, they are not about to switch to Linux anytime soon. Stop the bootlegging, and then economics alone will turn the whole area over to Linux in the blink of an eye.
--John Dvorak
Read the rest in New York Times: NYT HomePage
There are serious charges laid out in the case against Senior Airman Ahmad I. al-Halabi, the Air Force translator at the Guantanamo prison camp -- among them espionage, punishable by death. But the charge that stands out is unlawfully delivering baklava to detainees. Apparently, al-Halabi was being nice to these people. Apparently, he liked some of them. And this, in the eyes of military prosecutors, stands as damning evidence. Al-Halabi showed sympathy for the Devil.
--Ted Conover
Read the rest in Ministering to the Enemy
I'm certainly more and more to the conclusion that Iraq has, as they maintained, destroyed all, almost, of what they had in the summer of 1991. The more time that has passed, the more I think it's unlikely that anything will be found.
--Hans Blix
Read the rest in Guardian Unlimited | Special reports | Iraq dumped WMDs years ago, says Blix
Bottom line, thanks to the powerful tools (or should I say weapons) that Microsoft has built into their products, criminals now dominate the Internet. Common citizens don't feel safe anymore. They fear that their thousand dollar computer investment will be destroyed by these criminals, and due to the increasing unusability of the Internet, in many respects they already have been. I hate to say it, but maybe these terrorists have won.
In their full page ad, Microsoft provides three "simple" steps to protect your PC. I'd like to propose a different solution - a single step solution:
Either buy a Mac, or switch to Linux.
--Russ McGuire
Read the rest in WorldNetDaily: How Microsoft fuels Internet terrorism.
--James Gosling
Read the rest in Failure and Exceptions.
--Anders Hejlsberg
Read the rest in Versioning, Virtual, and Override
People still don't recognize the scope of what we have to do. You can't simply write a new, multimillion-line program in C and expect it to be reliable unless you're willing to work on it for 20 years. It takes such a long time because that language doesn't support the easy detection of the kinds of flaws most viruses exploit to bring down systems. Instead, you need to use a programming language with solid rules so that you can have the software equivalent of chemistry: the predictable interaction of code as it runs. But on the network, where part of the software works here and part of it works there, programs also behave in emergent ways that are more biological and difficult to predict. So until you have a science of doing distributed computing, software developers will continue to just throw stuff out there. That's why the Net is not going to be secure.
Also, distributed software systems have to be a lot simpler than they are now for us to have any hope of understanding even the mechanistic consequences, much less the nonlinear, biological consequences. You may not want to print this, but why have we been so fortunate that no one has done a Sobig virus that wipes your hard disk clean? It's just one more line of code. Just one line.
That said, I suspect some of these virus writers never expected their bugs to replicate quite the way they did. The fact that a virus goes hypercritical doesn't necessarily mean it was intended to. You could take a loop of code that is perfectly functional and add or delete a single character and unintentionally turn it into an exponential. Then again, perhaps they were just curious what would happen.
--Bill Joy
Read the rest in Fortune.com - Technology - Joy After Sun
I'm not sure how much I trust OptimizeIt (etc) anymore now that HotSpot has come along. It's great for finding possible problem areas but not very good at giving accurate timings - i.e. I take its findings with a large grain of salt. Doing further tests is the right way to go.
--Alex Rosen on the jdom-interest mailing list, Wednesday, 28 May 2003
Javalobby is the Java equivalent of the National Enquirer and should be taken as seriously as you would that fine publication.
--Rob Ross on the java-dev mailing list, Wednesday, 1 Oct 2003
it's not realistic to say that we can just delete our spam. The volumes are way, way, too high for that. On my tiny network with only a few dozen users, I've gotten as much as 150,000 spams in a single day. I've been able to deal with it, but I have a lot more technical background than a typical system manager and the costs in my time and equipment upgrades are substantial. (I'm spending about $1000 to upgrade the server where the CBP list is hosted, entirely due to increases in spam.) I'm a little ahead of the spam curve, since I have a widely published address that hasn't changed in a decade, but as I've watched the spam increase since the mid 1990s everyone else has tracked up the curve behind us spam leaders, so if you're not getting 10 times as much spam as real mail this year, you will shortly.
--John R. Levine on the Computer Book Publishing mailing list, 30 Sep 2003
Half the security problems in MS Windows are caused by MS Office attachments with MS VBA scripts that MS Outlook opens. When UNIX has 90% of the desktop market, it will all be different systems with different vulnerabilities, and the number of different mail agents is another order of magnitude higher. The worm that works on my FreeBSD on Intel can't get your Solaris on Sparc. There's value in diversity and multiculturalism, and it's the same for biological ecosystems, investment banking, human societies and computer software.
--K. Ari Krupnikov on the xml-dev mailing list, 01 Oct 2003
--Bruce Eckel
Read the rest in Python and the Programmer
President Bush and Tom DeLay put the interests of the energy companies before the interests of the American people by insisting we drill in A.N.W.R. and other environmentally sensitive areas rather than modernize our energy system.
--Nancy Pelosi
Read the rest in After 2 Years, Energy Bill Is Getting New Urgency in Congress
Jayson Blair plagiarized and fabricated, and that's awful. Nobody can deny that. But there are other stories in which other journalistic sins are more serious. Take the business media during the whole bubble. They subjected few of these analysts and spokesmen and CEOs to any sort of scrutiny and were more cheerleaders than investigative journalists -- or even impartial journalists. That journalistic sin, if you want to use that term, had a greater consequence than Blair's.
--John Allen Paulos
Read the rest in Mercury News | 06/08/2003 | Math professor learned lesson from losing on WorldCom.
If a U.S. employer said out loud, "Gosh, we have a lot of 50-something engineers who are going to kill us with their retirement benefits so we'd better get rid of a few thousand," they would be violating a long list of labor and civil rights laws. But if they say, "Our cost of doing business in the U.S. is too high, so we'll be moving a few thousand jobs to India," that's just fine -- even though it means exactly the same thing.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
You have to design it so that bad things don't happen when programmers make mistakes
--Bill Joy
Read the rest in To Fix Software Flaws, Microsoft Invites Attack
Those who believe in the supernatural in any form should not be trusted to make life-and-death decisions about other people's lives. When I'm standing there as the accused, I want you -- an atheist -- in that jury box, not someone whose world is populated with capricious, vengeful, imaginary beings.
--Teller
Read the rest in Showtime - Penn & Teller: Bullshit! - community.
--Greg Papadopoulos, Chief Technology Officer, Sun Microsystems
Read the rest in On the hot seat at Sun | CNET.com.
--Joel Spolsky
Read the rest in Joel on Software
I'm very happy to see this report, and I think it validates our work. But my concern remains that Maryland, instead of responding with a sense of urgency, seems to be looking for ways to move ahead with Diebold despite this report. The Maryland plan of action is seriously out of whack with the SAIC risk assessment. This is a system with serious problems. I would expect them to suspend plans to use the Diebold machines until SAIC releases a report that says the system is safe to use.
--Avi Rubin, Johns Hopkins University
Read the rest in Wired News: Maryland: E-Voting Passes Muster
If a programmer is attacking a truly difficult problem he or she will generally have to use a language with systems programming and dynamic type extension capability, such as Lisp. This corresponds to the situation in which my friend, the proud owner of an original-style Hummer, got stuck in the sand on his first off-road excursion; an SUV can't handle a true off-road adventure for which a tracked vehicle is required.
--Philip Greenspun
Read the rest in Philip Greenspun's Weblog:
making it work is the first priority - efficiency can come later. ever experienced a project fail because everyone's worried about efficiency, speed, user interface, etc but forgotten about making it work?
--Rick Marshall on the xml-dev mailing list, Sunday, 21 Sep 2003
Official certification is just marketing BS. Passing the certification test doesn't mean the server actually supports the spec. The tests are simply too simplistic. JBoss routinely catches flaws in the other servers we use. Whether JBoss gets certified is purely political. The BEAs and IBMs don't want to cheapen the certification by allowing a free offering to get certified without paying big bucks for the privilege. Jonas isn't certified either and won't be unless the Objectweb or someone else puts up $100K or more for the certification process. The JBoss Group has decided the certification is a marketing label they don't need. Since JBoss had 2 million downloads last year and 1.5+ million so far this year, I tend to agree.
--Victor Langelo on the java-dev mailing list, Thursday, 26 Jun 2003
When I go into the field, I have a copy of the Koran and next to it a copy of the U.S. Constitution.
--Captain James. J. Yee
Read the rest in Military confirms Muslim chaplain had secret papers - The Washington Times: Nation/Politics.
--Neal Stephenson
Read the rest in Wired 11.09: Neal Stephenson Rewrites History
My partner and I were looking at some generated code the other day trying to figure out what it did. A super class required knowledge of the subclass being used. I said that I bet they're using reflection to go down the call stack to figure out which class is calling this. Instead, in the generated code they could have just put the class name. Now it's very clever to use the Java Security Manager to do all this, but it cost us half an hour instead of doing something blindingly simple like putting the name of the class right there.
So, simplicity is about acknowledging the tricks exist but not using them. Wouldn't it be cool if we could use the Java Security Manager here? Yeah we could, but let's just put the name of the class. I used to feel proud of myself when I used something no one else knew about. Now I'm disappointed -- I apologize when I can't think of a simpler way.
--Kent Beck
Read the rest in Working smarter, not harder: An interview with Kent Beck
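Beck's contrast between the clever route and the blindingly simple one can be sketched in Java. Both methods below are hypothetical, invented to illustrate the anecdote:

```java
public class Simplicity {

    // The "clever" route: walk the call stack at runtime to discover
    // which class is calling. It works, but a reader loses half an hour
    // decoding why the generated code does this.
    static String callerClassViaReflection() {
        StackTraceElement[] stack = new Throwable().getStackTrace();
        // stack[0] is this method; stack[1] is whoever called it
        return stack[1].getClassName();
    }

    // The blindingly simple route: since the code is generated anyway,
    // the generator can just emit the class name as a literal.
    static String callerClassSpelledOut() {
        return "Simplicity";
    }
}
```

Both return a class name; only one of them requires the reader to know how Java stack traces work to understand the program.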
JetBlue has assaulted the privacy of 5 million of its customers. Anyone who flew JetBlue before September 2002 should be aware and very scared that there is a dossier on them.
--Bill Scannell
Read the rest in Wired News: JetBlue Shared Passenger Data.
--Martin Fowler
Read the rest in Flexibility and Complexity
One of the design principles behind Java is that I don't care much about how long it takes to slap together something that kinda works. The real measure is how long it takes to write something solid. Lots of studies have been done on developer productivity, and Java beats C and C++ by a factor of 2.
--James Gosling on the java-dev mailing list, Thursday, 31 Jul 2003
One of the problems with making security tradeoffs is that there are many overlapping security concerns. The Patriot Act has given the government and police unprecedented powers. Many of these powers are Draconian and fly directly in the face of a free society.
Of course, if you assume that the government and the police are 100% benevolent and good, there's no reason not to give them ultimate power. But history shows, in this country and abroad, both that power corrupts and that even an honest organization invariably includes a dishonest few.
It's the very freedom and openness and rule of law that has made the U.S. such a safe place to live, and it's a bad tradeoff to give some of that up for a tiny bit of increased security. If the Patriot Act made us considerably more secure, it might be a good tradeoff. But we're giving up a lot -- and not getting very much in return.
I spend a lot of time on this concept in my book: It's not only whether a security countermeasure is effective, it's whether it's worth it. It makes no sense to buy a $10 lock to protect a $1 rock, even if that $10 lock provides effective security.
--Bruce Schneier
Read the rest in BW Online | September 2, 2003 | "We've Made Bad Security Tradeoffs"
...alarming: subsidies in advanced countries exceed the total income of sub-Saharan Africa; the average European subsidy per cow matches the $2 per day poverty level on which billions of people barely subsist; America's $4bn cotton subsidies to 25,000 well-off farmers bring misery to 10 million African farmers and more than offset the US's miserly aid to some of the affected countries. Although both Europe and America accuse each other of unfair agricultural policies, neither side seems willing to make major concessions.
--Joseph Stiglitz
Read the rest in Guardian Unlimited | Special reports | Joseph Stiglitz: the Cancun WTO talks?
--Ian Goldberg
Read the rest in Dell's Software License Policy: Dude, you're getting screwed.
--John Sulston
Read the rest in Wired 11.06: View
I'll put in a vote for jEdit. It's quite different from the other IDEs, because it *isn't* an IDE. It is, first and foremost, a really excellent text editor. It's fast. It's easy to use. It has lots of advanced features built in, like templates, word completion, and a tightly integrated macro language. It's amazingly configurable, so you can make it work exactly the way you want it to.
--Peter Eastman on the java-dev mailing list, Friday, 9 May 2003
The first really heavy geek I saw with a Mac was Rohit Khare. After I'd taken the leap, I discovered that Tim Berners-Lee, James Gosling, Roy Fielding, Tim O'Reilly, and a lot of RHGs from the Open Source and Web Technology worlds were already in OS X-land. I'm not sure this means that Macs Are The Future, or that I Will Score With Hot Babes, but still it's nice to be in good company.
--Tim Bray
Read the rest in ongoing - iYear
Long ago, before Java, there was C. I developed a really nice application based on Informix (when it was barely a relational database) for a customer. When running the application (under DOS :<), it would randomly crash!!! My customer was not happy.
The problem, a memory allocation bug, took weeks to track down and was in the Informix code itself. Well, I turned the assembly code into nice C code -- it was a very short module that had the bug, fixed the bug, compiled it, removed the offending module from the library and added in my fixed module. Everything worked great after that!!
My next step as a good member of society was to send all the details to Informix, Inc. I expected, perhaps, a thank you. I had found and documented and fixed a very significant bug that was very difficult to track down. Instead, I got a notice that I was in violation of their license, which prohibited decompiling. If asked, I recommended against using Informix after that.
--Harry Keller on the java-dev mailing list, Tuesday, 2 Sep 2003
This is another buffer overflow bug. (Somebody remind me. Didn't Microsoft perform a month-long security lockdown and code review, specifically aimed at buffer overflows and other common security holes, about a year ago? Hundreds of millions of dollars, if memory serves. Hmmmmm...)
--Woody Leonhard on the Woody's OFFICE Watch mailing list, Thursday, 04 Sep 2003
If you haven't used regular expressions before, they do look a bit cryptic, but they're amazingly powerful. This is one new API that's definitely worth learning.
StringTokenizer is pretty much obsolete at this point.
--Joshua Bloch
Read the rest in Java Puzzlers
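A minimal comparison of the two APIs Bloch mentions; the sample input and the class name are invented for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.StringTokenizer;
import java.util.regex.Pattern;

public class Tokens {

    // Old style: StringTokenizer can only split on a fixed set of
    // single delimiter characters, and needs a hand-rolled loop.
    static List<String> withTokenizer(String s) {
        StringTokenizer st = new StringTokenizer(s, ", ");
        List<String> out = new ArrayList<>();
        while (st.hasMoreTokens()) {
            out.add(st.nextToken());
        }
        return out;
    }

    // Regex style: "a comma followed by any amount of whitespace"
    // expressed directly, in one line.
    static List<String> withRegex(String s) {
        return Arrays.asList(Pattern.compile(",\\s*").split(s));
    }
}
```

For anything beyond fixed delimiter characters (optional whitespace, multi-character separators, capture groups) the regex version keeps working while the tokenizer version has to be rewritten.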
Technology companies often have weak or nonexistent warranties for their products. As a consumer or even a large business entity, we have little recourse (except to vote with our wallets next time around). I'd like to see an insurance service developed in which businesses and consumers could buy defect insurance as an optional part of the technology purchase price.
When a product is discovered to be defective (including software), the insurance would pay for somebody to fix the problem or replace the defective product. It wouldn't take long for the insurance companies to figure out which companies make good products and which ones don't. In turn, this would drive the price of insurance either higher or lower, based on real data.
Right now, almost 100 percent of the economic loss that occurs because of bad software products is borne by the purchaser, not the technology company. A change in this situation would drive some real metrics into the process and finally force the technology industry to come to grips with an important issue--quality.
--Tony Scott
Read the rest in Laments of an IT buyer | CNET News.com
There's a reason this kind of thing doesn't happen with automobiles. When Firestone produces a tire with a systemic flaw, they're liable. When Microsoft produces an operating system with two systemic flaws per week, they're not liable
--Bruce Schneier, chief technical officer at Counterpane Internet Security
Read the rest in Digital Vandalism Spurs a Call for Oversight
C-SPAN and the internet.
--U.S. Representative Ron Paul, Republican, Texas
Read the rest in Neo-CONNED!
When you install a linux distribution, the kernel and associated libraries are a couple of percentage points of the code. The FSF utilities ditto. No one contributing team or organization is more than about 5%.
This is why I find Richard Stallman's insistence on calling it Gnu/Linux--to the point of harassing speakers at conferences who won't use his term--to be so offensive. If it's not Linux (a convenience term with historical meaning), it ought to be ATT-Berkeley-GNU-MIT-Digital-SGI-HP-Sun-Apache-...-Linux or some such other idiocy. If you had to shorten it to only two names, like law or accounting firms do after many mergers, I'd say it ought to be called BSD-Linux, since BSD has pride of place over GNU in its proximate origins.
--Tim O'Reilly on the cbp mailing list, Monday, 25 Aug 2003
If we walk down the path of 100 percent computerized, paperless voting, we surrender the "keys to the kingdom" to a handful of private companies who use proprietary software to run elections.
--Kim Alexander, president of the California Voter Foundation
Read the rest in Wired News: No Consensus on Voting Machines
It's the same damn thing. They didn't learn a thing. We had nine O-rings fail, and they flew. These guys had seven pieces of foam hit, and it still flew.
--General Donald Kutyna
Read the rest in Inertia and Indecision at NASA.
--Anders Hejlsberg
Read the rest in The Trouble with Checked Exceptions
Some of us olde fartes remember that for the first two decades of the computer era, open source software was the only kind there was. When ADR started selling Autoflow in the early 1960s, the idea of charging for packaged software was quite peculiar. Until John Banzhaf persuaded the copyright office to register a program of his in 1964, the presumption was that software was neither copyrightable nor patentable. Nonetheless, some rather impressive software got written.
--John R. Levine on the cbp mailing list, 18 Aug 2003.
The usual theory has been that Windows gets all the attacks because almost everybody uses it. But millions of people do use Mac OS X and Linux, a sufficiently big market for plenty of legitimate software developers -- so why do the authors of viruses and worms rarely take aim at either system?
Even."
--Rob Pegoraro
Read the rest in Microsoft Windows: Insecure by Design (TechNews.com)
I always wanted to buy a TiVo, but I thought it would be pretty cool to build my own. Trouble is, I definitely don't watch as much TV as I did before I started this project. It's more fun to work on it than it is to watch TV.
-- Isaac Richards, MythTV
Read the rest in Wired News: Building a TiVo, a Step at a Time
...I thought I was OK; I buy computers with licensed software. But my lawyer told me it could be pretty bad.
The BSA had a program back then called "Nail Your Boss," where they encouraged disgruntled employees to report on their company...and that's what happened to us. Anyways, they basically shut us down...We were out of compliance I figure by about 8 percent (out of 72 desktops).
--Sterling Ball
Read the rest in Tech News - CNET.com
They are smoking crack. Their slides said there are.
--Linus Torvalds
Read the rest in Torvalds Slams SCO
Most of the ISPs are good to their word and are fighting it very, very hard.
--Steve Linford, Spamhaus
Read the rest in Who profits from spam? Surprise
Microsoft's sloppy code and arrogance is coming home to roost. Anyone that remembers Service Pack 6 will never let MS perform automatic updates on a system. SP6 was released and it promptly blew away thousands of servers, and there was not any recovery method other than a reinstall. Which is why Service Pack 6 now is SP6A.
--Mike Sweeney
Read the rest in Wired News: Geeks Grapple With Virus Invasion
Let's show the American people that we can solve the problem that they saw last Thursday. They should break off the electricity issues from the other very controversial portions of the energy bill that could take months to resolve
--Representative Edward J. Markey, Democrat of Massachusetts
Read the rest in Passage Unlikely for Separate Bill on Electrical Grid.
--Robert Kuttner
Read the rest in An Industry Trapped by a Theory
This event underscores the need to reduce the overload on the system, and there are other ways to do it besides building new transmission capacity. There are elegant ways of doing it, such as electronic controls that allow the system to carry more power safely, or increasing standards for efficiency of air conditioners, which consume a third of the peak demand. The challenge is picking the best solution.
--Ralph Cavanagh, Natural Resources Defense Council
Read the rest in Warnings Long Ignored on Aging Electric System
What's lacking in this deregulated world is someone to take responsibility. No one is responsible for beefing up the system or building power plants. We're talking about the lifeblood of our economy, and there has to be a sense of legal responsibility for keeping the lights on, and that's what we lost with deregulation.
--David Freeman, chairman of the California Power Authority
Read the rest in Which Party Gets the Blame? They Agree: It's the Other One
It's hard to decide what's more pathetic: scripting an electronic ballot stuffer for a trivial on-line poll of tech-CEO popularity, or creating a trivial on-line poll that begs to be abused.
--Thomas C Greene
Read the rest in The Register
When a company says its web site "doesn't support the Mac", it means that the site uses JavaScript and the developers don't want to bother testing the scripts on a Mac. (JavaScript is notorious for having incompatibilities between browsers and even between multiple versions of the same browser.) Many web sites use far more JavaScript than is necessary which makes testing the site on multiple browsers unnecessarily difficult. So when a company says "our web site doesn't support the Mac", they mean "our company doesn't understand how to write software correctly". Unfortunately, nothing will change until the people paying for the web sites' development understand this.
--Erik Hanson
Read the rest in MacInTouch Home Page: Apple/Macintosh news, information and analysis
It seems the nation's election officials aren't open to input from anyone but the industries that are wining and dining them to buy their equipment.
--Rebecca Mercuri, Bryn Mawr College
Read the rest in DenverPost.com - Politics.
--Brian Kernighan
Read the rest in Interview with Brian Kernighan
The typical CALEA installation on a Siemens EWSD or a Lucent 5E or a Nortel DMS 500 runs on a Sun workstation sitting in the machine room down at the phone company. The workstation is password protected, but it typically doesn't run Secure Solaris. It often does not lie behind a firewall. Heck, it usually doesn't even lie behind a door. It has a direct connection to the Internet because, believe it or not, that is how the wiretap data is collected and transmitted. And by just about any measure, that workstation doesn't meet federal standards for evidence integrity.
And it can be hacked.
And it has been.
Israeli companies, spies, and gangsters have hacked CALEA for fun and profit, as have the Russians and probably others, too. They have used our own system of electronic wiretaps to wiretap US, because you see that's the problem: CALEA works for anyone who knows how to run it. Not all smart programmers are Americans or wear white hats. We should know that by now. CALEA has probably given up as much information as it has gathered. Part of this is attributable to poor design and execution, part to pure laziness, part to the impossibility of keeping such a complex yet accessible system totally secure, and part because hey, they're cops, they're good guys. Give 'em a break. Have a donut.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
I designed Java so I could write more code in less time and have it be way more reliable. In the past I've wasted huge numbers of hours chasing down memory smashes and all the other time wasters that are so typical of what happens when writing C code. I wanted to spend time writing code, not debugging. Life is too short for debugging. All of those little "limitations" turn out to be things that make coding faster and debugging vanish.
For example, to damn hard.
--James Gosling on the java-dev mailing list, Thursday, 31 Jul 2003
I don't feel I have that much of a different computer than I had 10 years ago. I was teaching classes, the Internet was coming in. We were doing video editing. It was expensive back then. Five years ago, I did exactly what I do now. I don't feel that how you live life is changing that greatly. OK, now instead of storing something on CDs, I store it on DVDs.
--Steve Wozniak
Read the rest in sunspot.net - plugged in
the tendency of hotels and car-rental companies to disguise or omit mandatory additional charges, which boost the real price, is more than annoying. It's deceptive.
--Dan Gillmor
Read the rest in Words of experience on the new world of travel
If your Java code eats XML, consider XOM as your very own shark
--Rogers Cadenhead
Read the rest in Linux Magazine | March 2003 | FEATURES | Java XOM: XML Made Simpler
The really hard thing for big companies is to listen. Microsoft has been spectacularly bad at this; it's only now beginning to sink in over at the Windows group that all system management activities, yes all of them, have to be scriptable or they're just not usable in enterprise server deployments. Unix geeks have been saying this for years, and if Windows had been fully scriptable five years or so ago, I bet they'd have at least twice the server market share, relative to Linux, that they do now. Now it's probably too late for Redmond to win it back, because they'd need to be a lot better than the server-side competition, and they're just not.
--Tim Bray
Read the rest in ongoing - iYear
Sun Microsystems is becoming a little like the Red Queen from Alice in Wonderland -- running as fast as it can just to stand still.
--Dean Takahashi
Read the rest in Mercury News | 07/23/2003 | Sun breaks even for quarter
What's striking about the Munich deal is the use of Linux on the desktop. It's a threat to Microsoft's real source of strength, the desktop, where it has no competition and is used to winning all sorts of battles
--Paul DeGroot, Directions
Read the rest in Linux took on Microsoft, and won big in Munich Victory could be a huge step in climb by up-and-comer
Floating point arithmetic is tricky. Floating point numbers aren't the same as the real numbers that you learned about in school. Be very careful when you're working with floating point, and never use it when integer arithmetic will do.
--Joshua Bloch
Read the rest in Java Puzzlers
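Bloch's warning is easy to demonstrate. Here is a minimal Python sketch (the numbers are illustrative, not taken from the book): the binary doubles behind 0.1 and 0.2 are only approximations, so their sum isn't 0.3, while the same money calculation done in integer cents is exact.

```python
# 0.1 and 0.2 have no exact binary representation, so their sum drifts.
subtotal = 0.1 + 0.2
print(subtotal == 0.3)   # False
print(subtotal)          # 0.30000000000000004

# For quantities like money, integer arithmetic (in cents) stays exact.
price_cents = 103 - 42   # $1.03 - $0.42
print(price_cents)       # 61
```

The same trap exists in Java, JavaScript, and every other language using IEEE 754 doubles; the fix is the same everywhere: use integers (or a decimal type) when exactness matters.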
I will never understand why marketing people haven't learned how to talk to geeks after the decades since computer conferences have been going on. Scripted pseudo-conversations, for example, really don't work -- they're just inane. The fact that they're pre-scripted makes them inane. There's just no way around it. It's not like the two Sun engineers showing off Rave were making this demo up.
Is a Web Service possibly available on the entire World Wide Web? Thanks for saying so, I wasn't sure! She wanted three components? Which ones? Wow, what a surprise! She would like to add a column to the database, can he do that? Why, yes he can! Would adding some buttons be good? Why yes, she thinks so!
--Ken Arnold
Read the rest in Continuing to wait for Gosling...
this whole P2P explosion began the moment one decent online trading site - Napster - opened its doors. Had the labels been pouring money into technology research and development instead of developing complicated CD packaging and promoting shill artists then they might have stood a fighting chance in this war. Instead, the pigopolist mob was caught with its pants down and is now trying to play catch-up in the courts.
--Ashlee Vance
Read the rest in The Register
The EFF's position on spam filters is: "Any measure for stopping spam must ensure that all non-spam messages reach their intended recipients." It's a laudable goal, but one that's very difficult to implement in practice. Newsletters like Crypto-Gram are problematic. I know that everyone who gets my newsletter has subscribed, but how does any filter know that? I send 80,000 of these out every month; the only difference between me and a spammer is that my recipients asked to receive this e-mail. But I'm sure that some of my recipients don't remember subscribing. To them, Crypto-Gram is unsolicited e-mail: spam.
Despite my personal difficulties with sending out Crypto-Gram, I have a lot of sympathy for spam filters. There's a lot of "throwing the baby out with the bathwater" going on, but the bathwater is so foul that many companies don't mind the occasional loss of baby. The spam problem is so bad that draconian solutions are the only workable ones right now.
--Bruce Schneier
Read the rest in Counterpane: Crypto-Gram: July 15, 2003
Mozilla has consistently offered users the features, performance and innovation instrumental to the evolution of the Internet.
--Curtis Sasaki, Vice President, Engineering, Desktop Solutions, Sun Microsystems
Read the rest in Mozilla Foundation Announcement
this is a point which newspaper reporters do not understand. Why do most home users of IBM style peecees run a Microsoft OS? There is only one reason: The computer they bought from Dell comes with a Microsoft OS installed, and there is no other OS installed. This is easy to demonstrate: Imagine the computer came with a Debian GNU/Linux OS installed and no other. Clearly most home users will never install another OS, so these home users will run Debian on their machines until the hardware fails. And this is why most home users run Microsoft today. No choice of the home user accounts for this, but the decision of Michael Dell explains it.
--Jay Sulzberger on the WWWAC mailing list, Tuesday, 15 Jul 2003
I don't understand how anybody can put up with garbage like Visual Studio, especially when there are environments like CodeWarrior available that get out of your way and let you work.
--Chris Hanson on the java-dev mailing list, Tuesday, 24 Jun 2003
Read the rest in Shirky: A Group Is Its Own Worst Enemy.
--Thomas Powers
Read the rest in When Frontier Justice Becomes Foreign Policy
Whenever I write something vaguely critical of Java on my website, I get linked to from a Smalltalk weblog, saying "Look! We were right all along!"
--Charles Miller on the java-dev mailing list, Thursday, 10 Jul 2003
--Jim Gray
Read the rest in ACM Queue - Content
Who was the patriot in 1861? ... If, on the other hand, patriotism means devotion to a particular political idea, then clearly Grant was the patriot and Lee was not. That, in a sense, is part of the problem that we face even today.
--Walter Berns
Read the rest in The Changing Face of Patriotism.
--Bruce Eckel
Read the rest in Type Checking and Techie Control
apparently, the CIA has been tapping fiber optic cables in Baghdad, listening in on telephone conversations in efforts to track down Saddam. Most people think fiber can't be tapped, but here's how to do it (I wrote about this at least 10 years ago). Strip the plastic casing off a couple inches of the fiber bundle, being careful not to damage the glass. Bend the fiber back on itself in a very tight loop. At that place where the bend in the fiber is sharpest, the internal reflective ability of the fiber is compromised enough for a little light to leak out (called "conductive emission" in the spy biz). That's where you put your detector. This is remarkably easy to do, yet we think of fiber as being totally secure.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit.
--Martin Fowler
Read the rest in Tuning Performance and Process
flag-waving for personal and corporate profit has gotten so out of hand that last month, when the House of Representatives passed a constitutional amendment banning flag desecration for the umpteenth time, I for once found myself rooting for the Senate to follow suit. It would be fun to watch TV executives hauled on to Court TV. If NBC's post-9/11 decision to slap the flag on screen in the shape of its trademarked peacock wasn't flag desecration, what is?
--Frank Rich
Read the rest in Had Enough of the Flag Yet?
If you look at DOS, or maybe compilers, one thing that happened with Microsoft was that these small upstarts came out and had cheaper compilers. DOS was also cheap and it undercut the competition. They never had a competitor like themselves. Then comes somebody who undercuts them and they start acting exactly how all of their competitors acted. If you look at how Unix vendors acted toward Microsoft, they were belittling Microsoft. They were saying yes we're more expensive but we're better and we give better support. Whether that was true or not was not the point. The reaction to somebody coming in and undercutting you is for Microsoft exactly the same as the failure mode for their competitors. Microsoft is on the receiving end of this undercutting.
--Linus Torvalds
Read the rest in Silicon Valley
Linux is the current OS competition, but it's no more threatening than OS/2. Remember OS/2?
--Bill Gates
Read the rest in USATODAY.com - Gates on Linux
The computer security industry is a media circus. It's filled with clowns who want to siphon billions of dollars of counterterrorism funds so the Keystone Cops can shield us from Osama bin Virus. Prostitute pundits stand fearlessly on the corners of New York City and compare "cyberterrorism" to real terrorism. They stand fearlessly on the corners of Washington, D.C. and compare 'cyberwar' to real war. They pull numbers out of thin air and tell whoppers with a perfectly straight face. They want us to blame everything but them when they fail to do what we pay them for.
--Rob Rosenberger
Read the rest in Wired News: Vmyths Hovering at Death's Door
Good thread design minimizes the interactions between threads, and thus the need for any synchronization at all. Synchronization is needed only when two threads could end up using the same resource [variable, file, whatever] at the same time. If the design is such that only one thread *can* use resource X at any given time (usually, because it's the only thread that even has access to that resource), there are no potential conflicts, and so no need to arbitrate them.
--Glen Fisher on the java-dev mailing list, Monday, 23 Jun 2003
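Fisher's point can be sketched with a confinement pattern (this is an illustration in Python, not code from the thread): each worker thread owns its own input chunk and its own output slot, so no resource is ever reachable by two threads at once and no lock is needed.

```python
import threading

def sum_chunk(chunk, results, slot):
    # This thread reads only its own chunk and writes only its own slot,
    # so no two threads ever use the same resource at the same time and
    # no synchronization is required.
    total = 0
    for x in chunk:
        total += x
    results[slot] = total

data = list(range(1000))
chunks = [data[:500], data[500:]]
results = [0] * len(chunks)
threads = [threading.Thread(target=sum_chunk, args=(c, results, i))
           for i, c in enumerate(chunks)]
for t in threads:
    t.start()
for t in threads:
    t.join()   # after join, the main thread is the sole owner again

print(sum(results))   # 499500
```

The design work happens before any code runs: by partitioning the data up front, the need to arbitrate access disappears instead of being managed.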
Because the district court was unable to find immediate irreparable harm and because it entered a preliminary injunction that does not aid or protect the court’s ability to enter final relief on Sun’s PC operating-systems monopolization claim, we vacate the mandatory preliminary injunction. With respect to the preliminary injunction prohibiting Microsoft from distributing products that infringe Sun’s copyright interests, however, we conclude that the district court did not err in construing the scope of the license granted by Sun to Microsoft, nor did it abuse its discretion in entering the injunction. Accordingly, we affirm that preliminary injunction.
--U.S. Judge Paul Niemeyer
Read the rest in In Re: MICROSOFT CORPORATION ANTITRUST LITIGATION
A design is finished when there is nothing left to throw out.
--Ken Arnold
Read the rest in MacFixIt - MacHack 18 Opens with a Keynote Address from Ken Arnold
Dawn's tech complexity theorem goes like this: A device's hassle-factor can be instantly determined by counting the number of cords.
Coffee maker, alarm clock, microwave oven: one cord, no waiting.
Telephone, fax machine: sometimes two. No problem. Usually.
Home computers: five cords. More challenging. But the connections are color-coded, so all but the color-blind can handle this with aplomb.
TiVo: six cords (not counting the spools of coaxial cable beneath your TV set, and the extras you might need for improved reception). Clear all small children -- and anyone else offended by profanity -- from the room.
--Dawn C. Chmielewski
Read the rest in Silicon Valley
Apple wants to be the leader of the Digital Lifestyle pack. The digital lifestyle is all about the fluidity of bits, the fact that all computers on the Internet are, in some sense, in the same place, no matter where they're physically located.
--Cory Doctorow
Read the rest in Boing Boing: A Directory of Wonderful Things
I wish developers would consider the enormous consequences of their actions. When I got my driver's license at 16, I was both elated and terrified; I had newfound freedom and responsibilities to go with it. Now, compare that feeling to when Microsoft sends me a new operating system. Do I have the same feeling? No, I think it's going to screw up my life for months. For how many decades and for how many millions of people has that negative emotion been created around software? If our laptops degrade at half the pace as before, that isn't progress. Sucks less isn't progress. What would it be like if you bought new software and you had that sense of increased responsibilities but also of infinite vistas? Our ambitions are so, so small compared to the opportunity.
--Kent Beck
Read the rest in Working smarter, not harder: An interview with Kent Beck
The idea of a virtual community that "shares" music is a great idea. Unfortunately, that is not what is happening on P2P networks these days. Networks like Kazaa, Gnutella, iMesh, Grokster and Morpheus, among others, are encouraging and helping individuals to distribute perfect digital copies of music to millions of strangers simultaneously. Nobody is really "sharing" as we traditionally think of the term. Sharing involves lending something to somebody, and while it is on loan, the owner no longer has it. "Sharing" in the P2P context has become a euphemism for "copying." That copying is neither legal nor ethical.
--Matt Oppenheim, RIAA
Read the rest in Online NewsHour: Forum -- Copyright Conundrum
SCO is effectively trying to destroy both the UNIX and Linux markets. This makes no sense, but that is the logical result of their current efforts. The idea that 1,500 of America's largest companies will be forced to drop Linux and will do so in favor of SCO's UNIXware is ludicrous. Why would those companies spend big bucks buying licenses from SCO -- a company they are upset with -- when they can comply just as easily, and almost for free, by converting to one of the BSD variants? Only Microsoft has had success bullying customers into buying its operating systems and SCO is definitely not Microsoft. This behavior won't sell any software.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
JavaOne has two (sometimes conflicting) purposes -- it is the main annual technical conference for Java developers and project managers, but it is also a marketing vehicle for Sun, which has final control over which sessions will be presented. As a result, it's no surprise that JavaOne presents the vision of Java technology that Sun would like us all to share.
--Brian Goetz
Read the rest in JavaOne 2003: Less hype, more filling
My lawyer had to redo $5000 worth of work for me for free because a virus got into his PC and destroyed all the files. In 7 years my successful online business, which runs only on Macs (Mac OS X), has not had a single dollar lost to viruses. Tell me which is the cheaper computer to use? I recommend Mac to everyone except my competitors -- they should all stick to Windows.
--Peter Payne
Read the rest in Macintosh Justification
Standards have nothing to do with innovation; a good standard is what happens when an industry has basically shaken the bugs out of a technology and then, after the fact, writes it down. This is true of all the really successful standards: grams and meters, voltage, the calendar, octane ratings, TCP/IP, XML.
There have been attempts to innovate in standards space, let's see: ODA, HyTime, X.400. What, you've never heard of any of those? Exactly.
The one time I was in the room (for XML) what we did was take something that had been invented a decade before (SGML), fix up the internationalization, rationalize the error handling, and throw out the 90% that nobody ever used (we should have thrown out more).
--Tim Bray
Read the rest in ongoing - RSS and the S-word
The RIAA is the Recording Industry Association of America. It is not the Recording Industry and Artists Association of America. It says its concern is artists. That's true, in just the sense that a cattle rancher is concerned about its cattle.
Many, including I, doubt that the RIAA's actions actually benefit artists. They clearly benefit the relatively concentrated recording industry, which is fighting like hell to protect itself against new forms of competition. But there are many who believe that these new forms of competition -- if allowed to develop and mature -- would directly benefit artists.
Maybe not Madonna -- but it would certainly help the vast majority of artists who can barely scrape by under the existing system.
--Lawrence Lessig
Read the rest in Online NewsHour: Forum -- Copyright Conundrum.
--James Gosling
Read the rest in Analyze this!
it's real easy to see that every computer in the world's a Macintosh. There was a time when Windows wasn't Windows. They had Microsoft DOS, and DOS was lines you had to type. And all the business people in the world said [mocking traditional business executives]: "This is real strong computing. This is capable business computing. The Macintosh is a toy because it has graphics and pictures."
And the funny thing is, when they switched over -- Windows 95, Windows 98 -- now they've got a Macintosh, but you don't hear the business people saying: "Oh, we were wrong. That really is the right way to go. It really doesn't have anything to do with the strength of the machine, it only had to do with what we wanted to say because we were bigoted."
--Steve Wozniak
Read the rest in sunspot.net - plugged in.
--Gary Rivlin
Read the rest in Wired 11.07: McNealy's Last Stand
GNOME is aiming for simplicity and consistency; we're the first open source desktop project to have a documented set of human interface guidelines.
KDE has way more options (the clock properties dialog has five tabs!) and Windows migrants frequently find this confusing, especially people who work in offices. Also, KDE sort of "looks" like Windows, which people frequently find confusing, since it implies that it will act exactly like Windows, which it doesn't (we have partners who have done UI studies that confirm this).
--Nat Friedman
Read the rest in Interview with Ximian's Nat Friedman - OSNews.com
You have to understand one thing about the Republican party and its plutocratic allies. They do not place as much value on work, at least not real work, as they do on coupon-clipping and inheritance.
Income from a paycheck gets taxed now at a much higher rate, in many cases, than income from dividends and capital gains -- income that is grossly skewed in the top wealth classes. And if you're lucky enough to be born into a wealthy family, the income you receive by inheritance will soon be entirely tax-free.
One of these years the people -- the real people who work for their paychecks but can't afford to bribe members of Congress -- will realize what's been done to them. They will be very, very unhappy, and they'll respond accordingly.
Even mentioning this will attract the usual mindless jibes from right-wingers who believe rich kids' inheritances are more valuable to society than the sweat off their nannies' brows. All they'll be doing is restating their contempt for fairness in our society -- but that's nothing new, is it?
--Dan Gillmor
Read the rest in Silicon Valley - Dan Gillmor's eJournal - Rich Wage Class War on Poor and Middle Class
A smart, creative, experienced, determined attacker can find flaws in just about any standard commercial product. Our security evaluations find catastrophic problems more than half the time, even though evaluation projects generally have very limited budgets.
The most common situation is where the systems' security objectives could theoretically be met if the designers, implementers, and testers never made any errors. For example, in a quest for slightly better performance, operating systems put lots of complexity into the kernel and give device drivers free reign over the system. This approach would be great if engineers were infallible, but it's a recipe for trouble if all you have are human beings.
--Paul Kocher
Read the rest in Slashdot | Security Expert Paul Kocher Answers, In Detail.
--Dave Thomas
Read the rest in Building Adaptable Systems?
--Paul Boutin
Read the rest in Wired 11.07: Slammed!
Look at some very solidly crafted code, for example, the space shuttle. The cost per line of code for the space shuttle is something like a thousand dollars per line. It's so expensive because of the amount of care that goes into specifying the code, reviewing the code, the whole process they use. It is understandable that if you're shooting up billion dollar spacecraft with human lives at stake, you're going to put a little bit of care into that software. But everything has its cost.
The space program has had its share of bugs. Various Mars probes have flown off into the weeds. Rockets have crashed. But nevertheless the space program has a pretty good track record on software quality, but at tremendous cost. You can't spend a thousand dollars per line of code in a dot com or even most major corporations. You simply can't afford that.
People tend to think software is free, because it has no real-world presence. Software is not substantial like disk drives or automobiles—it is just people typing away at a keyboard. So, therefore, software must be free. But it's not.
--Andy Hunt
Read the rest in Good Enough Software.
--Bruce Eckel
Read the rest in Python and the Programmer
This whole RSS drama is turning into kindergarten-playground intrigue.
--Uche Ogbuji on the xml-dev mailing list, Monday, 09 Sep 2002
pull parsing is the way to go in the future. The first 3 XML parsers (Lark, NXP, and expat) all were event-driven because... er well that was 1996, can't exactly remember, seemed like a good idea at the time.
--Tim Bray on the xml-dev mailing list, Wednesday, 18 Sep 2002
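The difference Bray is pointing at can be shown in a few lines. In an event-driven (push) parser, the application registers callbacks and the parser drives the control flow; in a pull parser, the application drives the loop and asks for the next event when it wants one. A small sketch using Python's standard-library `XMLPullParser` (one possible pull API, chosen here only for illustration):

```python
from xml.etree.ElementTree import XMLPullParser

parser = XMLPullParser(events=("end",))
parser.feed("<quotes><quote author='Tim Bray'>pull parsing</quote></quotes>")
parser.close()

# The application owns the loop: it pulls events at its own pace instead
# of handing callbacks to the parser and being driven by it.
for event, elem in parser.read_events():
    if elem.tag == "quote":
        print(elem.get("author"), "said:", elem.text)
```

Keeping control flow in the application is what makes pull parsing pleasant to use: state that a SAX-style handler would have to stash in fields can live in ordinary local variables.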
I noticed that people are doing a lot of "googling" before a first date nowadays--this represents the real trend. Poindexter's doing this and DARPA (the Defense Advanced Research Projects Agency) allowed him to do it for the propaganda that someone's serious about cyberwar someplace. Googling is international. It's not just restricted to cranky Republicans who couldn't erase e-mail in their PROFS (Professional Office System). That's going to have more of an effect. It's difficult to escape a tragedy in your life that's not your own fault.
Years ago, if your husband died in a house fire, you could get a covered wagon and go to Oregon. Now, as soon as you arrive in Oregon, someone could google you. "Oh, well, widow Simpson. Really sorry to hear about the house fire."
You don't get to cut that chain of evidence and start over. You're always going to be pursued by your data shadow, which is forming from thousands and thousands of little leaks and tributaries of information.
--Bruce Sterling
Read the rest in Tech News - CNET.com
On the other hand, they're more sophisticated in some ways. They come in knowing how to program. So instead of teaching them to program, we teach them to solder.
--Gerald Sussman, MIT Matsushita Professor of Electrical Engineering
Read the rest in Working engineers show frosh the ropes.
--Robin Cook
Read the rest in News.
--Tony Benn
Read the rest in News.
--Dave Thomas
Read the rest in Orthogonality and the DRY Principle
Admittedly, you can find many of the cool features in other editors. Emacs, for example, includes every feature ever conceived by humans and many that weren't. I assume the latter group programmed the user interface, since the keystrokes seem to have been designed for creatures with four hands and fingers that bend differently than mine. Call me lazy, but I have a problem with things like Ctrl-X, Ctrl-S to save a file, which is typical of Emacs. Even though Emacs probably offers more add-in features than Jedit, I'd still use Jedit to avoid having to reconfigure the editor to use sane keystrokes. Nevertheless, since other full-featured editors like Emacs exist, you are probably justified in wanting to stick with what you already know and love. If you're even the least bit unsatisfied with what you're using, however, I strongly recommend you check out Jedit.
--Nicholas Petreley
Read the rest in Vive Java et Blackdown! - Jan 21, 2003.
--Ken Arnold
Read the rest in Taste and Aesthetics
Earlier this month, the medical journal Ophthalmology said the failure rate for eye surgery was one in 10, not the one in 1,000 figure widely advertised. With roughly 100,000 people having laser eye surgery each year, that would mean that 10,000 gained no benefit.
--Charles Arthur
Read the rest in News
Another point is that we hear a lot about agile methodologies these days, but not a lot about writing agile code. If you want to be able to keep up with rapid changes on a project, however, you have to make the code agile. You have to be able to make changes quickly. The XP folks say the way to do that is via refactoring. They recommend you always keep the code tidy and well factored enough that you can make needed changes fairly quickly. You can take the XP approach of refactoring the code, but if you pull details out of the code into metadata, you can make changes without even having to touch the code.
In addition, with metadata you have the added benefit that you can make changes out in the field to a system that's already been deployed. If a customer calls and tells you their MP3 player isn't working, you may be able to tell them to switch a parameter in a property file. The MP3 player will use a different decoder and algorithm and get around the bug. So the more metadata you have, the more flexibility you have. And flexibility translates into being agile.
--Andy Hunt
Read the rest in Abstraction and Detail
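Hunt's MP3-player scenario can be sketched in a few lines. Everything below is invented for illustration (the decoder names and config key are not from any real player): the algorithm choice lives in metadata, so a support call can fix a fielded system by editing one config value rather than shipping new code.

```python
import configparser

# Two interchangeable "decoders" -- names and behavior invented purely
# to illustrate the idea, not taken from any real MP3 player.
def decode_fast(frame):
    return frame.lower()

def decode_safe(frame):
    return frame.strip().lower()

DECODERS = {"fast": decode_fast, "safe": decode_safe}

# The choice lives in metadata. In the field, support can tell a user to
# flip this one value; the deployed code itself never changes.
config = configparser.ConfigParser()
config.read_string("[player]\ndecoder = safe\n")

decode = DECODERS[config["player"]["decoder"]]
print(decode("  MP3 FRAME  "))   # mp3 frame
```

Changing `decoder = safe` to `decoder = fast` swaps the algorithm at the next startup, which is exactly the out-in-the-field flexibility the quote describes.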
people who think that the GPL is the right license for Java source code think that the obligation to publish all modifications or additions would discourage predators but miss the fact that the "inheritance" characteristic of the GPL (that's the one that says if you combine non-GPL code with GPL code then the non-GPL code must become GPL) can also have a chilling effect on beneficial contributions from commercial entities.
Also BSD advocates believe that there's a fair amount of illegal misuse of GPL code that goes unnoticed. Sun contributes to projects under both types of licenses by the way. We've also written a license that tries to walk a middle ground.
The SISSL license (Sun Industry Standards Source License), which is both a Free and Open Source license, tries to live in the middle. It references a standard and then acts like the BSD as long as you aren't deviating from the standard, but acts more like the GPL if you do extend the standard (requiring that you publicly document your extensions or modifications and provide a reference implementation).
--Danese Cooper
Read the rest in Open Source Advocate Danese Cooper on Open Source
Another advantage was randomized retransmissions. That was based on the Aloha Network built at the University of Hawaii by Norm Abramson, a forerunner of 802.11 that had randomized retransmissions.
--Bob Metcalfe
Read the rest in Tech News - CNET.com
Besides robbing people of time and money, medical fraudsters can mislead critically ill victims into thinking that they're cured; convince them to discontinue other, life-prolonging treatments; or induce them to stop taking precautions that prevent spreading the illness.
Quack sites also introduce risks in the form of dangerous combinations of drugs and herbs. For example, Saint-John's-wort, an herb that some people use to fight depression, has been much touted online as a cure-all even though medical research has shown serious drug interaction risks for HIV/AIDS patients.
--Anne Kandra
Read the rest in PCWorld.com - Consumer Watch: Avoid Online Snake Oil Sellers.
--Paul Graham
Read the rest in The Hundred-Year Language.
--Bruce Perens
Read the rest in The fear war against Linux | CNET News.com
The Ant build file format is an example of something that shouldn't have been an XML format because the benefit to the implementor is massively outweighed by the wasted time of the user. But what's the solution? Makefiles? No thank you; Makefiles are a many splendored family of stunningly similar formats that are all slightly incompatible with each other. (I've lost count of the times I tried to make something on a BSD system and found it not work properly because BSD is not GNUMake and vice versa.)
--Adam Turoff on the xml-dev mailing list, Tuesday, 6 May 2003
It's important to have the right amount of convenience methods, not too many or too few. If something's not very common, and/or is only a couple of lines of code, I don't think it needs a convenience method. Every method you add is one more that the user has to wade through to find the one they're actually looking for. I hate classes with dozens of methods that make it hard to find the one you're looking for.
--Alex Rosen on the jdom-interest mailing list, Friday, 02 May 2003.
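Rosen's rule of thumb is easy to picture. In this sketch (a hypothetical element class, not JDOM's actual API), the convenience method earns its keep: it is only two lines and wraps a genuinely common pattern.

```python
class Element:
    """Hypothetical tree node used to illustrate a thin convenience method."""

    def __init__(self, name, text=None):
        self.name = name
        self.text = text
        self.children = []

    # The general-purpose method: everything goes through here.
    def add_child(self, child):
        self.children.append(child)
        return child

    # A convenience method worth having: two lines, and it covers a
    # very common case (adding a named text child).
    def add_text_child(self, name, text):
        return self.add_child(Element(name, text))
```

Anything much rarer than this, or much longer, is arguably better left to callers, so the class stays easy to scan.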
This is, of course, ridiculous. The prohibition against bringing outside drinks into the park has nothing to do with terrorism. The park wants people to buy drinks from their concession stands, at inflated prices, and to not be able to undercut those prices by bringing in drinks from outside.
This is an example of a non-security agenda co-opting a security countermeasure, and it happens a lot. Airlines were in favor of the photo ID requirement not because of some vague threat of terrorism, but because it killed the practice of reselling nonrefundable tickets. Hotels make a copy of your driver's license not because of security, but because they want your information for their marketing database.
--Bruce Schneier
Read the rest in Counterpane: Crypto-Gram: April 15, 2003
One of the problems with talking about .Net is that it's a lot of different pieces -- there are things like the programming language that they did. It's really hard for me to criticize because it is such a direct rip-off of Java. They haven't committed much in the way of acts of original thought. Then there's the whole SOAP/XML thing and there's a lot of good to be said for XML. It was kind of originated at Sun anyway. And some of it sounds somewhat humorous to me because it's as though they invented distributed computing when they came out with SOAP.
--James Gosling
Read the rest in Sun's Gosling On the Java Evolution - VARBusiness.com - 5/8/03 10:22:42 AM
I don't think this new attitude is temporary. I see a great deal of nesting going on. Despite the crash and the resulting disruption, more marriage licenses were granted in 2001 in San Francisco than any year prior. In Santa Clara County, more babies were born in 2002 than in any year of the boom, and house purchases have bounced back to near-record levels, despite the massive evaporation of wealth. The culture of shifting alliances and temporary agreements is out; permanence and settling down is in.
--Po Bronson
Read the rest in Wired 11.06: Life in the Bust Belt.
--Robert X. Cringely
Read the rest in I, Cringely | The Pulpit
In general, JNI is somewhat complex to use. However, when you call in one direction only—from Java into native code—and communicate using primitive data types, things remain simple.
--Vladimir Roubtsov
Read the rest in Profiling CPU usage from within a Java application
--Marc Herold
Read the rest in A Dossier on Civilian Victims of United States' Aerial Bombing of Afghanistan: A Comprehensive Accounting
From a software perspective, to this day, Fortran compilers are still regarded as the best for scientific computing. I remember when Cray started producing their C compiler. The memory management headaches led to some interesting performance problems, which they had to resort to using pragma to solve. This required the programmer, who in many cases was primarily a mathematician, to have a very strong knowledge of the underlying hardware. I think that it's generally accepted that people don't want to have to understand when your code may cause excessive instruction buffer faults. To write a large business application in C required a level of expertise that is just difficult to come by. This is but one of the reasons that Visual Basic has been so successful. With the general acceptance of Smalltalk, it looked as if the business community had finally found the elusive environment they had been looking for to replace creaky Cobol. It was quite interesting to watch Java knock out its growth curve before it reached escape velocity. From a programming point of view, I still prefer the normalized view that Smalltalk presents, but Java introduces a number of concepts that were lacking in Smalltalk. They also both use a virtual machine which further removes your application from the hardware. So, as each technology has been introduced, it has solved some problems and quite naturally, supplanted others with its own.
--Kirk Pepperdine
Read the rest in The Interview: Kirk Pepperdine
Grab a file-sharing program and use it to test stuff. (The technology's out there and complaining about it is like bitching about the shit on the floor of the barn the horse bolted from.) Enter things at random and see what you get. Or just google for free mp3s offered by artists, which works more often than you'd think. If you like it, buy some, and see what buying it leads you to. (This is why I spend more on new music than anyone else I know.) Don't just wait to see what the TV feeds you. You know as well as I do that in most places the TV exists to feed you shit. They spent a full year programming Avril Lavigne in LA and dressing her up to appeal to as many "subculture" strands as possible. She's the Monkees, and that kind of Frankensteinian creature only works if you sit there and passively let that kind of shit-radiation into your brain.
--Warren Ellis
Read the rest in Slashdot | Warren Ellis Answers
--Colin P. Fahey
Read the rest in Scholastic Aptitude Test (SAT) : Answering All Questions Incorrectly!
That a "nobody" like Raed wound up providing a more nuanced view of his world--better than either the authoritarian inanities of the Iraqi information minister or the Geraldo-besotted dispatches of the commercial television networks--testifies both to the specific value of Weblogging as well as to the broader impact the Internet may yet have around the world.
--Charles Cooper
Read the rest in Raed is still alive | CNET News.com.
--Salam Pax
Read the rest in Where is Ra.
--Brian Goetz
Read the rest in Java theory and practice: To mutate or not to mutate?
IDEs are not a good fit for the kind of knowledge-intensive, mixed language style of programming you see under UNIX. IDEs are great if what you're doing is cranking out C++ code by the yard. But if you're writing systems that are glued together from C, shell, Python, Perl, and maybe several other languages, the worldview that IDEs tend to enforce on you is too rigid for that kind of programming. And that's why UNIX programmers have historically tended not to like IDEs, because they limit your options too much.
--Eric S. Raymond
Read the rest in Interview: Eric Raymond goes back to basics
The Bush administration's attitude, assisted by a Congress that long since abandoned any commitment to liberty, is that government has the right to know absolutely everything about you and that government can violate your fundamental rights with impunity as long as the cause is deemed worthy.
You, on the other hand, have absolutely no right to know what the government is doing in your name and with your money, unless the information is deemed harmless by people who have every motive to cover up misdeeds. Bush and his people have turned secrecy into a mantra, and too few people recognize the danger that poses to our freedoms, much less our pocketbooks.
--Dan Gillmor
Read the rest in Mercury News | 04/06/2003 | Why we may never regain the liberties that we've lost.
--Evan Dando, Lemonheads
Read the rest in Enjoyment.
--Martin Fowler
Read the rest in Tuning Performance and Process.
--Tsu Dho Nimh
Read the rest in Migrating to Linux not easy for Windows users - April 4, 2003
One question that is seldom asked is, "How can Open Source possibly be giving multi-billion dollar companies so much competition that they feel they need to actively dissuade government officials from even thinking of using Open Source software?" This is not an idle question. Open Source doesn't have lobbyists or marketers or ad men to promote its software. So, to say that governments shouldn't have rules to consider Open Source software, as Open Source opponents often do, takes away the only avenue that Open Source has to really reach government. The Open Source sales model is fundamentally a "pull" model, where enlightened procurement officers need to know enough to ask about Open Source in the first place. There is no "push" model of sales in Open Source like that employed by the multi-billion dollar companies with their legions of salesmen, ad men, and lobbyists. In fact, the average large software company is 1/3 software developers, and 2/3 salesmen, marketers, management, apologists, and lawyers. So, a very apt question is -- if their software is so good and they have an extra 2 people for every one developer pushing it, why is it that they try so hard to impede government officials from making side-by-side comparisons? You would think they would be anxious to have procurement rules that require such comparisons so that they can show how much better their very expensive software is.
--Tony Stanco
Read the rest in NewsForge: The Online Newspaper of Record for Linux and Open Source
I can't worry about skepticism. If there's no controversy, and everybody buys into our ideas and follows them, there is no chance of making money. The question is whether we have a controversial and right strategy. If so, we'll make a lot of money.
--Scott McNealy
Read the rest in McNealy: Rattling cages is good for Sun | CNET News? Ultimately, the longer we stay as occupiers, the more Iraq becomes not an example for other Arabs to emulate, but one that helps Islamic fundamentalists make their case that America is just an old-fashioned imperium bent on conquering Arab lands.
--Joshua Micah Marshall
Read the rest in Practice to Deceive.
--Allen Dennison, Apple Java Product Manager
Read the rest in O'Reilly Network: Apple Releases Java 1.4.1 for Mac OS X [Mar. 10, 2003].
--James Paul Gee
Read the rest in Wired 11.05: View U.S.
--Richard M. Stallman
Read the rest in NewsForge: The Online Newspaper of Record for Linux and Open Source
Dragging all human behavior into the public is literally totalitarian. If you erode privacy, you erode liberty, because people don't tolerate things going on in front of them that they don't approve of.
--Bob Blakely
Read the rest in The paradox of privacy | CNET News.
--David Findley
Read the rest in Unsanity.org: Shareware Is Dead.
--Scott Meyers
Read the rest in Meaningful Programming?
--Anthony B. Robinson
Read the rest in War in Iraq a reason for shame
The event is not about dreams, predictions or mockups. We will show actual flight hardware: an aircraft for high-altitude airborne launch, a flight-ready manned spaceship, a new, ground-tested rocket propulsion system and much more. This is not just the development of another research aircraft, but a complete manned space program with all its support elements
--Burt Rutan
Read the rest in Passenger-Carrying Spaceship Makes Desert Debut
When I met Tony Blair in 2000 he told me that if he thought members of the security forces had been involved in killings of this nature he would call a public inquiry. The most senior police officer in the UK has now found that there was collusion in my husband's murder. It is now time for Tony Blair to fulfil his promise.
--Geraldine Finucane
Read the rest in News
The rule of thumb with ease-of-use is that if you make your program 10% easier, you'll double the potential number of users of your product
--Joel Spolsky
Read the rest in Joel on Software - Working on CityDesk, Part IV.
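Taken at face value, Spolsky's rule compounds quickly. A back-of-the-envelope model (my framing of the heuristic, not Spolsky's own formula):

```python
def potential_users(base, improvements):
    """Spolsky's rule of thumb: each 10% ease-of-use gain roughly
    doubles the potential audience. 'improvements' counts successive
    10% gains; this is a heuristic, not an empirical law."""
    return base * 2 ** improvements

# Three rounds of 10% improvements octuple the potential market.
```

The exponential shape, not the exact constant, is the point: modest usability work can dominate feature work in market reach.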
--Mark Grennan
Read the rest in Firewall and Proxy Server HOWTO: Understanding Firewalls
There is a persistent notion in a lot of literature that software development should be like engineering. First, an architect draws up some great plans. Then you get a flood of people, some.
--Andy Hunt
Read the rest in Programming is Gardening, not Engineering
There are 13 million people on the FBI's terrorist watch list. That's ridiculous, it's simply inconceivable that a number of people equal to 4.5% of the population of the United States are terrorists. There are far more innocents on that list than there are guilty people not on that list. And these innocents are regularly harassed by police trying to do their job. And in any case, any watch list with 13 million people is basically useless. How many resources can anyone afford to spend watching about one-twentieth of the population, anyway?
That 13-million-person list feels a whole lot like CYA on the part of the FBI. Adding someone to the list probably has no cost and, in fact, may be one criterion for how your performance is evaluated at the FBI. Removing someone from the list probably takes considerable courage, since someone is going to have to take the fall when "the warnings were ignored" and "they failed to connect the dots." Best to leave that risky stuff to other people, and to keep innocent people on the list forever.
Many argue that this kind of thing is bad social policy. I argue that it is bad security as well.
--Bruce Schneier
Read the rest in Counterpane: Crypto-Gram: April 15, 2003.
--Dave Thomas
Read the rest in Programming Close to the Domain.
--Paul Graham
Read the rest in The Hundred-Year Language
Name one genius inventor who has gotten rich from a software patent. There must be some, but the system mostly benefits a handful of businesspeople and lawyers who don't write code. Look at British Telecom. It took years before BT's patent lawyers "discovered" the company had invented hypertext linking. Now General Electric claims it invented the JPEG file format. If GE is so smart, why did it take so many years to figure out it invented such a popular technology? Which genius inventors get rich on such claims?
--Ralph Nader
Read the rest in SourceForge.net Foundries: Foundries
Mostly the technology we have been creating is created by nerdy, white guys so you get nerdy, sometimes not-so-useful technology. Engineering is a creative art. You get out of it the life experience you put in it. If we want to create socially relevant technology, there better be a much broader participation in the development of it.
--Greg Papadopoulos, Chief Technology Officer,
Sun Microsystems
Read the rest in Mercury News | 04/09/2003 | Silicon Valley pioneer dies at 54
Read the rest in Strong versus Weak Typing
There's no question the Department of Justice has been abusing the material witness statute in their campaign to put pressure on Muslim and Arab Americans. There's no way to know what the government is after in Mr. Hawash's case, but we're very concerned about the way he's being treated, and dozens of other people in similar situations.
--David Fidanque, executive director, Oregon ACLU
Read the rest in Wired News: Intel Coder Not Going Anywhere
those are technical problems. Those are easily solved. Hackers have big arguments over them and eventually something gets grabbed out of the machinery that more or less works. I think the most serious problems are actually cultural ones. UNIX hackers are not very far along in the process of figuring out how to do interfaces well. And this is not because we've been lazy. We've assimilated a lot of stuff in the last 15 years. We've assimilated pervasive networking and we've assimilated GUIs at the developer toolkit level. We understand how to do graphics, we understand how to do libraries, we understand how to do toolkits. What we don't understand yet is good user interface policy and how to listen to users. And that, I think, is the biggest problem the UNIX tradition has right now.
--Eric S. Raymond
Read the rest in Interview: Eric Raymond goes back to basics.
--Martin Fowler
Read the rest in Flexibility and Complexity
Twenty years ago at PARC,.
--Alan Kay
Read the rest in OpenP2P.com: Daddy, Are We There Yet? A Discussion with Alan Kay [Apr. 03, 2003]
It's difficult to imagine this as anything but a grossly political, and therefore inappropriate, move by Akamai. The company has reason to dislike radical Islam, but shutting down voices like Al-Jazeera is simply wrong.
People who believe in free speech should be asking themselves whether they want this kind of thing to become routine.
Who will be brave enough to mirror Al-Jazeera?
--Dan Gillmor
Read the rest in Silicon Valley - Dan Gillmor's eJournal
Everyone supports our troops. We love our troops. And that's why we let them go and risk their lives without asking questions. Questions are for French-lovin' Commie-scum, got it?
--Aaron McGruder
Read the rest in The Boondocks
They are now proposing to add e-mail communications in God knows how many difficult languages to these cubic acres of untranslated, unread, unanalyzed, unabsorbed information. The request for broader powers is the excuse of first resort of anyone who's failed at national security or law-enforcement tasks. This notion — that if we could only read every e-mail message in the universe, that no one could cause us trouble — is a big mistake.
--Thomas Powers
Read the rest in The C.I.A.'s Domestic Reach
CAPPS II is potentially far worse than the Total Information Awareness program, because this program will be implemented and affect the 100 million people who fly every year. Even if the system is 99.9 percent accurate, there will be 100,000 mistakes a year.
--Barry Steinhardt, director of the Technology and Liberty program at the ACLU
Read the rest in Wired News: Will Airport Security Plan Fly?
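Steinhardt's arithmetic is just the base-rate effect: even a tiny error rate, multiplied by a huge screened population, yields a flood of misclassifications. A quick check of his numbers:

```python
def expected_errors(screened, accuracy):
    """Expected number of misclassifications for a screen of the given
    accuracy applied to 'screened' people. Rounded, since the float
    product of a large count and a tiny rate is not exact."""
    return round(screened * (1 - accuracy))

# 100 million flyers at 99.9% accuracy -> about 100,000 errors a year.
```

And since actual terrorists are vanishingly rare among 100 million travelers, nearly all of those errors fall on innocent people.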
Maybe Sun has a point here—that JBoss can't.
--Ronald Schmelzer, ZapThink LLC
Read the rest in Open-Source Growing Pains Give Sun Aches
Is it the sole responsibility of the U.S. to decide which nations' form of government will stand and which will fall? Is it the responsibility of the U.S. to kill or destroy to bring about a change of government? I think not. I don't believe George Bush has the right to kill one person to bring about a change in government.
--Rear Admiral Gene LaRocque
Read the rest in Metroactive News & Issues | Middle Grounded.
--Greg Blidson
Read the rest in Gnutella v. Gnutella2.
--Ross Anderson
Read the rest in Protocol Analysis, Composability and Computation
In my experience, programmers like to write code. Period. They don't like to write documentation, they don't like to write system tests, and they don't like to write unit tests. Programmers are also optimists--how else could they tackle building these enormously complex systems and think they had any chance of working? Programmers like instant gratification (who doesn't?). They enjoy coming up with a solution to a problem and seeing that solution implemented immediately.
Because programmers are optimists, that is reflected in their unit tests. Time and time again I've seen developer-written tests that demonstrate the feature works -- because the tests reflect the thinking of the developer about how the feature will be used. They rarely do a good job of testing corner cases, limits, or "unusual" situations (like running out of memory or other finite resources).
I think the "test first" methodology is too at odds with what motivates programmers to do what they do. Would Linux have ever been created if Linus' original postings to the net had been test cases for a UNIX-like operating system? And invited others to write more test cases? How many would have responded? How many would have become excited about the prospect of building an Open Source operating system if the first year was going to be spent writing unit tests?
Maybe I'm just a skeptic, but Test First reminds me of so many other software development methodologies proposed over the years that promise great benefits but rarely deliver them.
--Scott Trappe
Read the rest in Slashdot | Scott Trappe's Answers About Code Quality
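Trappe's "optimist" pattern is easy to demonstrate. For a hypothetical average() helper, the happy-path assertion passes trivially; it is the corner case, here the empty input, that a developer-written suite tends to omit:

```python
def average(values):
    """Arithmetic mean. The optimistic version divides by len(values)
    unconditionally and crashes on []; guarding the corner case makes
    the failure mode explicit instead."""
    if not values:
        raise ValueError("average() of an empty sequence")
    return sum(values) / len(values)

# Happy-path test, the kind that "demonstrates the feature works":
#   average([2, 4, 6]) == 4
# Corner-case test, the kind that rarely gets written:
#   average([]) should raise, not return 0 or divide by zero.
```

The example is mine, not Trappe's, but it is the gap he describes: tests that mirror the developer's intended use, not the unusual situations.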
There are dozens of reasons why people have underestimated how quickly Linux has been grabbing Windows' market share, but the Evans data confirms one of my pet theories. Windows market share is usually estimated by the units of Windows Microsoft claims to have shipped. This figure is already skewed, because it includes every unsold box of Windows XP sitting on shelves at Best Buy or Circuit City. More significant, however, is the fact that it includes every PC with a pre-installed version of Windows.
Linux market share, on the other hand, is usually estimated based on surveys, number of commercial boxes sold and the number of downloads.
The actual market-share shift from Windows to Linux is obviously more complicated. When someone purchases a PC with Windows pre-installed, and then overwrites that pre-installed Windows with Linux, nobody subtracts "one" from the installed base of Windows and then recalculates the Windows market share. So Windows starts out with a false boost and maintains its illusory market share even as it gets replaced by Linux.
--Nicholas Petreley
Read the rest in Debunking the Linux-Windows market-share myth - March 14, 2003
I estimate we command 20 percent of the worldwide installed base of databases, but of revenues we command only 0.02 percent. So there's a factor of 1,000. And we are making money. People ask me, "What's wrong? Why are you leaving money on the table?" We say "You should ask the other database companies what is wrong with their cost structure."
--Marten Mickos, CEO of MySQL
Read the rest in CNN.com - MySQL: A threat to bigwigs? - Mar. 12, 2003
I've invited my fellow documentary nominees on the stage with us. They are here in solidarity with me because we like non-fiction.
--Michael Moore
Oscar acceptance speech, March 23, 2003
I don't think the Fourth Amendment exists anymore. I think it's been buried by the Patriot Act and some of the court rulings that have been handed down. We need a requiem mass for the Fourth Amendment, because it's gone.
--Christopher Pyle
Read the rest in ABCNEWS.com : Right Joins Left to Criticize Patriot Act
Are we really arguing at this stage, before the UN process is complete, that the best thing to do is to start slaughtering people in their thousands, perhaps hundreds of thousands, as well as losing British and American and Australian lives in the process? I don't think so.
--Charles Kennedy
Read the rest in French vow to veto 'war by timetable'
Virus writers have long been rationalizing their actions by saying they create viruses for good reasons. But it's still annoying to see your computer or network turned into a virtual schoolyard populated by bullies shoving each other and your data around.
--Ian Murray
Read the rest in Wired News: Yaha Virus Uses Netizens as Pawns
Distributed systems are when computers you've never heard of can cause your application to fail.
--Rich Salz on the xml-dev mailing list, Wednesday, 12 Mar 2003.
--John Perry Barlow
Read the rest in Wrapped up in Crypto Bottles.
--Joel Spolsky
Read the rest in Joel on Software - Working on CityDesk, Part IV
We were once told that we needed to present photo ID for our own safety, which most of us knew was nonsense from a security standpoint and which everyone now knows was nonsense. We now know terrorists can get photo ID. So how difficult would it be for terrorists to use operatives with the proper CAPPS credentials? This CAPPS thing smells of snoopy government, not real security.
--Keith Beasley
Read the rest in Wired News: Privacy Activist Takes on Delta.
--Edward Said
Read the rest in Al-Ahram Weekly | Opinion | Who is in charge?
Fuel-cell vehicles are the transportation equivalent of fat-free potato chips, seeming to promise that Americans can continue overindulging on energy without facing the consequences of their appetite. But as any dieter knows, that's a fantasy.
--John Krist
Read the rest in Mercury News | 02/18/2003 | John Krist: Hydrogen power a step forward but not 'non-polluting'
--Richard Gabriel,
Distinguished Engineer at Sun Microsystems
Read the rest in The Poetry of Programming
I've been supporting various UNIXes for a total of 13 years now, including IBM AIX, Sun Solaris, HP HP-UX, RedHat LINUX, and various other BSD variants. All in all, the legendary stability of UNIX is a truth, not a myth. Today, at my day job at a hospital, I have several mission-critical AIX and Solaris servers with uptimes of over 300 days. That's almost one year of 24x7 service without a single reboot for software or hardware errors. I have actually seen and supported AIX servers that had uptimes exceeding 365 days!
In UNIX, this stability is the norm because of its design, but the key to remember is that some UNIX implementations deviate for better or worse from the design principles for various reasons. For instance, in my experience, IBM AIX-based servers are far more reliable than Sun Solaris-based servers. Why--they're both UNIXes, so why is one more stable than the other? IBM, having come from the mainframe side of things, was used to having to provide 24x7 uptime for enterprise-wide, mission-critical applications. They easily could have been sued for substantial damages if they didn't provide such high levels of reliability. Everything at IBM is designed with stability, performance and scalability in mind, and change management of all of IBM's hardware and software is exacting and meticulous. Sun, on the other hand, tends to have more of a hack-it approach to things, and many times, a system would be DOA when we received it in our data center. Sun, being an engineers' sort of company, has no qualms about reboots and releases far too many OS patches too often, without exhaustively testing each one for its future effects. IBM is far more respectful of its customers' missions and business-ventures and Sun seems to think that its customers are its beta testers. That difference in attitudes between IBM and Sun manifests itself in the differences between their flavors of Unix.
--Dennis Chang
Read the rest in Mac OS X Justification Part 4 (MacInTouch Reader Report)
The reason XML has taken off is that generation after generation of attempts to interoperate at the datamodel/API level has either failed or provided poor price/performance. Syntax is a qualitatively, consistently, dramatically better basis for interoperation; desires to interoperate at the data model level, no matter how reasonable, are apt to remain unfulfilled for the foreseeable future.
--Tim Bray on the xml-dev mailing list, Tuesday, 25 Feb 2003
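Bray's point is that two programs need only agree on the bytes, not on each other's in-memory structures. A minimal round trip with Python's standard library shows the shape of it (the element names and sku are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Producer: whatever its internal data model, it emits agreed-on syntax.
order = ET.Element("order", id="42")
ET.SubElement(order, "item", sku="A-7").text = "3"
wire = ET.tostring(order, encoding="unicode")

# Consumer: a different program with a different data model reparses
# the same text; only the serialized syntax is shared.
parsed = ET.fromstring(wire)
quantity = int(parsed.find("item").text)
```

The interoperation happens at the serialized text, not at a shared API: the consumer never sees the producer's objects, only the markup.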
You cannot say "I want Saddam Hussein to disarm" and at the same time when he is disarming say they're not doing what they should.
--Dominique de Villepin, French Foreign Minister
Read the rest in French vow to veto 'war by timetable'
When used wisely, invocation chaining can produce concise, elegant, and easy-to-read code. When abused it yields a cryptic tangle of muddled gibberish. Use invocation chaining when it improves readability and makes your intentions clearer. If clarity of purpose suffers when using invocation chaining, don't use it. Always make your code easy for others to read.
--Ron Hitchens
Read the rest in Java NIO, p. 18, O'Reilly & Associates 2002.
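Hitchens is describing APIs like NIO's buffers, where each mutator returns the receiver. A small sketch in the same spirit (a hypothetical query builder, not NIO itself):

```python
class Query:
    """Each mutator returns self, so calls chain left to right."""

    def __init__(self, table):
        self.table = table
        self.filters = []
        self.order = None

    def where(self, clause):
        self.filters.append(clause)
        return self          # returning the receiver enables chaining

    def order_by(self, column):
        self.order = column
        return self

    def to_sql(self):
        sql = f"SELECT * FROM {self.table}"
        if self.filters:
            sql += " WHERE " + " AND ".join(self.filters)
        if self.order:
            sql += f" ORDER BY {self.order}"
        return sql
```

Query("users").where("age > 21").order_by("name").to_sql() reads like a sentence; a chain of unrelated operations jammed together would not, which is exactly Hitchens's caution.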
--Robin Gross
Read the rest in O'Reilly Network: Robin Gross Seeks International IP Justice [Feb. 20, 2003]
We avoid the Windows operating system since it is such a huge security risk. We didn't want to have viruses blowing up systems that we depend on for navigation and monitoring engines and other systems. And since nothing seems to be able to stop all of these Windows viruses, the best way to win is to just stop using Windows.
--Doug Humphrey, CEO Cidera
Read the rest in Wired News: All Aboard! (But No PCs Allowed).
--Bill Venners
Read the rest in How to Interview a Programmer
Only a week ago the main topic in the streets among Kurds was Saddam and the fear of chemical attack. Now the only thing people talk about is Turkey and the Turkish advance.
--Karim Sinjari, Kurdish Interior Minister
Read the rest in News
OS X can not provide the same level of security that the MacOS through 9.x has provided. It is not a matter of obscurity. It is a matter of a completely different way of doing things, one that I believe is fundamentally better from a security point of view. All applications are vulnerable to password guessing, yes, but cracking an operating system simply by overflowing a buffer on a TCP port requires a shelled operating system. The original MacOS is not shelled. It does not accept character string commands like "/bin/sh" placed on the stack followed by a system call to execve. To launch an application there either has to be a GUI driven event like a double click, an AppleEvent sent to Finder (as through the Apple Menu), or a fairly elaborate setup and call to the Process manager. Additionally, the original MacOS does not accept parameters/arguments to be passed through the main() function (the entry point for start of program code execution). It requires AppleEvent handlers to accept interapplication communication. Once an application is executing, it calls the operating system call WaitNextEvent() to know what to do (repeatedly, until told to quit). It is not passive like a Unix application. Therefore, even if rogue byte code can launch an application, it will not be able to control it without massive set up, all in byte code with no null characters. Shells may make life easier for system administrators, but they also make life easier for crackers.
--Tim Kelly
Read the rest in Mac OS X Justification Part 2(MacInTouch Reader Report)
There are lots of irregularities which break common assumptions on certain data. Account numbers are no longer numbers if you want the "number" to be the SWIFT id of a bank. You can't infer from the fact that nobody yet had a total amount of securities in a depot which overflowed a 10 digit number that this will never happen (and unsurprisingly a program crashed because of this at the end of 1999). There is a Swiss municipality which is Italian by telephone country code, breaking the usual nationality->phone country code mapping. There are quite a few villages where one half pays its tax to the authority of region A, the other to region B, requiring you to maintain a map by street/house number.
--J. Pietschmann on the xml-dev mailing list, Sunday, 23 Feb 2003.
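The practical lesson in Pietschmann's examples is to validate against what the data actually is, not against a convenient type. A hedged sketch (the function and field semantics are invented for illustration):

```python
def normalize_account_id(raw):
    """Treat an 'account number' as an opaque identifier, not an int:
    SWIFT/BIC-style ids contain letters, so int(raw) would reject
    perfectly valid data. We only trim whitespace and uppercase."""
    cleaned = raw.strip().upper()
    if not cleaned.isalnum():
        raise ValueError(f"malformed account id: {raw!r}")
    return cleaned
```

A digits-only check would have worked for years and then failed on the first SWIFT id, much as the 10-digit overflow assumption in his example failed at the end of 1999.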
--Mark Twain
Read the rest in Mark Twain, "Battle Hymn of the Republic" 1900.
--Gary Kasparov
Read the rest in OpinionJournal - Extra
When the CIA comes and asks what you've read because they're suspicious of you, we can't tell them because we don't have it. That's just a basic right, to be able to read what you want without fear that somebody is looking over your shoulder to see what you're reading.
--Michael Katzenberg, Bear Pond Books
Read the rest in Vt. bookseller purges files to avoid potential 'Patriot Act' searches
Planning is good and bad. If you know where you're going, planning is good. If you don't know what you'll encounter on the way, you should be more open-minded and improvisational. I certainly see a place for planning, but if the language forces you to plan everything, there may be trips you'll never undertake because it would require too much thinking ahead. You're then inhibited by fears that you don't know how to do something. In Python, you can start doing that something and discover how to do it on the way. You can build something quickly, get it on the road, obtain feedback, and then design the next one based on greater understanding of the problem domain.
--Guido van Rossum
Read the rest in Programming at Python Speed.
--Joel Spolsky
Read the rest in Joel on Software - Working on CityDesk, Part Three
UnPatriot II would push ahead with this kind of Big Brother scheme. The government would collect DNA from a widening circle of Americans. It would add to government surveillance authority -- not that there's all that much keeping the official snoops out of innocent people's lives at this point in any event.
And, reviving an anti-privacy notion that Ashcroft himself once denounced -- that is, before he got a taste of the overweening state power he professed to fear -- it would criminalize some uses of encryption, the scrambling of digital information.
Government snoops, who have never, ever failed to misuse this kind of authority, would know everything about you. This is a one-way mirror. The Bush administration's fanatical devotion for secrecy, preventing citizens from knowing what government is doing in their name and with their money, would get a boost.
--Dan Gillmor
Read the rest in Mercury News | 02/19/2003 | Dan Gillmor: Bill of Rights under a new assault.
--Scott Meyers
Read the rest in Multiple Inheritance and Interfaces
The slur of "anti-Semitism" also lies behind Rumsfeld's snotty remarks about "old Europe". He was talking about the "old" Germany of Nazism and the "old" France of collaboration. But the France and Germany that oppose this war are the "new" Europe, the continent which refuses, ever again, to slaughter the innocent. It is Rumsfeld and Bush who represent the "old" America; not the "new" America of freedom, the America of F D Roosevelt. Rumsfeld and Bush symbolise the old America that killed its native Indians and embarked on imperial adventures. It is "old" America we are being asked to fight for - linked to a new form of colonialism - an America that first threatens the United Nations with irrelevancy and then does the same to Nato. This is not the last chance for the UN, nor for Nato. But it may well be the last chance for America to be taken seriously by her friends as well as her enemies.
--Robert Fisk
Read the rest in Argument
How much, if any, is left of Iraq's weapons of mass destruction and related proscribed items and programmes?. So far, Unmovic has not found any such weapons, only a small number of empty chemical munitions..
--Hans Blix
Read the rest in News.
--Eliot Spitzer,
New York Attorney General
Read the rest in Court: Network Associates can't gag users - Tech News - CNET.com
We would catch more terrorists, perhaps, in a police state, but that's not a country in which most Americans would want to live.
--Senator Russ Feingold
Read the rest in Mercury News | 01/17/2003 | Senators vow to halt 'data mining' project
units are more important than data types. I don't care too much if "7" is meant to be handled as a string, a short, an int, a long, a float, or a double -- I'll do whatever makes sense for my own program anyway -- but I care quite a bit whether it refers to feet or meters.
--David Megginson on the xml-dev mailing list, Wednesday, 12 Feb 2003
If you mingle your code with GPLed code, the mingled parts fall under the GPL. Now everyone can use it, including you.
If you mingle your code with Shared Source code, first of all you get sued for breaking your licence agreement (maybe the BSA or BSAA come around and steal your computers, who knows?), second of all, you forfeit the rights to that code: you can't use it any more, and neither can anyone else ‹ except Microsoft.
OK, so who has the real viral code? Do you need some time to think about it...? (-:
--Leon Brooks
Read the rest in Picking up your marbles.
--Declan McCullagh
Read the rest in Perspectives: Ashcroft's worrisome spy plans - Tech News - CNET.com
This document details the difficulties that keep our Solaris Java implementation from being practical for the development of common software applications. It represents a consensus of several senior engineers within Sun Microsystems..
Our experience in filing bugs against Java has been to see them rapidly closed as "will not fix". 22% of accepted non-duplicate bugs against base Java are closed in this way as opposed to 7% for C++. Key examples include:
4246106 Large virtual memory consumption of JVM
4374713 Anonymous inner classes have incompatible serialization
4380663 Multiple bottlenecks in the JVM
4407856 RMI secure transport provider doesn't timeout SSL sessions
4460368 For jdk1.4, JTable.setCellSelectionEnabled() does not work
4460382 For Jdk1.4, the table editors for JTable do not work.
4433962 JDK1.3 HotSpot JVM crashes Sun Management Center Console
4463644 Calculation of JTable's height is different for jdk1.2 and jdk1.4
4475676 [under jdk1.3.1, new JFrame launch causes jumping]
In personal conversations with Java engineers and managers, it appears that Solaris is not a priority and the resource issues are not viewed as serious. Attempts to discuss this have not been productive and the message we hear routinely from Java engineering is that new features are key and improvements to the foundation are secondary.
Read the rest in INTERNALMEMOS.COM - Internet's largest collection of corporate memos and internal communication
A priest without alcohol, that's the wrong combination.. Jesus didn't say, take this healthy camomile tea, he offered wine.
--Father Michael Fey
Read the rest in Realbeer.com: Beer News: Priest brews in washing machine.
--Ken Arnold
Read the rest in Designing Distributed Systems
Do terrorists sometimes benefit from drug profits? The answer is yes. The heroin and opium trade in Central Asia has been identified, in particular, as a source of funding for terrorist groups including the Taliban and Al Qaeda. But there really is more than one side to this issue. The Taliban also profited from our war on drugs, receiving $43 million from the US government in 2001 for the purpose of eradicating Afghanistan's heroin-producing poppy fields. And whatever one thinks of the various pros and cons of drug legalization, it's hard to deny that prohibition is what allows criminal groups, including terrorists, to profit from the drug trade.
Meanwhile, as the Drug Policy Alliance notes, the federal authorities have yet to come up with conclusive proof of a single case in which proceeds from drug dealing in the United States went to Middle Eastern terrorists. And some claims about the drug-terror link are downright misleading. Thus, drug war zealots have cited evidence that Ecstasy trade has a Middle Eastern connection, obviously implying a terrorist link. In fact, the organized crime groups allegedly involved in Ecstasy trafficking consist of Israelis from the former Soviet Union--who may not be nice guys, of course, but can hardly be suspected of funneling money to the Al Qaeda.
Surely, Americans who get locked up for growing marijuana plants in their basements have not given any aid or comfort to international terrorists. Yet somehow, I doubt that we'll see an ad campaign with the slogan, "Fight terrorism-grow your own pot!"
--Cathy Young
Read the rest in Reason
The Dell is up to twice as fast and never less than 1/3 faster. Digital Video Editing has done 2 previous tests and unfortunately the performance gap is growing. I use both platforms all the time and would be surprised if there were any mainstream apps that weren't faster on the PC at this point. Key perceived speed tasks like web surfing and desktop speed are noticeably faster even on bottom-end PCs.
The price gap remains large too: at current prices the Dell used in the article costs $850 less than the Apple. Worse, a PowerMac is at least $1700 and a dual 1.25 Mac is at least $3000 whereas you can get a 2GHz Dell from $489 or a 3GHz one from $1300.
The troubling part is that the problem is getting worse without any great hope on the horizon to stop the bleeding. Who can forget "The CISC architecture of the Pentium has no headroom" and "The PPC is great, it's just Mac OS 7/8/9 that is the bottleneck"? What's the current hope for the future now that both of those myths have been left in the dust?
--Michael DeGusta
Read the rest in Digital Photo Benchmarks (G4/Altivec Performance, Part 2 - MacInTouch Reader Report).
--John Perry Barlow
Read the rest in MotherJones.com | News
Why write something in five days that you can spend five years automating?
--Terrence Parr, creator of ANTLR
Read the rest in Why We Refactored JUn!
--Kurt Vonnegut
Read the rest in In These Times | Kurt Vonnegut vs. the !*!@
So far AOL Time Warner has written down its value by $99 billion dollars.
$99,000,000,000.00.
Billion. With a B. Impressive. Man, that's a lot of business not to have. And that's probably not the whole thing. Consider the momentum here. $54 bil back in Q1 of last year, and now $35 bil for AOL and $10 for the cable division. We're a lousy $1 bil away from a twelve-figure loss.
The real kicker here, the the eleven-zero irony, is that this merged company was counting on AOL, of all things, to provide understanding of the very platform on which all this inter-divisional "synergy" was going to take place. They actually thought AOL understood the Net. Amazing.
--Doc Searls
Read the rest in The Doc Searls Weblog : Friday, January 31, 2003
Computer Programs are Writings. As such, they should be subject to copyright law (narrowly interpreted) or trade secret protection, but.
--Phil Salin, July 15, 1991
Read the rest in Freedom of Speech in Software.
--Guido van Rossum
Read the rest in Programming at Python Speed.
--Bill Venners
Read the rest in Why We Refactored JUnit
I whine about how LinuxWorlds seem to have more managers and fewer geeks than ever, but in a way this is a logical progression. And, sometimes, yesterday's geeks and today's suits are the same people.
This was drummed in for me today when a manager-looking guy called out, "Hey, Robin," as if he was an old buddy, and I didn't recognize him until we were within hand-shaking range. Yes, it was someone I knew from the days when hippie-hacker college students showed up at LinuxWorld like mad, and some of them ended up crashing in my room because they had no other place to sleep.
A wife and a kid on the way tend to knock the wildness out of a lot of people, and I was looking at a prime example of this phenomenon. The "got root?" t-shirt covered with Linux and assorted political buttons was gone, replaced with a dress shirt and tie, the hair was short, and the shoes were shiny black loafers, not battered Doc Martins.
Idealistic? Sure. No big mental change, just a job at a company with a dress code. And not just a job, but now a management job, one with purchasing authority. And he's not here to get drunk and talk about coding projects and drink beer all night, but to check out server specs and shop for support, because his company is replacing several racks of commercial Unix and Windows 2000 servers with Linux, and he's been tasked with overseeing the migration.
--Robin 'Roblimo' Miller
Read the rest in NewsForge: The Online Newspaper of Record for Linux and Open Source
Many people still do not grasp that Big Brother surveillance is no longer the stuff of books and movies..
--Barry Steinhardt, Director of the ACLU's Technology and Liberty Program
Read the rest in American Civil Liberties Union : ÒBig BrotherÓ is No Longer a Fiction, ACLU Warns in New Report Ñ
Read the rest in The United States of America Has Gone Mad
Any set of government policies involves tradeoffs. We tax rich people to provide services for poor people, for example. But what we have with copyright are policies that protect a very small number of high-value, long lasting works at the expense of making millions, literally millions, of abandoned works perpetually unavailable for re-use. It's a matter of balance. The net win for society is much greater if we don't create policies that benefit a very small number of players at the expense of millions of others. As Kant used to say, we're looking for the greatest good for the greatest number. Finding a path to that goal is not always easy, but it should be the goal of any enlightened public policy.
--Tim O'Reilly on the Computer Book Publishing mailing list, Saturday, 18 Jan 2003
My taxes pay the bills for the government to protect your copyright. Under the regime specified by the Founders, that was an exchange: The state protected your copyright for a limited time and, in return, your work eventually passed into the public domain. Under Eldred, works may never again pass into the public domain, should the legislature so choose.
What Eldred is about is the theft-by-lobbying from the American people of intellectual property which had been promised to the public domain in exchange for time-limited protection.
--John Adams on the Computer Book Publishing mailing list, Thursday, 16 Jan 2003
Our country must fight terrorists, but America should not unleash virtual bloodhounds to sniff into the financial, educational, travel and medical records of millions of Americans. Congress ought to step in and put the brakes on this program now, before it grows unchecked and unaccountable.
--Senator Ron Wyden
Read the rest in Mercury News | 01/17/2003 | Senators vow to halt `data mining' project
As specification lead of the Java toaster JSR, you have decided to make the reference implementation, and maybe the TCK, available under an open-source license. If Big Bad Toasters takes that reference implementation and creates an incompatible derivative work from it while still claiming to implement the specification, then they would be in violation of the specification license.
On the other hand, if they took your work and implemented a completely different specification from it, say, com.bigbadtoasters.Toaster, that would be a legitimate, though annoying, thing for them to do. The specification intellectual property protection says that you can't lie to Java programmers about what Java is. The JCP defines that truth, and the materials produced in JSRs are used to validate it, and collectively the artifacts and the process work to maintain that assurance. In this case, however, they're not lying to anyone. They're not claiming it to be an implementation of the JSR, nor are they offering an artifact that would poach upon developers who were expecting it to be the JSR, since it lives in Big Bad Toasters' namespace.
--Rob Gingell, Sun Microsystems fellow and chief engineer
Read the rest in Standards and Innovation
This year, during a season that is sacred to many, I committed an unspeakable heresyÑat least as far as e-commerce orthodoxy is concernedÑI purchased no presents online.
For several years, I bought more and more gifts on the Web. Then something happened. Maybe it was a midlife crisis, the dot-com bust or maybe I just wanted to get out of the house.
I went to the mall.
The cool thing about the mall is you get ideas for gifts just from looking at stuff. You don't need a highly sophisticated search engine to make bad guesses as to what you might be interested in. And you can check out the Victoria's Secret storefront without creating an item in your history file and leaving a cookie.
--Stan Gibson
Read the rest in Adventures in Offline Buying.
--Richard Gabriel,
Distinguished Engineer at Sun Microsystems
Read the rest in The Poetry of Programming
MacHack is 17 years old. If it were a human, it'd be drinking illegally by now. Actually, it wouldn't—MacHack wouldn't get invited to those sorts of parties. It's an annual conference for hard-core geeks who, as teenagers, were likely to spend their Friday nights in chat rooms, arguing that Captain Picard should just rip out the Holodeck entirely, since it kept tossing the Enterprise into jeopardy every week.
We may not have had lives, but we learned a lot about computers and stuff—and managed to avoid the alcohol- and sex-related mishaps that can sidestep people from successful careers in technology. Now we work for Apple and Adobe and Microsoft and hundreds of lesser-known companies that create the code and hardware that people use every single day, and every June we come to Michigan to hang out in a hotel atrium, universally preferring the warm, snuggly cocoon of AirPort access to the fiery tyranny of the Giant Day-Ball outside. For three days, the place becomes like a really bad TechTV version of Big Brother
--Andy Ihnatko, Macworld, October 2002, p. 124
RAS (Reliability, Availability, and Serviceability) is a term IBM often uses to describe its mainframes. By the early 70's IBM had realized that the market for commercial systems was far more lucrative than that for scientific computing. They had learned that one of the most important attributes for their commercial customers was reliability. If their customers were going to use these machines for critical business functions, they were going to have to know they could depend on them being available at all times. So, for the last 30 years or so IBM has focused on making each new family of systems more reliable than the last. This has resulted in today's systems being so reliable that it is extremely rare to hear of any hardware related system outage. There is such an extremely high level of redundancy and error checking in these systems that there are very few scenarios, short of a Vogon Constructor fleet flying through your datacenter, which can cause a system outage. Each CPU die contains two complete execution pipelines that execute each instruction simultaneously. If the results of the two pipelines are not identical, the CPU state is regressed, and the instruction retried. If the retry again fails, the original CPU state is saved, and a spare CPU is activated and loaded with the saved state data. This CPU now resumes the work that was being performed by the failed chip. Memory chips, memory busses, I/O channels, power supplies, etc. all are either redundant in design, or have corresponding spares which can be can be put into use dynamically. Some of these failures may cause some marginal loss in performance, but they will not cause the failure of any unit of work in the system.
Serviceability comes into play in the rare event that there is a failure. Many components can be replaced concurrent with system operation (hot swapped); even microcode updates can often be installed while the system is running. For those components, such as CPUs, that cannot be replaced concurrently, the existence of spares allows the service outage to be scheduled at the customer's convenience.
--Ford Prefect
Read the rest in Ace's Hardware
Lawyers (save those from Chicago) are not typically trained to think about the business consequence of their legal advice. To many, business is beneath the law. When a Sony lawyer threatened a fan of the company's Aibo robotic dog, who had posted a hack online to teach the dog to dance to jazz, he or she no doubt never thought to ask exactly how making the Aibo dog more valuable to customers could possibly harm Sony. Harm was not the issue, a violation of the Digital Millennium Copyright Act was: consumers should be banned from hacking Sony dogs, whether or not it was to Sony's benefit.
Management should begin to demand a business justification for copyright litigation. How does this legal action advance the bottom line? How will it grow markets or increase consumer demand for our products? Will calling our customers criminals increase consumer loyalty?
--Lawrence Lessig
Read the rest in What lawyers can learn from comic books.
The spam pandemic has grown to epic proportions. In 2002, I received over 23,000 spam messages (about 35 percent of my mail), and that's even after employing the Mail Abuse Prevention System RBL+ realtime blackhole list and a handful of other conservative server-side spam filters on our primary mail server. There's no question that my address is both older (it hasn't changed since I switched away from the UUCP style
) and more widely published than most, but my exposure generally means I'm just ahead of the curve. If you're not getting a lot of spam now, you're both lucky and living on borrowed time.
--Adam C. Engst
Read the rest in TidBITS#661/06 - Jan - 03
This five-step process works for any security measure, past, present, or future:
1) What problem does it solve?
2) How well does it solve the problem?
3) What new problems does it add?
4) What are the economic and social costs?
5) Given the above, is it worth the costs?
When you start using it, you'd be surprised how ineffectual most security is these days. For example, only two of the airline security measures put in place since September 11 have any real value: reinforcing the cockpit door, and convincing passengers to fight back. Everything else falls somewhere between marginally improving security and a placebo.
--Bruce Schneier
Read the rest in How to Think About Security
we have better Linux standards than Unix had at the same point in its development. All of the vendors are providing standards-compliant systems. There have been a number of efforts to make sort of proprietary Linux systems that aren't too compatible with the free versions. They always fail commercially. Without the "free" part, a Linux system is just another SCO--nothing exciting, nothing worthy of the collaboration that has made Linux great, and not something that will win the market. So, I'd be wary of some of the "enterprise Linux" projects. The ones that can't maintain their free-software roots won't succeed.
--Bruce Perens
Read the rest in Vision Series 3: Bruce Perens - Tech News - CNET.com
The Java Community Process has been on an inexorable path to openness with Version 2.5, and what's really wonderful about that community is that it's been a collaborative process to come up with JCP 2.5 and the open process that that encompasses. From an industry perspective, it's important that the Java brand mean something so that the industry has an identity there. It's important that compatibility is maintained, because that's really what Java's all about. So there has to be some level of control around that.
But the process itself is remarkably open. Many dozens of companies, many of whom are competitors, cooperate together to define the specifications, move them into the Java platform and turn them into products. So it's been a pretty strong success in taking innovation to the market reasonably quickly in an open fashion.
--Mark Bauhaus, Sun vice president of Java Web services
Read the rest in Q&A: Sun VP lays out company's plans for Web services, Java
IP -- as the name suggests -- is the force that brought us the Internet, and has pretty much won the war for local networks as well. Lying vanquished in unmarked graves are old diehards like LANtastic, IPX, and (significantly to this story), Appletalk.
--Heath Johns
Read the rest in O'Reilly Network: Understanding Zeroconf and Multicast DNS [Dec. 20, 2002]
The world is heterogeneous. Most people's environments are heterogeneous. I appreciate that we'll have some Microsoft boxes, some of ours, some from other companies. Java serves as the leveler. The important thing for most enterprises is, "Will it work on my Linux box and my mainframe, and on Solaris and HP?" Things like that are why we spent so much effort on Java. WebSphere runs on these platforms, and Java is a solid language that runs on these as well. J2EE is battle tested, and Java is pretty much bulletproof. From the point of view of reliability and robustness, Java already has six or seven years of experience on many platforms, with solid code and solid libraries.
--Robert S. Sutor, Director of e-business Standards Strategy, IBM
Read the rest in Fawcette.com - IBM, Java, and the Future of Web Services
If one of Osama bin Laden's goals, as has been reported, was to trigger crackdowns against freedoms by Western governments, he got the ball rolling quite effectively on Sept. 11, 2001.
The United States now imprisons its own citizens incommunicado, indefinitely and without lawyers or trials, for the duration of what we're told is an essentially permanent state of war.
In the good old days of the iron curtain, we condemned other countries for such actions, calling them human rights violations. Now some of those same nations are our partners of convenience in the war on terror, and our own government has enthusiastically embraced our former adversaries' old tactics.
--Lauren Weinstein
Read the rest in Wired News: Year in Privacy: Citizens Lose
If Linux is giving Microsoft fits, it is doing far worse to Sun Microsystems, which I predict will have a very bad 2003. As just one example, Sun is in danger of losing the semiconductor design computer workstation market to Linux. Early this year, Cadence Design will be the last of the major vendors to port their software to Linux. In the server market, too, Linux is making real inroads at the expense of Sun, especially in 2003 as the 64-bit Linux boxes begin to appear. Why buy a $100,000 Sun server when a $10,000 Linux cluster is comparable in every way? And don't expect too much from Sun's own Linux boxes, which will be deliberately hobbled so they don't make problems for SPARC.
Meanwhile, China, which will eventually be the largest computer market on earth, will standardize on MIPS processors and Linux, much to the dismay of both Sun and Intel. This bodes well, by the way, for AMD with its new MIPS-based Systems-on-Chip that will be the major component in many of those el cheapo Chinese computers.
--Robert X. Cringely
Read the rest in I, Cringely | The PulpitÑoddly enough the three are wrapped up together pretty closelyÑwhat.
--Scott Meyers
Read the rest in Multiple Inheritance and Interfaces
I was searching through some old Usenet posts and I saw that a lot of disgruntled readers identified prognostications of mine that were so far off-base that it was actually humiliating. I predicted, for example, that OS/2 would represent a big platform change. Wishful thinking? Stupidity? The latter, apparently. Because of all the IBMers and OS/2 mavens who led me astray, I'm forced to continue to write to survive. I was also too critical of Java ("Born Loser") as well as some other solid trends. Java is not the world beater that it was predicted to be, but it remains important. These are but a couple of bonehead predictions that I made over the years. So I will never again predict anything!
--John Dvorak
Read the rest in New Year's Resolutions
Earlier quotes: | http://www.ibiblio.org/java/quotes2003.html | CC-MAIN-2014-49 | refinedweb | 26,700 | 60.85 |
Hello again!
Ok so this time i made a program that creates a subwin called son and it defines it's size and location. The application paints son's background with the color RED but it doesn't update all of it. When i set stdscr's background color and wrefresh it it updates all of its background. When i do the same with son it only paints where i wrote some text, why?
I found out if i add a wclear(son) after setting it's background color and then add text and finally refresh it, it updates all of it's background color.
Is this problem hapenning because refresh only updates parts of the window that actually changed it character? Then how come when i did that to stdscr it worked?
Thanks a lot!
Also if u uncomment the line of code below it will work as i wanted.
#include <curses.h> int main() { initscr(); WINDOW *son; noecho(); start_color(); init_pair(1,COLOR_RED,COLOR_BLUE); init_pair(2,COLOR_GREEN,COLOR_RED); son = subwin(stdscr,5,20,10,15); if(son==NULL) { endwin(); return 0; } wbkgd(stdscr, COLOR_PAIR(1)); waddstr(stdscr,"Hello son!"); wrefresh(stdscr); wbkgd(son,COLOR_PAIR(2)); //wclear(son); //Remove this to make it work... but why isn't a refresh enough? waddstr(son,"Hello father!"); wrefresh(son); wgetch(son); endwin(); return 0; }
Thanks once more and sorry guys i think i'll be asking a lot D: | https://sourceforge.net/p/pdcurses/discussion/95731/thread/87abdd15/ | CC-MAIN-2017-43 | refinedweb | 237 | 64.2 |
Undercloud deploy fails at rsyslog
I have attempted to install tripleo several times and keep getting errrors on centos. I used the same config on rhel and it completes normally.
TASK [Restart rsyslogd service after logging conf change] *************************************************************************fatal: [ucloud]: FAILED! => {"msg": "The conditional check 'logconfig|changed' failed. The error was: template error while templating string: no filter named 'changed'. String: {% if logconfig|changed %} True {% else %} False {% endif %}\n\nThe error appears to be in '/home/stack/undercloud-ansible-qjinqf/Undercloud/host_prep_tasks.yaml': line 807, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n register: logconfig\n - name: Restart rsyslogd service after logging conf change\n ^ here\n"}NO MORE HOSTS LEFT ***
I looked in the play book and the item listed was *- name: Check if rsyslog exists register: rsyslog_config stat: path: /etc/rsyslog.d - block: - copy: content: '# Fix for (...)
local2.* /var/log/containers/swift/swift.log & stop ' dest: /etc/rsyslog.d/openstack-swift.conf name: Forward logging to swift.log file register: logconfig
- name: Restart rsyslogd service after logging conf change service: name=rsyslog state=restarted when: - logconfig|changed when: rsyslog_config.stat.exists - file: path: '{{ item }}' state: directory name: create persistent directories with_items: - /srv/node - /var/cache/swift - /var/log/swift - /var/log/containers - /var/log/containers/swift *
I found the config file created in rsyslog.d but no sift.log in the containers folder. any thoughts on how to figure this out would be appriciated. | https://ask.openstack.org/en/question/128383/undercloud-deploy-fails-at-rsyslog/ | CC-MAIN-2021-04 | refinedweb | 252 | 50.12 |
Doctests in Python
Doctests are one of the most fascinating things in Python. Not necessarily because it’s particularly elegant or useful, but because it’s unique: I haven’t found another language that has a similar kind of feature.
Here’s how it works. Imagine I was writing an adder:
```python
def add(a,b):
    return a + b
```
There’s two kinds of ways we can test it. The first is with unit tests, which everybody’s already used to. In pytest, that’d just be something like `assert add(1,2) == 3`. But we can also write a doctest:
```python
def add(a,b):
    """
    Adds two numbers.

    >>> add(5,6)
    11
    """
    return a + b
```
Then I can run the doctest with `python -m doctest`. It will simulate adding every input in the REPL and confirm it matches the given outputs. They were invented in 1999, well before TDD really took off, so there wasn’t a common convention on how to write unit tests. Doctests experimented with providing a different use-case than unittest did: the test is embedded right in the documentation and directly matches the HCI.
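To make the mechanics concrete, here's a minimal, self-contained sketch (the `results` handling is mine, not from the post) that runs the same check programmatically with the standard-library `doctest` module instead of the command line:

```python
import doctest

def add(a, b):
    """
    Adds two numbers.

    >>> add(5, 6)
    11
    """
    return a + b

if __name__ == "__main__":
    # testmod() replays every >>> example in this module's docstrings
    # as if it were typed at the REPL, then compares the actual output
    # against the expected output written below it.
    results = doctest.testmod()
    print(f"attempted={results.attempted}, failed={results.failed}")
```

Running the file prints `attempted=1, failed=0`; change `return a + b` to `return a - b` and the same run reports the failing example.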
Nobody really uses them anymore. Turns out they make pretty bad tests! Since there’s no corresponding `beforeEach` for doctests (which would arguably miss the whole point), you have to repeat the same setup for every single doctest. Not a big deal when it’s a simple function, but imagine trying to use something like hypothesis in a doctest! Code-tests-code is much more flexible and scalable than documents-test-code. So for error checking, unit tests are heavily used across languages while doctests remain a Python curiosity.
I think this is a huge shame. While doctests are much less efficient than unittests, they do have a unique property. Imagine you’re looking at this code:
```python
def f(a,b):
    """
    I think this adds two numbers but I'm not sure...

    >>> f(1, 2)
    3
    """
    return a - b
```
The doctest fails. So what should you do? Depends on if the unit test fails. If it does, then you have a bug in your function. If it passes, though, you can reasonably assume that it was intended from the start to be a subtraction. That means there’s a problem with your documentation. Unit tests check your code for bugs, while doctests check your guide for bugs!
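Here's a runnable sketch of that diagnosis (the test class and names are mine, not from the post): the same function carries both a doctest and a unit test, and which one fails tells you whether the bug is in the code or in the docs:

```python
import doctest
import unittest

def f(a, b):
    """
    I think this adds two numbers but I'm not sure...

    >>> f(1, 2)
    3
    """
    return a - b

class FTest(unittest.TestCase):
    def test_intended_behaviour(self):
        # The unit test encodes what the function is *supposed* to do.
        # It passes, so f itself is correct: it was meant to subtract.
        self.assertEqual(f(1, 2), -1)

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(FTest)
    unit_result = unittest.TextTestRunner(verbosity=0).run(suite)
    doc_result = doctest.testmod()
    # Unit test passes, doctest fails: the code is fine,
    # but the documented example is stale -- a docs bug.
    print(f"unit ok: {unit_result.wasSuccessful()}, "
          f"doctest failures: {doc_result.failed}")
```

Since the unit test is green and only the doctest example fails, the fix is to correct the docstring (the example should show `-1`), not the code.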
I find that really cool. A lot of people have talked about how hard it is to keep your docs in sync with your code. Some have even advocated that docs are somehow dysfunctional: the only way to know what your code is supposed to do is to look at the tests. It’s at the point where BDD advocates even call their tests ‘specs’, which is kind of like throwing a hammer at a car and calling the dent a blueprint. I can see the temptation, though: tests are constantly running, so they’re theoretically forced to stay in sync. Whether they actually do is another matter, of course, but it’s the dynamic nature, the tests as motion, that makes it an appealing substitute for documentation.
Doctests go the other way. Instead of making the tests serve as documentation, they add a tool to test documentation consistency. It's not very good at it, which is why nobody uses it, but that's not its fault. People have iterated on how to best do unit testing for two decades now, while doctesting is a niche part of a single language.
I’ve seen a lot of work on automatically generating documentation from code. I’d be interested in seeing how people approach it the other way, writing documentation in human language and instead verifying it from the code. Partially because it would allow for clearer documentation, and partially because it seems like nobody’s looked that hard. Finding beauty in the missing places. | https://hillelwayne.com/post/python-doctests/ | CC-MAIN-2018-17 | refinedweb | 648 | 73.27 |
I want to build “pure” client-side ASP.NET AJAX web applications and I want to get the full benefits of a declarative framework. Currently, the ASP.NET AJAX framework does not support a good method of creating declarative client-side controls. In this blog entry, I examine different strategies for implementing declarative client-side controls that target the Microsoft ASP.NET AJAX framework.
I like the ASP.NET framework. Declarative server-side controls are a good thing. I would much rather add a GridView control to a page than script out all the necessary logic for rendering the user interface for sorting and paging a set of items. Likewise for the TreeView, Menu, and all of the other controls in the toolbox. In general, I believe that it is better to declare than to code.
I’ve been writing web applications for a long time. I used to build huge websites using classic Active Server Pages. Active Server Pages did not support declarative controls and the pages were not pretty. You had to swim in a sea of vbscript/jscript script when reading the page. The pages were hard to understand and maintain. The technical name for the code contained in this type of page is spaghetti code.
I like building user interfaces with declarative controls. Unfortunately, the ASP.NET framework currently only supports declarative server-side controls. The ASP.NET framework does not support declarative client-side controls. Since I want to build rich interactive responsive Ajax web applications, I need client-side declarative controls.
I’ve made a list of the features that I want in a declarative client-side Ajax framework:
· Declarative Controls – I want to be able to add a client-side control to a page in exactly the same way as I currently can add a server-side control to a page. You should be able to declare a rich client-side control simply by adding a tag to a page.
· Composite Controls – In some cases, a client-side control should get replaced by a rich set of HTML elements. Think of the ASP.NET Login control. This control renders TextBox, Validation, and Button controls. Some declarative client-side controls need to be composite controls.
· DOM Accessible – You should be able to interact with a client-side control using standard DOM methods such as getElementById(). As we’ll see later in this blog entry, this requirement is more challenging than you might think.
· Templates – Many of the server-side controls in the ASP.NET framework support templates. Templates are huge. Templates enable you to get rid of the majority of your spaghetti code. However, templates pose special issues for a declarative control framework. A template does not represent a set of controls. A template represents a pattern for a repeated set of controls. Templates introduce special issues about control IDs (you can’t repeat the same ID).
· Libraries – Different client-side controls might live in different client-side script libraries. The framework needs to be able to load the right client-script library for the right control. It would be even better if the framework could determine library dependencies and load the right dependencies automatically and non-redundantly.
· Scriptless – In the perfect world, a page would contain nothing but tags. It would be a valid XHTML document (both the source code and the rendered content). It should not contain scripts or even data-binding expressions. A person entirely ignorant of the ASP.NET framework should be able to load it up into an application like Dreamweaver and start modifying it without endangering any of the page logic (you know, the designer should be able to design without breaking your page’s functionality).
· Notepad-able – I don’t trust any code that I can’t write in notepad. I get nervous whenever I hear anyone say that the Visual tools will make some code easy to write (for example, creating entity associations in LINQ to SQL is currently much too hard). One of the things that I really liked about the original ASP.NET framework was that I could do everything in Notepad. That gave me the feeling that I could understand the code.
The list above is a lot to want, but it describes what I want. The rest of this blog entry will describe different strategies for implementing this type of declarative client-side framework.
Before we do anything else, let’s start by creating a really, really simple ASP.NET AJAX client-side control. The code in Listing 1 defines a simple client-side control named the MyControl control. The MyControl control renders the string “blah”.
Listing 1 – MyControl.js
/// <reference name="MicrosoftAjax.js"/>

Type.registerNamespace("Code");

Code.MyControl = function(element) {
    Code.MyControl.initializeBase(this, [element]);
    element.innerHTML = "blah";
}

Code.MyControl.prototype = {
    initialize: function() {
        Code.MyControl.callBaseMethod(this, 'initialize');
        // Add custom initialization here
    },

    dispose: function() {
        // Add custom dispose actions here
        Code.MyControl.callBaseMethod(this, 'dispose');
    }
}

Code.MyControl.registerClass('Code.MyControl', Sys.UI.Control);

if (typeof(Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();
If the code in Listing 1 does not make sense to you, don’t worry. The code defines a very minimal client-side control. So, let’s look at different ways that we can add this control to a page.
You create an ASP.NET AJAX control by calling the client-side $create() method. The $create() method can be used to convert a normal HTML element in a page into a client-side Ajax control.
For example, the page in Listing 2 instantiates the MyControl control that we created in the previous section:
Listing 2 – Way1.aspx
<%@ Page Language="C#" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head id="Head1" runat="server">
    <title>Way 1</title>
    <script type="text/javascript">
    function pageLoad()
    {
        $create(Code.MyControl, null, null, null, $get("ctl"));
    }
    </script>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:ScriptManager runat="server">
            <Scripts>
                <asp:ScriptReference Path="~/MyControl.js" />
            </Scripts>
        </asp:ScriptManager>
        <div id="ctl"></div>
    </div>
    </form>
</body>
</html>
There are two things to notice about this page. First, notice that the MyControl.js JavaScript file is imported with the server-side ScriptManager control. The ScriptReference loads the MyControl.js file.
Second, notice the page includes a JavaScript pageLoad() function. This function gets called by the ASP.NET AJAX framework automatically after all of the scripts are loaded and the DOM has been parsed. The call to the $create() method is contained in this pageLoad method.
The $create() method converts a normal, unassuming HTML element into a client-side ASP.NET AJAX control. The script above converts a DIV tag declared in the body of the page into an ASP.NET AJAX control.
The $create() method accepts the following set of parameters:
· Type – Indicates the type of control to create. For example, Code.MyControl
· Properties – Indicates a list of initial property values for the control. You pass this list as a JavaScript object literal.
· Events – Indicates a list of initial event handlers for the control. You pass this list as a JavaScript object literal.
· References – Indicates a list of references to other controls. You pass this list as a JavaScript object literal.
· Element – Indicates the DOM element where the control will be initialized.
I almost never use any of the parameters except for Type, Properties, and Element.
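To make the parameter order concrete, here is a minimal stand-in for $create() — createSketch is an invented name, and this is only an illustration of the calling shape, not the real MicrosoftAjax.js implementation:

```javascript
// Hypothetical stand-in for $create(), showing how the five parameters
// (type, properties, events, references, element) map onto a control.
function createSketch(type, properties, events, references, element) {
  var control = { element: element };
  // Properties arrive as a JavaScript object literal and are copied
  // onto the new control instance.
  for (var name in properties) { control[name] = properties[name]; }
  // Event handlers also arrive as an object literal of callbacks.
  control.handlers = events || {};
  return control;
}

// Typical call shape: type, {properties}, {events}, {references}, element.
// The last argument stands in for $get("ctl").
var ctl = createSketch(
  "Code.MyControl",
  { text: "blah" },
  { click: function () {} },
  null,
  { id: "ctl" }
);
```

The object-literal style is why you can omit any parameter you don't need by passing null, as the examples in this article do.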
I call this method of instantiating an AJAX control the Windows Forms approach. I call it the Windows Forms approach since it reminds me of the way that you must code the instantiation of all of the controls in a Windows Forms form.
Nobody writes Windows Forms applications in notepad because this method of instantiating controls is just too awkward and time consuming. Obviously, this is not a good way to create AJAX controls. This approach is a non-declarative approach. It does not work well in the case of pages with a lot of controls.
The ASP.NET framework currently supports one method of creating declarative client-side controls. You can create a declarative client-side control by launching it from a declarative server-side ASP.NET control.
To make this easy, the ASP.NET 3.5 framework includes a base control class for this very purpose: the ScriptControl. The code in Listing 3 defines a server-side control named MyControl that launches the client-side control also named MyControl.
Listing 3 – MyControl.cs
using System;
using System.Web;
using System.Web.UI;
using System.Collections.Generic;

namespace Code
{
    public class MyControl : ScriptControl
    {
        protected override HtmlTextWriterTag TagKey
        {
            get
            {
                return HtmlTextWriterTag.Div;
            }
        }

        protected override IEnumerable<ScriptDescriptor> GetScriptDescriptors()
        {
            ScriptDescriptor descriptor = new ScriptControlDescriptor("Code.MyControl", this.ClientID);
            return new List<ScriptDescriptor>() { descriptor };
        }

        protected override IEnumerable<ScriptReference> GetScriptReferences()
        {
            ScriptReference refer = new ScriptReference("~/MyControl.js");
            return new List<ScriptReference>() { refer };
        }
    }
}
The server-side control defined in Listing 3 has one property named TagKey and two methods named GetScriptDescriptors() and GetScriptReferences. The TagKey property determines the tag that the server-side control renders to the browser. In the code above, the control renders a DIV tag.
The GetScriptDescriptors() method is responsible for generating the $create() method call that creates the client-side control. The control in Listing 3 creates a client-side control named MyControl. The control generates the following $create statement which is executed on the client:
$create(Code.MyControl, null, null, null, $get("ctl"));
Finally, the GetScriptReferences() method enables you to add the JavaScript library that contains the definition for the client-side control to the page. In Listing 3, the GetScriptReferences() method is used to add a reference to the MyControl.js JavaScript library that contains the definition of the client-side MyControl control.
After you create the server-side script control, you can use the control in a page just like any other ASP.NET control. The page in Listing 4 takes advantage of the server-side MyControl control to launch the client-side MyControl control.
Listing 4 – Way2.aspx
<%@ Page Language="C#" %>
<%@ Register TagPrefix="ajax" Namespace="Code" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Way 2</title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <asp:ScriptManager runat="server" />
        <ajax:MyControl ID="ctl" runat="server" />
    </div>
    </form>
</body>
</html>
The advantage of this approach is that it integrates very well with the existing ASP.NET framework. For example, the very popular AJAX Control Toolkit uses this approach (more or less) for all of the toolkit controls such as the DragPanel and AutoComplete controls. This is a very good approach to take when you want to incrementally add Ajax functionality to an existing server-side ASP.NET application.
However, I’m interested in building “pure” Ajax applications with the Microsoft ASP.NET framework. If you want to abandon all of the baggage of server-side ASP.NET -- such as view state, the postback model, and the page and control event model -- then you won’t want to use heavy weight server-side controls as a launch mechanism for your client-side controls.
In particular, the ScriptControl base control requires you to add a ScriptManager control to a page. The ScriptManager control, in turn, requires you to add a server-side form control to a page. Therefore, as soon as you start using a ScriptControl to launch your client-side controls, you’ve already committed yourself to the web form model.
Furthermore, it is important to realize that a ScriptControl derives from the base WebControl class. The base WebControl class has a rich set of events, methods, and properties tied to the server-side Web Forms page execution model. A ScriptControl participates in the normal page execution lifecycle.
When building a “pure” Ajax application, this is the very stuff that I want to go away:
· Postbacks – The whole point of an Ajax application is to avoid submitting entire pages back to the server. From the perspective of an Ajax developer, posting a page back to the server is just an opportunity to create a bad user experience. Imagine, for the moment, freezing a desktop application and making the screen shake whenever a user performed any action; that would be crazy. The holy grail of a “pure” Ajax application is the single page application.
· View State – View state is great for a server-side ASP.NET web application. However, view state is a nightmare when you are building a client-side application. Keeping view state synchronized between the client and server is a huge pain. In any case, there is no point in maintaining view state when you are not performing postbacks (see previous bullet point). Since view state must be pushed back and forth across the wire in a hidden form field, view state just hurts performance.
· Page and Control Execution Lifecycle – ASP.NET provides developers with a rich server-side page execution lifecycle which is completely useless in the case of a "pure" client-side application. Who cares about Init, Load, PreRender, and Unload events when these events happen on the server-side? Just render my client-side code please.
Now, I want to emphasize that the features listed in the bullet points above are what makes ASP.NET such a fantastic framework for building server-side web applications. Be that as it may, I want my fast Ajax fighter jet of a client-side application.
The .NET and ASP.NET frameworks provide many useful features that can be used in a pure Ajax web application. For example, I want to take advantage of ASP.NET services such as the authentication, role, and profile services. I also want to take advantage of .NET framework features such as LINQ to SQL. I just don’t want to be forced to adopt a heavy-weight server-side page model just to take advantage of these useful framework features.
If server-side ASP.NET controls are too heavy to launch client-side Ajax controls, then why not build lightweight server-side controls? Why not create server-side controls that don’t assume view state, the postback model, or a server-side page execution lifecycle? This is the approach to creating declarative client-side controls that we will consider in this section.
When I started to investigate this approach, I quickly realized that heavyweight web controls are baked very deeply into the ASP.NET framework. Every control used in an ASP.NET page must derive from the base Control class (System.Web.UI.Control class). The base Control class already assumes a heavy-weight server-side page model. Therefore, if you want to create lightweight controls, you must also abandon ASP.NET pages.
One of the nice features of the ASP.NET framework is that it was designed to be very flexible. If you want to abandon ASP.NET pages as they currently exist, then you can do this. You simply need to remap ASP.NET pages to a new HTTP Handler.
I want my handler to do something really simple. I want it to parse a page and generate $create() method calls for any page elements that do not inhabit the XHTML namespace. My HTTP Handler is contained in Listing 5:
Listing 5 – AjaxPageHandler.cs
using System;
using System.Web;
using System.Xml;
using System.Text;

public class AjaxPageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // load page
        string pagePath = context.Request.PhysicalPath;
        XmlDocument doc = new XmlDocument();
        try
        {
            doc.Load(pagePath);
        }
        catch
        {
            throw new Exception("Could not load " + context.Request.Path);
        }

        // find Ajax elements (anything outside the XHTML namespace)
        XmlNamespaceManager nsManager = new XmlNamespaceManager(doc.NameTable);
        nsManager.AddNamespace("default", "http://www.w3.org/1999/xhtml");
        XmlNodeList controls = doc.SelectNodes(@"//*[namespace-uri(.) != 'http://www.w3.org/1999/xhtml']", nsManager);
        if (controls.Count > 0)
        {
            // Add Microsoft AJAX Library script reference
            AddScriptTag(context, doc, "/Microsoft/MicrosoftAjax.js");

            // Add libraries
            AddLibraries(context, doc);

            // build $create method calls
            AddCreates(doc, controls);
        }

        // Render XML doc
        doc.Save(context.Response.Output);
    }

    private void AddLibraries(HttpContext context, XmlDocument doc)
    {
        foreach (XmlAttribute att in doc.DocumentElement.Attributes)
        {
            if (att.Name.StartsWith("xmlns:"))
            {
                AddScriptTag(context, doc, att.Value);
            }
        }
    }

    private void AddScriptTag(HttpContext context, XmlDocument doc, string path)
    {
        XmlElement script = doc.CreateElement("script", "http://www.w3.org/1999/xhtml");
        script.SetAttribute("type", "text/javascript");
        script.SetAttribute("src", context.Request.ApplicationPath + path);
        script.InnerText = ""; // don't create minimal element
        doc.DocumentElement.AppendChild(script);
    }

    private void AddCreates(XmlDocument doc, XmlNodeList controls)
    {
        // Build $create method calls
        StringBuilder sb = new StringBuilder();
        sb.AppendLine();
        sb.AppendLine(@"//<![CDATA[");
        sb.AppendLine("Sys.Application.initialize();");
        sb.AppendLine("Sys.Application.add_init(function() {");
        foreach (XmlElement el in controls)
        {
            if (!el.HasAttribute("id"))
                throw new Exception("Element " + el.Name + " missing id");
            sb.AppendFormat("$create({0}.{1},null,null,null,$get('{2}'));\n", el.Prefix, el.LocalName, el.GetAttribute("id"));
        }
        sb.AppendLine("});");
        sb.AppendLine("//]]>");

        // Add script element
        XmlElement script = doc.CreateElement("script", "http://www.w3.org/1999/xhtml");
        script.SetAttribute("type", "text/javascript");
        script.InnerXml = sb.ToString();
        doc.DocumentElement.AppendChild(script);
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
The HTTP handler in listing 5 actually does 3 things:
· It adds a reference to the Microsoft AJAX Library. The assumption is that this library is located at the path /Microsoft/MicrosoftAjax.js. You can download the standalone Microsoft AJAX Library from the ASP.NET AJAX website.
· It adds a reference to each library needed by the client-side controls.
· It calls the $create() method for each client-side control.
You can apply the HTTP Handler to any aspx page in a folder named AjaxPages with the web configuration file in Listing 6 by adding this configuration file to the AjaxPages folder.
Listing 6 – /AjaxPages/Web.config
<?xml version="1.0"?>
<configuration>
    <system.web>
        <httpHandlers>
            <remove verb="*" path="*.aspx"/>
            <add verb="*" path="*.aspx" type="AjaxPageHandler"/>
        </httpHandlers>
    </system.web>
</configuration>
Finally, Listing 7 contains a page that contains two declarations for client-side controls. The page contains the declaration for a control named MyControl and a control named Boom.
Listing 7 – /AjaxPages/Page.aspx
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:Code="MyControl.js" xmlns:Ajax="Boom.js">
<head runat="server">
    <title>Ajax Page</title>
</head>
<body>
    <div>
        <Code:MyControl id="ctl"></Code:MyControl>
        <Ajax:Boom id="boo1"></Ajax:Boom>
    </div>
</body>
</html>
The page in Listing 7 is a valid XHTML page. Notice that the page's document element contains three xmlns attributes. It has xmlns attributes for the following namespaces: http://www.w3.org/1999/xhtml, MyControl.js, and Boom.js. The last two namespaces do double duty as paths to client control libraries.
Notice, furthermore, that the page contains the declarations for two client-side controls named MyControl and Boom. The declarations use the namespace prefixes from the xmlns attributes.
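To see how the xmlns-attributes-as-library-paths trick works, here is the handler's AddLibraries logic translated into a small JavaScript sketch (the blog's handler does this in C#; collectLibraryPaths is an invented name, and the mock attribute list below stands in for the real document element):

```javascript
// Every xmlns:* attribute on the document element is treated as the
// path to a client-side control library. The default xmlns attribute
// (no prefix) is the XHTML namespace and is skipped.
function collectLibraryPaths(attributes, applicationPath) {
  var paths = [];
  for (var i = 0; i < attributes.length; i++) {
    var att = attributes[i];
    if (att.name.indexOf("xmlns:") === 0) {
      paths.push(applicationPath + "/" + att.value);
    }
  }
  return paths;
}

// Mock of the xmlns attributes declared on the Listing 7 document element.
var atts = [
  { name: "xmlns", value: "http://www.w3.org/1999/xhtml" },
  { name: "xmlns:Code", value: "MyControl.js" },
  { name: "xmlns:Ajax", value: "Boom.js" }
];
var libs = collectLibraryPaths(atts, "/Code");
```

This yields the two script paths, /Code/MyControl.js and /Code/Boom.js, that you can see referenced in the rendered output in Listing 8.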
When you request the page, the AjaxPageHandler HTTP Handler executes and parses the page. The rendered page in Listing 8 gets sent to the browser.
Listing 8 – Page.aspx (rendered content)
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"[]>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:Code="MyControl.js" xmlns:Ajax="Boom.js">
<head runat="server">
    <title>Ajax Page</title>
</head>
<body>
    <div>
        <Code:MyControl id="ctl">
        </Code:MyControl>
        <Ajax:Boom id="boo1">
        </Ajax:Boom>
    </div>
</body>
<script type="text/javascript" src="/Code/Microsoft/MicrosoftAjax.js">
</script>
<script type="text/javascript" src="/Code/MyControl.js">
</script>
<script type="text/javascript" src="/Code/Boom.js">
</script>
<script type="text/javascript">
//<![CDATA[
Sys.Application.initialize();
Sys.Application.add_init(function() {
$create(Code.MyControl,null,null,null,$get('ctl'));
$create(Ajax.Boom,null,null,null,$get('boo1'));
});
//]]>
</script>
</html>
This seems like a great approach to solving the problem of declaring client-side controls. This solution appears to enable us to build “pure” client-side Ajax applications. Our lightweight pages do not shackle us to view state, the postback model, or the server-side page execution lifecycle.
In fact, when I first started writing this blog entry (days ago), I thought that this solution would be the best solution. However, the devil is in the details. There are some real problems with this solution that you encounter as soon as your declarative controls get more complicated.
You quickly encounter problems with the solution when working with Firefox. The problem is that Firefox interprets all of the custom control declarations using its tag soup processor. This processor mangles the tags in weird ways. You might have noticed that I declared the two tags in Listing 7 using explicit opening and closing tags:
<Code:MyControl id="ctl"></Code:MyControl>
<Ajax:Boom id="boo1"></Ajax:Boom>
If, instead, I had declared the tags using self-closing tags, then the page would not have been interpreted correctly by Firefox:
<Code:MyControl id="ctl" />
<Ajax:Boom id="boo1" />
The Firefox tag soup processor interprets the above tag declarations like this:
<Code:MyControl id="ctl">
    <Ajax:Boom id="boo1"></Ajax:Boom>
</Code:MyControl>
Notice that the second tag has gotten gobbled up by the first tag. When you have a series of custom client-side controls in a row, they all get swallowed up by the first custom tag in the series.
This problem might seem minor, but it gets more acute when you start adding multiple client-side tags to a page. Eventually, I want to be able to declare client-side controls that contain templates that contain multiple client-side controls. This will never happen if Firefox always insists on re-arranging all of my tags.
There is a way to fix this problem when working with Firefox. You need to serve your pages as XHTML pages (using the "application/xhtml+xml" MIME type) instead of the normal text/html MIME type. When a page is served to Firefox using the "application/xhtml+xml" MIME type, Firefox does not use its normal tag soup processor. Instead, it uses its stricter XML parser to parse the page. No more garbling the page.
You can serve pages to Firefox using the "application/xhtml+xml" MIME type by using the Global.asax file in Listing 9:
Listing 9 – Global.asax
<%@ Application Language="C#" %>

<script runat="server">
void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    HttpContext context = ((HttpApplication)sender).Context;
    if (context.Request.Path.ToLower().EndsWith(".aspx"))
    {
        if (Array.IndexOf(context.Request.AcceptTypes, "application/xhtml+xml") != -1)
        {
            context.Response.ContentType = "application/xhtml+xml";
        }
    }
}
</script>
Notice that the Global.asax file in Listing 9 only serves a page as XHTML when the browser accepts it. Microsoft Internet Explorer cannot handle XHTML pages – so don’t attempt to serve an XHTML page to this browser.
Unfortunately, this solution introduces yet another problem. When a page is served as "application/xhtml+xml", you can no longer use the DOM getElementById() method with custom tags. Because the Microsoft AJAX Library $get() method is a shorthand for the getElementById() method, and we pass the result of $get() as the last parameter of the $create() method call, this means that our $create() method calls will no longer work. Drats!
So, what do we do now? If we want to be able to use getElementById() with our declarative client-side controls, then we must replace the custom tags with standard XHTML tags. For example, we can replace any custom client-side tags, such as the Code:MyControl tag, with a standard XHTML DIV tag. If we perform this replacement, then getElementById() works again.
We could modify our AjaxPageHandler HTTP Handler to perform this replacement on the server-side for us. However, at this point, I’m going to abandon the server-side approach to parsing declarative controls and turn to a client-side approach.
The final solution for parsing client-side declarative controls that I want to examine in this blog entry is the client-side approach. When following this strategy, you perform all of your control parsing within your client-side JavaScript code.
The basic idea is that you perform a DOM walk. You walk through all of the elements in a page and you execute a call to the $create() method whenever you encounter any custom tags. You can use the JavaScript library in Listing 10 to perform a DOM walk.
Listing 10 – DomWalk.js
Sys.Application.add_init( appInit );

function appInit()
{
    // Find Ajax controls
    var controls = [];
    var els = document.getElementsByTagName("*");
    var el;
    for (var i=0; i < els.length; i++)
    {
        el = els[i];
        if (isControl(el))
            controls.push( el );
    }

    // Create controls
    for (var k=0; k < controls.length; k++)
    {
        el = controls[k];
        $create(Type.parse( getControlType(el) ), null, null, null, el);
    }
}

function isControl(el)
{
    if (el.tagUrn)
        return true;
    if (el.namespaceURI && el.namespaceURI != "http://www.w3.org/1999/xhtml")
        return true;
    return false;
}

function getControlType(el)
{
    return (el.tagUrn || el.namespaceURI) + "." + (el.localName || el.tagName);
}

Sys.Application.notifyScriptLoaded();
The code in Listing 10 marches through all of the elements in a page searching for elements that are not part of the default XHTML namespace. Next, the code calls the $create() method for each of non-XHTML elements in order to instantiate a client-side control. The page in Listing 11 uses the DomWalk.js JavaScript library.
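The namespace test at the heart of the DOM walk can be exercised in isolation, outside the browser, by running the two helper functions from Listing 10 against mock elements (tagUrn is the Internet Explorer property; namespaceURI and localName are what Firefox exposes):

```javascript
// isControl: an element is an Ajax control if it lives outside the
// XHTML namespace (IE reports this via tagUrn, Firefox via namespaceURI).
function isControl(el) {
  if (el.tagUrn) return true;
  if (el.namespaceURI && el.namespaceURI !== "http://www.w3.org/1999/xhtml")
    return true;
  return false;
}

// getControlType: namespace + "." + tag name yields the client class name,
// e.g. xmlns:ajax="Code" plus <ajax:SuperControl> gives "Code.SuperControl".
function getControlType(el) {
  return (el.tagUrn || el.namespaceURI) + "." + (el.localName || el.tagName);
}

// Mock elements standing in for real DOM nodes.
var div = { namespaceURI: "http://www.w3.org/1999/xhtml", localName: "div" };
var ctl = { namespaceURI: "Code", localName: "SuperControl" };
```

A plain XHTML div is skipped, while the custom element resolves to the type name that Type.parse() then turns into the control's constructor.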
Listing 11 – Way3.aspx
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:ajax="Code">
<head runat="server">
    <title>Way 3</title>
    <script type="text/javascript" src="Microsoft/MicrosoftAjax.js"></script>
    <script type="text/javascript" src="SuperControl.js"></script>
    <script type="text/javascript" src="DomWalk.js"></script>
</head>
<body>
    <div>
        <ajax:SuperControl id="ctl"></ajax:SuperControl>
    </div>
</body>
</html>
There are three things that you should notice about the page in Listing 11. First, notice that the HTML document element includes an xmlns attribute that has the name ajax and the value Code. Next, notice that each of the required script libraries -- MicrosoftAjax.js, SuperControl.js, and DomWalk.js-- are manually imported with <script> tags. Finally, notice that the body of the page contains the declaration for a single control named the SuperControl.
In order for the page in Listing 11 to work correctly, it must be served as an XHTML document to Firefox. In other words, you need the Global.asax file contained in Listing 9. One consequence of this requirement is that the page must be an aspx page instead of an html page (otherwise, the Global.asax file won’t execute when you request the page).
The SuperControl is contained in Listing 12.
Listing 12 – SuperControl.js
Type.registerNamespace("Code");

Code.SuperControl = function(element) {

    // create replacement element
    this.originalElement = element;
    element = document.createElement("div");
    element.id = this.originalElement.getAttribute("id");
    this.originalElement.parentNode.replaceChild(element, this.originalElement);

    Code.SuperControl.initializeBase(this, [element]);

    // add inner HTML
    element.innerHTML = "Super!";
}

Code.SuperControl.prototype = {
    initialize: function() {
        Code.SuperControl.callBaseMethod(this, 'initialize');
        // Add custom initialization here
    },

    dispose: function() {
        // Add custom dispose actions here
        Code.SuperControl.callBaseMethod(this, 'dispose');
    }
}

Code.SuperControl.registerClass('Code.SuperControl', Sys.UI.Control);

if (typeof(Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();
Notice that the constructor for the SuperControl includes logic to replace the original element (the ajax:SuperControl element) with a new element (a DIV element). There are two reasons why you need to do this. First, you cannot use document.getElementById() with a custom element like the ajax:SuperControl element. Second, innerHTML does not work with custom elements. If you want your custom controls to work like normal XHTML elements, then you need to replace them with normal XHTML elements.
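The replacement trick can be sketched with plain objects standing in for real DOM nodes — replaceWithDiv is an invented helper name, and the mock parent node below only imitates the one DOM method the trick needs:

```javascript
// Swap a custom element for a standard div that inherits its id, so that
// getElementById() and innerHTML work on the result afterwards.
function replaceWithDiv(original) {
  var replacement = { tagName: "DIV", id: original.id, innerHTML: "" };
  original.parentNode.replaceChild(replacement, original);
  return replacement;
}

// Minimal mock of a parent node holding one child.
var parent = {
  children: [],
  replaceChild: function (newChild, oldChild) {
    this.children[this.children.indexOf(oldChild)] = newChild;
  }
};
var custom = { tagName: "ajax:SuperControl", id: "ctl", parentNode: parent };
parent.children.push(custom);

// After the swap, only the standard div remains in the tree.
var div = replaceWithDiv(custom);
div.innerHTML = "Super!";
```

In a real control, the replacement happens in the constructor (as in Listing 12), so by the time the control is initialized the page contains only standard XHTML elements.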
So which approach is the best approach for parsing client-side Ajax controls? I’m leaning on the side of the client-side approach described in the last section. I have two reasons for preferring the client-side approach over the lightweight server-side approach.
First, I know that I need to replace the custom elements that represent Ajax controls in a page with standard XHTML elements. Otherwise, I can’t interact with the controls as normal elements (getElementById() and innerHTML don’t work). I could perform this replacement on the server-side. However, I might want to dynamically inject controls into a page from JavaScript code. In that case, it would make more sense to parse the elements on the client-side.
Imagine, for example, a client-side Login control. This is an example of a composite control. I would want to replace the ajax:Login element with a set of normal XHTML elements like DIV and INPUT elements. I don’t want to do these replacements in server-side code since I might want to add a Login control to a page from JavaScript code dynamically.
Second, I don’t want to perform a getElementById() call for each element that I want to instantiate with the $create() method since the getElementById() method is notoriously slow. However, if I build my $create() method calls on the server-side, I don’t know of any way of avoiding calling getElementById() for each control. When rendering from the server, I am forced to represent an element with its Id string and not with the element itself.
If, on the other hand, I perform the parsing on the client-side, then there are many ways of avoiding getElementById(): I can use getElementsByTagName(), I can use XPATH, I can perform an XSLT transformation, and so on. I don't know a priori which method is faster, but it seems like I have much more flexibility.
With that amount of efforts you can develop your own Ajax framework, from ground up just using ASP.NET Callbacks.
By the way, "declarative" is not always good in my opinion, "visual" is.
Pingback from webcontrol
"We could modify our AjaxPageHandler HTTP Handler to perform this replacement on the server-side for us. However, at this point, I’m going to abandon the server-side approach to parsing declarative controls and turn to a client-side approach."
I was with you until that step. To me this seems like the point to throw in the towel and abandon custom client tags. Allow the server to render pure XHTML and call it a day.
Sure, you lose strict getElementById functionality for your controls. But that's ok, IMHO. The client only speaks XHTML and you can't really change that. After all, this is your presentation layer. Every graphical application must eventually bow to a presentation layer at some point. You can still do rich client side manipulation of your controls if you build client side datastructures to manage the state of each control. You could even keep your own DOM if you want to - much like the Control heirarchy in ASP.NET. In other words you just need a smarter controller. If you want to insantiate your controls from the client, have the controler callback to a standard creation mechanism on the server that returns pure XHMTL for the control in question.
Great work though. I think this general direction in good and will become the prefered way to develop web applications.
Pingback from xslt namespace
Pingback from http my att net
Pingback from firefox opening web pages to slow
Pingback from 125 amp wire size
Pingback from paging file missing
Pingback from doc type declarations | http://weblogs.asp.net/stephenwalther/archive/2008/03/03/declaring-client-side-asp-net-ajax-controls-part-i.aspx | crawl-002 | refinedweb | 5,411 | 58.69 |
Created on 2010-08-05 18:21 by valgog, last changed 2010-08-05 19:02 by brian.curtin.
When executing the following code on Windows 7 64-bit ::
import sys
import signal
import time
print 'Version:'
print sys.executable or sys.platform, sys.version
print
print
def h(s, f): print s
signal.signal(signal.CTRL_BREAK_EVENT, h)
we get the following output::
Version:
C:\Python27\python.exe 2.7 (r27:82525, Jul 4 2010, 07:43:08) [MSC v.1500 64 bit (AMD64)]
Traceback (most recent call last):
File "signal_ctrl_break_event.py", line 14, in <module>
signal.signal(signal.CTRL_BREAK_EVENT, h)
RuntimeError: (0, 'Error')
When trying to register a handler for a signal.CTRL_C_EVENT the exception is as follows::
File "signal_ctrl_c_event.py", line 6, in <module>
signal.signal(signal.CTRL_C_EVENT, h)
ValueError: signal number out of range
Those two signals are only intended to work with os.kill -- they are specific to the GenerateConsoleCtrlEvent function in Modules/posixmodule.c. I'll have to change the documentation to note that.
If you want to send those events to other processes, have a look at os.kill and some example usage in Lib/test/test_os.py and Lib/test/win_console_handler.py. This also needs better documentation.
Fixed the first part, denoting that signal.CTRL_C_EVENT and signal.CTRL_BREAK_EVENT are for os.kill only. Done in r83745 (py3k) and r83746 (release27-maint).
Leaving open for the second part about their usage. | http://bugs.python.org/issue9524 | CC-MAIN-2014-10 | refinedweb | 237 | 63.05 |
react-native-gallery-swiper
An easy and simple to use React Native component to render an image gallery with common gestures like pan, pinch and double tap. Supporting both iOS and Android.
Improved and changed on top of
react-native-image-gallery.
Install
Type in the following to the command line to install the dependency.
$ npm install --save react-native-gallery-swiper
or
$ yarn add react-native-gallery-swiper
Usage Example
Add an
import to the top of the file. At minimal, declare the
GallerySwiper component in the
render() method providing an array of data for the
images prop.
import GallerySwiper from "react-native-gallery-swiper"; //... render() { return ( <GallerySwiper style={{ flex: 1, backgroundColor: "black" }} images={[ { source: require("yourApp/image.png"), dimensions: { width: 1080, height: 1920 } }, { uri: "", dimensions: { width: 1080, height: 1920 } }, { uri: ""}, { uri: ""}, { uri: ""}, { uri: ""}, { uri: ""}, ]} /> ); } //...
API
<GallerySwiper /> component accepts the following props...
Props
Scroll state and events
onPageScroll: (event) => {}.
The event object carries: :
1. Clone the Repo
Clone
react-native-gallery-swiper locally. In a terminal, run:
$ git clone react-native-gallery-swiper
2. Install and Run
$ cd react-native-gallery-swiper/example/
iOS - Mac - Install & Run
1. check out the code 2. npm install 3. npm run ios
Android - Mac - Install & Run
1. check out the code 2. npm install 3. emulator running in separate terminal 4. npm run android | https://reactnativeexample.com/an-easy-and-simple-to-use-react-native-component-to-render-an-image-gallery/ | CC-MAIN-2020-50 | refinedweb | 222 | 62.54 |
01 May 2012 16:44 [Source: ICIS news]
LONDON (ICIS)--Petkim's exports rose 50% in value year on year to $288m (€219m) during the first quarter of 2012, ?xml:namespace>
The figure showed the company was on course to break the billion-dollar barrier for exports during this year, having achieved an export value of $834m last year, it added.
The target should be attainable despite the renewed economic downturn in some parts of
The State Oil Company of Azerbaijan (SOCAR), which in April bought a 10.32% stake in Petkim to bring its controlling stake in the producer to 61.42%, has set out to transform the company with the construction of a $10bn petrochemical “super site” on a peninsula in Aliaga, near Izmir on western Turkey’s Aegean coast.
Modelled on the Jurong Island industrial zone in Singapore, which includes a chemical manufacturing cluster, it should by 2023 more than triple Petkim's petrochemical production capacity to 10m tonnes/year and boost the company's share of the Turkish petrochemical market from its current 25% to 40%, Petkim said.
However, Petkim hopes that a $400m container port, to be built as part of the complex, will also allow the company to use the new capacity to make major inroads into new export | http://www.icis.com/Articles/2012/05/01/9555453/Turkeys-Petkim-targeting-billion-dollar-export-barrier.html | CC-MAIN-2013-48 | refinedweb | 215 | 50.4 |
URL: <> Summary: PPP log messages Project: lwIP - A Lightweight TCP/IP stack Submitted by: marcoe Submitted on: Thu 04 Jan 2018 03:27:27 PM UTC Category: PPP Should Start On: Thu 04 Jan 2018 12:00:00 AM UTC Should be Finished on: Thu 04 Jan 2018 12:00:00 AM UTC Priority: 3 - Low Status: None Privacy: Public Percent Complete: 0% Assigned to: None Open/Closed: Open Discussion Lock: Any Planned Release: None Effort: 1.00 _______________________________________________________ Details: Hello all, I've noticed that also with PPP_DEBUG = 0 some kiloBytes of flash memory are occupied by ppp_vslprintf() code and by many PPP debug strings. In fact, debugging messages are not consistent across PPP code because: - in some points log messages are correctly produced using macros like LCPDEBUG(a) defined in pppdebug.h - in other points there are direct calls to ppp_warn(), ppp_notice(), ... - in other points the calls to ppp_warn(), ppp_notice(), ... are inside a #if PPP_DEBUG ... #endif block I think it would be nice to have the possibility to completely remove these log messages and related helper functions in a production environment as well as we can do it for all the other debug messages inside lwIP. Bye _______________________________________________________ Reply to this item at: <> _______________________________________________ Message sent via/by Savannah | http://lists.gnu.org/archive/html/lwip-devel/2018-01/msg00080.html | CC-MAIN-2019-09 | refinedweb | 211 | 54.76 |
IL2CPP Optimizations: Devirtualization
The scripting virtual machine team at Unity is always looking for ways to make your code run faster. This is the first post in a three-part miniseries about a few micro-optimizations performed by the IL2CPP AOT compiler, and how you can take advantage of them. While nothing here will make code run two or three times as fast, these small optimizations can help in important parts of a game, and we hope they give you some insight into how your code is executing.
Modern compilers are excellent at performing many optimizations to improve run time code performance. As developers, we can often help our compilers by making information we know about the code explicit to the compiler. Today we’ll explore one micro-optimization for IL2CPP in some detail, and see how it might improve the performance of your existing code.
Devirtualization
There is no other way to say it: virtual method calls are always more expensive than direct method calls. We’ve been working on some performance improvements in the libil2cpp runtime library to cut back the overhead of virtual method calls (more on this in the next post), but they still require a runtime lookup of some sort. The compiler cannot know which method will be called at run time – or can it?
Devirtualization is a common compiler optimization tactic which changes a virtual method call into a direct method call. A compiler might apply this tactic when it can prove exactly which actual method will be called at compile time. Unfortunately, this fact can often be difficult to prove, as the compiler does not always see the entire code base. But when it is possible, it can make virtual method calls much faster.
The canonical example
As a young developer, I learned about virtual methods with a rather contrived animal example. This code might be familiar to you as well:
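The code listing itself did not survive extraction; a minimal reconstruction of the classes the post describes (any member names beyond `Speak` are an assumption) would be:

```csharp
public abstract class Animal
{
    public abstract string Speak();
}

public class Cow : Animal
{
    public override string Speak()
    {
        return "Moo";
    }
}

public class Pig : Animal
{
    public override string Speak()
    {
        return "Oink";
    }
}
```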
Then in Unity (version 5.3.5) we can use these classes to make a small farm:
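The farm listing is also missing. Based on the discussion that follows (a foreach loop over a list of animals, plus a second `Debug.LogFormat` call on a `Cow`-typed variable), it was presumably something like:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class Farm : MonoBehaviour
{
    void Start()
    {
        var animals = new List<Animal> { new Cow(), new Pig() };
        foreach (var animal in animals)
        {
            // The static type here is Animal, so this is a virtual call.
            Debug.LogFormat("Animal says: {0}", animal.Speak());
        }

        // The static type here is Cow, which looks like a direct call.
        Cow cow = new Cow();
        Debug.LogFormat("Cow says: {0}", cow.Speak());
    }
}
```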
Here each call to Speak is a virtual method call. Let’s see if we can convince IL2CPP to devirtualize any of these method calls to improve their performance.
Generated C++ code isn’t too bad
One of the features of IL2CPP I like is that it generates C++ code instead of assembly code. Sure, this code doesn’t look like C++ code you would write by hand, but it is much easier to understand than assembly. Let’s see the generated code for the body of that foreach loop:
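The generated listing is missing from this copy of the post. Based on the shape of IL2CPP output from this era, the virtual call in the loop body looked roughly like this — the identifiers, string literal names, and the vtable slot number are illustrative, not verbatim:

```cpp
// Illustrative sketch of IL2CPP-generated C++, not verbatim output:
Animal_t* L_6 = V_2;   // the current Animal from the foreach loop
// Look up the vtable slot for System.String Animal::Speak() and call it.
String_t* L_7 = VirtFuncInvoker0< String_t* >::Invoke(
    4 /* System.String Animal::Speak() */, L_6);
Debug_LogFormat(NULL /*static*/, _stringLiteral1, L_7, /*hidden argument*/NULL);
```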
I’ve removed a bit of the generated code to simplify things. See that ugly call to Invoke? It is going to look up the proper virtual method in the vtable and then call it. This vtable lookup will be slower than a direct function call, but that is understandable. The Animal could be a Cow or a Pig, or some other derived type.
Let’s look at the generated code for the second call to Debug.LogFormat, which is more like a direct method call:
Even in this case we are still making the virtual method call! IL2CPP is pretty conservative with optimizations, preferring to ensure correctness in most cases. Since it does not do enough whole-program analysis to be sure that this can be a direct call, it opts for the safer (and slower) virtual method call.
Suppose we know that there are no other types of cows on our farm, so no type will ever derive from Cow. If we make this knowledge explicit to the compiler, we can get a better result. Let’s change the class to be defined like this:
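The changed class definition is missing here; it is simply the same `Cow` with the `sealed` modifier added:

```csharp
public sealed class Cow : Animal
{
    public override string Speak()
    {
        return "Moo";
    }
}
```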
The sealed keyword tells the compiler that no one can derive from Cow (sealed could also be used directly on the Speak method). Now IL2CPP will have the confidence to make a direct method call:
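The corresponding generated listing is also lost. With the class sealed, the call becomes a direct call to the generated free function for `Cow.Speak` (again a sketch — the mangled method name is illustrative):

```cpp
// Direct call: no vtable lookup needed, since nothing can derive from Cow.
Cow_t* L_8 = V_3;
String_t* L_9 = Cow_Speak_m123456789(L_8, /*hidden argument*/NULL);
```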
The call to Speak here will not be unnecessarily slow, since we’ve been very explicit with the compiler and allowed it to optimize with confidence.
This kind of optimization won’t make your game incredibly faster, but it is a good practice to express any assumptions you have about the code in the code, both for future human readers of that code and for compilers. If you are compiling with IL2CPP, I encourage you to peruse the generated C++ code in your project and see what else you might find!
Next time we’ll discuss why virtual method calls are expensive, and what we are doing to make them faster.
Comments (60)
Spixy · September 18, 2016 at 3:30 pm
Hi, does this optimization work if I call
this.Speak() inside of the Cow class?
Does this optimization work in Mono as well?
Josh Peterson · September 19, 2016 at 2:38 pm
Yes, this optimization is independent of where the method is called from, it would work in any case. Mono also does this optimization.
Greg Nagel · September 21, 2016 at 6:03 am
A related more complicated example: Does boxing get optimized out with generics in both mono and IL2CPP?
T Convert<T, U>(U other)
{
    return (T)(object)other;
}
Josh Peterson · September 21, 2016 at 1:58 pm
I’m not sure if the box opcode is optimized out here. I don’t believe that the code generation for IL2CPP or Mono will do this, but the C# compiler might. Have a look at the IL code in the assembly generated by the C# compiler, and that should make things clearer.
Gru · September 11, 2016 at 7:53 pm
Thanks for letting us know about the advancement in this area and answering all the questions Josh!
One thing that is a bit unclear to me… Here: it’s mentioned .NET 3.5, but in the profiles there are only 2.0 and 2.0 Subset. Is it that the compiler supports 3.5 and for Unity only 2.0 is enabled?
Josh Peterson · September 19, 2016 at 2:36 pm
This .NET profile numbering can be a bit confusing. Although in Unity the profiles are named 2.0 and 2.0 Subset, they are roughly equivalent to .NET 3.5 (there are a few minor differences, probably not important enough to mention). This is the case both with and without the new C# compiler. The C# compiler upgrade does not change the .NET profile version or the C# language version Unity supports (those improvements will be coming later).
The difference in names is there for historical reasons, and we’ll use the proper name for .NET 4.6 (and later) when that is released.
Maxim Kamalov · September 10, 2016 at 2:58 am
Finally, sealed is used for optimization! As far as I know, the Microsoft guys had some low-priority plans to use it for optimization in their JIT but never got down to it.
nikolayku · September 9, 2016 at 9:44 pm
Nice article but some questions to you:
Assume that we have a class derived from Cow
public class FlyingCow : Cow {
    public override string Speak() {
        return "Moooooooo";
    }
}
and class hierarchy is next:
Animal (base class) -> Cow -> FlyingCow
in this case we need to use the “sealed” keyword ONLY in the “top” class (in my example FlyingCow), and this keyword will not work in the Cow class (IL2CPP will generate C++ code with a template). Right?
Le_Nain · September 10, 2016 at 1:44 pm
If I’m not mistaken, there are 2 possible cases, depending on the specifications of your Flying cow: does it make the same sound as the Cow or not?
1) If they make different sounds (“Moo” vs “Moooooooo”), it’s the case you state. Flying cow derives from Cow, so obviously the Cow class can’t be sealed. You can only seal the Flying cow class, so just as you stated this class will be the only one benefiting from the devirtualization.
2) Now, if they make the same sound (“Moo”) but the difference lies somewhere else in the class, what you can do is: seal the Flying cow class just like before so it still benefits from the entire devirtualization; but also seal the method Speak() on the Cow class (and of course don’t override it in the Flying cow class, because it’s the same) and then it should still be devirtualized.
Josh Peterson · September 19, 2016 at 2:33 pm
Le_Nain answered it better than I could, thanks!
Valdeco · September 9, 2016 at 4:52 am
C++ also offers virtual and override features. Why implement a “virtual function invoker” instead of simply translating C# to C++ classes? As I understand it, in pure C++ the difference then would be just a matter of 4 lines of assembly code against 1. Is that also the case explained in the article?
Thanks! Best regards :)
Josh Peterson · September 9, 2016 at 1:04 pm
This is an interesting suggestion, and is something we have considered. We don’t use C++ virtual functions for two reasons. First, IL2CPP generates all function as C++ free functions. Doing so simplifies our code generation logic and allows us to do things like generic sharing to optimize for better binary size. Second, the requirements of a managed runtime with respect to virtual functions are a bit more strict than they are for C++. For example, IL2CPP needs to throw an exception when a virtual method call is made for a method that was not generated at compile time. See this post for more details:
Obula · August 5, 2016 at 3:07 am
Hi Josh, thanks for post.
About your example, how about using interface to instead of abstract class/method?
Something like:
IAnimal animal = new Cow();
animal.Speak()?
Is that the same problem for generating c++ code?
Josh Peterson · August 5, 2016 at 1:06 pm
I’ve not tried this myself, but IL2CPP should be able to devirtualize the method call in this case as well.
Torresmo · August 3, 2016 at 7:54 am
Hi Josh,
Do you have any information on what happens to unused code? I have in my project tons of debug and test code that will never be called in production, but when I check the assemblies (inside the Android APK), all that code is still there.
I wonder if the IL2CPP or the platform compiler removes unused code. I expect that less code would make the game launch a bit faster and the package smaller as well.
Cheers.
Josh Peterson · August 3, 2016 at 1:14 pm
Yes, the IL2CPP build toolchain includes a managed code stripper. So any managed code in IL assemblies that is not used will be removed. This applies only to assemblies, though. Script code in your project is not stripped. If you have script code in your project which is not used, you can manually remove it, or use #if guards to conditionally include it in a build.
Also, the native code linker will usually remove unused code, so even if the managed code remains, the generated C++ code which is not used will likely not be present in the final binary.
David · July 29, 2016 at 4:27 am
Apologies for the double post. I thought that something went wrong when I submitted the first comment.
Josh Peterson · July 29, 2016 at 1:23 pm
No worries! C++ scripting is not on our roadmap right now. But as I mentioned, we’ve had a number of discussions about it in the past.
David · July 29, 2016 at 4:24 am
C++ scripting would be great.
David · July 29, 2016 at 4:23 am
C++ scripting would be awesome!
TaylanK · July 27, 2016 at 10:47 pm
So this optimization only saves you time if you are calling the method from the inherited class, correct? It does not benefit when you call Animal.Speak()?
Josh Peterson · July 28, 2016 at 2:59 am
Correct, this does not apply to a call via the Animal class. That must always be a virtual method call.
Jason Hughes · July 27, 2016 at 4:52 pm
Any chance we will see IL2CPP on PC standalone builds anytime soon? I know it’s in the Windows Store builds already… what’s the hold up? Our Garbage Collection is causing hitches and I’ve heard IL2CPP will help that a lot. Love the blog post, by the way.
Josh Peterson · July 27, 2016 at 5:13 pm
I’m glad you liked the post, I enjoyed writing and the participating in this discussion!
I don’t think that IL2CPP will be available for PC standalone builds in Unity 5.5. It may be available later, but I don’t have any firm information about that now. The hold-up is due to a few issues:
– We need a good story for installing and locating platform-specific C++ compilers on the machine running the Unity editor. We don’t have that yet.
– We need to add some additional support for parts of IL2CPP that are not implemented because they don’t occur on mobile and console platforms (where IL2CPP ships now) too often, e.g. external process handling
– We need to prepare to support the additional platforms from the QA and documentation end.
All of these are hurdles that we can overcome, as with many things in software development, it is all a matter of resources and priorities. We want to get to the point where this can be released though.
With that said, I’m not sure that IL2CPP will help the garbage collection hitches you are seeing. We’ve not had reports of IL2CPP significantly improving GC pauses on other platforms. IL2CPP does use a newer version of the Boehm GC than the Mono that ships with Unity does, but it is still a conservative GC.
Actually, our team is currently focused almost entirely on the Mono runtime upgrade, which is the next big step in getting an incremental GC for Unity. Note that the Mono runtime upgrade may not come with a new GC initially (we have other, non-Mono work to do to make a new GC work), but it is a prerequisite.
So although we’re not working on IL2CPP for standalone now, we are working what will hopefully solve the GC hitch problem you see!
Robert Cummings · July 28, 2016 at 2:10 am
Thanks for the article(s)! Very interested in performance gains above all other things in Unity as more perf means I can do more things. With that in mind I’d love to see an article on the GC improvements you’ve mentioned! the GC has been a thorn for a decade or more :)
Josh Peterson · July 28, 2016 at 3:01 am
Unfortunately we don’t have any GC changes to discuss now. Sorry if I was misleading! However, GC improvements are coming, after the Mono runtime upgrade, so stay tuned.
Tim · July 27, 2016 at 3:07 pm
Does this have any effect on classes directly inheriting from MonoBehaviour, like Farm in the example?
I know the usual Unity methods like Start, Awake and Update aren’t overrides of an abstract method, but is there something else thats ‘abstract’ in MonoBehaviour?
Josh Peterson · July 27, 2016 at 3:36 pm
This should not have any impact on classes deriving directly from MonoBehaviour. The MonoBehaviour does not have any virtual methods, so Unity does not call any of the methods you might implement on a class like Farm via the virtual method call process.
Jackie Engberg Christensen · July 27, 2016 at 1:57 pm
Must admit this is a pretty interesting read, as I’m very interested in IL2CPP, although I’m not sure how it entirely works behind the covers. Additionally, I’ve been wondering for some time why Unity doesn’t simply allow you to write C++ scripts which can be used natively like the C#/UnityScript/Boo scripts, and wouldn’t this overall be a huge performance gain for skilled C++ programmers?
Josh Peterson · July 27, 2016 at 2:17 pm
You can find out more about how IL2CPP works internally in this blog post series:
C++ scripting has been an active topic of conversation both inside and outside of Unity for some time. Opinions about its ease of use and performance impact in Unity vary, so I can’t really speak to them. I can give some specific reasons why C++ scripting is not supported in Unity now though.
– The Unity API is pretty large and exposed for managed code now. Supporting and documenting this API in C++ is a really large task.
– The Unity editor reloads script code on the fly when you go in and out of play mode. This is pretty difficult to reliably do with C++ code.
I don’t think that Unity will support C++ scripting any time soon, but this is just my opinion, things are always subject to change. If you need C++ for performance in some specific case, it is possible to use native code plugins with Unity, although they don’t have access to the Unity API.
Jackie Engberg Christensen · July 27, 2016 at 5:52 pm
Pretty nice post and further insight into IL2CPP.
It makes fine sense why C++ isn’t supported at this point, especially given the things you’ve mentioned, but I get that new features in the engine and tools are much more important and preferable :) – Cheers!
Yuri · July 27, 2016 at 12:00 pm
Is the best way using
foreach (var animal in animals)
or maybe
for (var i = 0; i < animals.Count; i++)
?
Peter · July 27, 2016 at 2:00 pm
I found these the fastest versions of “for”, since list.Count is evaluated only once; rather than every loop iteration.
for (int i = list.Count - 1; i >= 0; --i) …
for (int i = 0, iend = list.Count; i < iend; ++i) …
Where the decrementing version could run faster on platforms that support the "subs" instruction (eg ARM devices), which means there could be one "cmp" operation less per loop-iteration.
foreach is often considered the slowest method to iterate over a list, because it allocates an enumerator and it calls various methods each iteration (Next and Current).
However, you will never know, if you don't profile your code.
Josh Peterson · July 27, 2016 at 2:09 pm
This is really the best advice for any performance question:
> However, you will never know, if you don’t profile your code.
But of course the answer depends on what you mean by “best”. I find the foreach loop to be more readable than the for loop, so I think it is the best from a source code perspective. Looking at the IL code generated in each case (I encourage you to try this using a tool like ILSpy), it seems that the for loop generates smaller IL code, as Peter indicated.
For performance issues, never be afraid to question, investigate, and measure. I’m often surprised by what I find.
Michael Holub · July 27, 2016 at 10:07 am
Why can’t you check if a class doesn’t have subclasses and then not require the “sealed” keyword? Linked libraries?
Josh Peterson · July 27, 2016 at 1:58 pm
That is certainly possible, and in this example, looks to be pretty easy. In general for .NET code, we cannot make the assumption that a given virtual call will resolve to a specific method because assemblies can be loaded at runtime, so the compiler must play it safe and make the virtual method call for classes that are not sealed.
With IL2CPP the situation can be a little different, since IL2CPP is an ahead-of-time compiler that does not allow assemblies to be loaded at runtime. It does know all possible code that could derive from Cow at compile time. However, we’ve not implemented any whole-program optimization like this in IL2CPP, so it does not have the intelligence to make this decision (although it theoretically could).
Zin · July 27, 2016 at 4:05 pm
I feel like in the time it took to explain the ‘sealed’ keyword in this blog, it would have been possible to write an auto-sealer pass for IL2CPP. Not that knowing about ‘sealed’ is a bad thing, but it just seems fairly trivial and would be one less thing for people to worry about having to remember to do.
Is it more than just looping over all types to mark them ‘sealed’ by default and then doing a second pass and if the type has a base, clear the sealed flag on the base?
Josh Peterson · July 27, 2016 at 4:18 pm
Well, I think it might take a little longer to write an auto-sealer than it would to read this blog post, at least for me. :)
This does bring up a good point about whole-program optimization though. For ahead-of-time compilation, whole program analysis of the IL code like this is an option. I don’t think it is something that IL2CPP should do, since it is already complex enough as a transpiler for IL code to C++ code. I do think there is room for this sort of optimization pass in a separate tool though.
Zin · July 27, 2016 at 7:00 pm
Sorry, I forgot that this was at the IL level, and not in the actual C# compiler. It *should* be trivial if you were able to do it in the compiler before it generated IL (would be nice if there was already ‘sealed’ assembly support). Though I think it should still be possible in IL2CPP, but it means modifying IL instructions to change a callvirt to a regular call which probably isn’t fun at that level.
One other question — does IL2CPP attempt to generate inline-able C++ functions? Or is it just relying on the C++ compiler’s WPO support to inline functions?
Josh Peterson · July 28, 2016 at 3:17 am
You’re correct, it is possible for IL2CPP to do this. So far, we’ve not added an IL pre-processing pass to IL2CPP, so I would rather see it in a separate tool.
As far as inlining for generated C++ functions is concerned, IL2CPP doesn’t do anything special to try to get generated functions to be inlined. However, the generated C++ code does end up in a few (relatively speaking) large .cpp files. The C++ compiler then has a lot of code to work with in each translation unit, and can usually do a good job of inlining functions.
Johnathan Rossitter · July 27, 2016 at 5:06 am
Great Article, please keep more like this coming.
I know you plan to have other optimizations discussed, but can you point to a resource (like in the documentation) that talks more about what we can do to make our coding practices better for cpp?
Josh Peterson · July 27, 2016 at 1:52 pm
Thanks, we have two more articles ready now – they will be published, one for each of the next two weeks.
We don’t have any documentation about writing C# code to produce better C++ code, because, well, producing good C++ code is our job. :)
In all seriousness, we would like you to write good C# code as you normally would; just follow good C# coding practices. The IL2CPP, Mono, and .NET scripting backends should generate the best machine code possible. This has not always been the case; for example, the C# compiler currently shipping with Unity will generate unnecessary allocations for some foreach loops, so you should prefer for loops instead. We’re fixing this specific issue with a new C# compiler in Unity 5.5 (), so you can go back to writing foreach loops and not worry about their cost.
This is the approach we’re going for in general, good C# code should produce good machine code.
Marc-André Jutras · July 27, 2016 at 3:17 am
One day I would love to know the purpose of:
int32_t L_6 = V_3;
int32_t L_7 = L_6;
Just passing int around? Sounds wasteful.
MrPhil · July 27, 2016 at 4:20 am
I’m a complete amateur in this area but I believe it has something to do with the way SSA (Static Single Assignment) improves code analysis.
Josh Peterson · July 27, 2016 at 1:43 pm
As I’ve thought more about this, I bet that it would be fun to dive deep on this question, from the C# code all the way down to the generated assembly code. I feel another blog post coming…
By way of a brief explanation, I’ll say this. IL is a stack-based virtual machine, and the C++ code generated by IL2CPP represents that stack using local variables in C++ methods. In order to correctly track the IL stack manipulations, IL2CPP will often generate code like this which seems unnecessary.
Indeed, from the C++ source code side, this is wasteful. However, the C++ compiler will optimize these two instructions, so the assembly code that is actually executed will be no different than it would for a single C++ instruction.
Lior Tal · July 27, 2016 at 3:10 am
Nice post, however I am not sure what tangible gains something like this will give. Is the cost of virtual vs. direct method calls in IL2CPP really that different?
Marc-André Jutras · July 27, 2016 at 3:32 am
Yes, the difference is quite notable when in a restrictive environment like iOS, DS or PSP.
A vtable lookup can eat quite a few valuable cycles that would be better used somewhere else. I worked on a PSP game where class hierarchies were destroyed and merged on purpose to reduce the number of lookups in the code paths that were most used per frame.
Josh Peterson · July 27, 2016 at 1:37 pm
As usual with performance measurement, tangible gains really depend on the specific hardware and code involved. In the next post I’ll discuss the cost of virtual method invocation a bit more and look at something we’ve done in IL2CPP to make that cost less.
In general (not specifically with IL2CPP) virtual methods do cost more than direct methods. A direct method call is usually a single assembly instruction that jumps to a hard-coded address. For a virtual method, at least a few more assembly instructions are required, since that address cannot be known at compile time.
But yes, nothing here will drastically improve performance. Most games we see are not CPU-bound. With that said though, every cycle saved for things like method calls, which are not important to the player, is another cycle that can be used for better graphics rendering or improved AI, or other things that do matter to the player. Our focus on the VM team is to make things like this as cheap as possible to give you more time per frame for the stuff that matters to the player.
Alan Mattano, July 26, 2016 at 10:36 pm
Great, thanks! And if possible, could we get a video tutorial introduction to IL2CPP in the learning live training section?
Josh Peterson, July 27, 2016 at 1:27 pm
Alan, What kind of information are you interested in seeing in a video? We have a few other resources about IL2CPP, first in a Unite 2015 presentation about debugging and profiling IL2CPP () and also in a blog post series about IL2CPP internals ().
I’d love to hear suggestions about what else you would like to see though. Thanks!
Alan Mattano, July 27, 2016 at 9:20 pm
Thanks for listening. I need to optimize some CPU-expensive code and make it run faster, but I'm a beginner. I wish to run it in C++. I have never used IL2CPP, so I don't really know what I'm talking about. In my case I am a PC standalone user, so I will wait for IL2CPP patiently.
Since IL2CPP is not yet available for PC standalone builds, a new C++ scripting section on unity3d.com/learn/tutorials isn't necessary yet. But in unity3d.com/learn/tutorials/topics/scripting there is no C++ or IL2CPP video at all. A beginner-friendly "Getting Started with IL2CPP" introduction video in the "Live Sessions on Scripting" series, made by Adam Buckner, Matthew Schell or Mike Geig, could be useful for those who don't know how to get started. Alternatively, the more advanced "Unite 2015 – IL2CPP: Profiling and Debugging" video could be included in the scripting topic, but that video is a bit too advanced for a beginner like me.
Josh Peterson, July 28, 2016 at 3:23 am
Alan, I’m glad you are programming and using Unity. Welcome to the community (hint: it’s a great community of people!)
We don’t have any information on the Learn site about C++ scripting because it is not possible in Unity. And we don’t have much about IL2CPP, especially at the beginner level, because you should not have to care about it! If I’m doing my job right, the IL2CPP scripting backend should “just work” for all of your code in Unity.
Starting off, I would focus on writing your code in C# with Unity. Once you have it working properly, then use a profiler (like the one built into Unity) to see where the performance hot spots are. Only then should you consider moving anything to C++ code.
Koblavi, July 26, 2016 at 9:02 pm
Pretty neat optimisation. And kinda obvious too, but appreciated nonetheless ;-) . At this point I’m wondering if you’re beginning to see the value in open sourcing IL2CPP. The community can figure out many more optimisations at many times the speed at which you’re currently doing it. Do the cons really outweigh the pros of open sourcing?
Josh Peterson, July 26, 2016 at 9:19 pm
Yes, this looks pretty obvious to the human reader. The IL2CPP code conversion utility has a bit more to consider though, but could probably be changed to do this without the sealed keyword, at least in theory.
Internally, we’ve had many discussions about open sourcing IL2CPP for a while now. We do see the value – that has never been the problem. There is a cost on our side to open sourcing it though. At the moment, that cost is large enough to cut into other priorities (like the .NET Upgrade, for example), so we’re focusing elsewhere now.
Of course the fastest way to get access to the IL2CPP source code and to improve it is to come work with our team. :)
Greg, July 27, 2016 at 2:53 am
What is this IL2CPP code conversion utility? Will it make our games and code easier to hack?
How much harder in general is IL2CPP to hack than Mono? Will it make things more difficult to hack? Are there any tips and tricks on how to make IL2CPP code more robust against hackers? I'm sure Unity is aware of the massive amount of piracy taking place in our Unity games, so I'm wondering if Unity is doing anything to help mitigate this?
I know it’s not possible to prevent hacking and piracy, I just want to know what is being done to make it more difficult.
Marc-André Jutras, July 27, 2016 at 3:26 am
As it’s name imply, it convert IL (Intermediate Language, what our C# compile into) to C++ (or C Plus Plus, CPP). It then goes the normal compilation pipeline C++ follows, which is at the end to produce native machine code. It makes your code far much harder to hack. It is also far faster to run than IL code.
IL can just be easily read by any IL Inspector utility. Not so much with compiled code.
Josh Peterson, July 27, 2016 at 1:23 pm
Marc-André provided a great answer here. If you want more details about how IL2CPP works internally, check out this blog post series:
Marc-André Jutras, July 27, 2016 at 3:20 am
I would work at Unity just for the purpose of making the inspector able to copy/paste values… and handling sub-component/composition pattern. :p
Koblavi, July 28, 2016 at 6:46 am
Points taken. Good to know you’re focusing on the .net profile upgrade. Is there a chance we’ll get an ETA for this? Will it be in the 5.5 timeframe? Are we to expect significant performance gains even with the initial release with the old GC?
As for working on IL2CPP, I guess I’ll just wait for the inevitable open sourcing of the transpiler even if it happens in 2 years ;-)
Josh Peterson, July 28, 2016 at 1:19 pm
We will let you know an ETA for the .NET profile upgrade as soon as we know when it will be ready. I don’t think it will be in the 5.5 timeframe though, as our internal cut-off for new features in 5.5 is quickly approaching.
So far we’ve not seen performance gains from the new Mono runtime on the platforms where we have it working. However, we’ve only tested it on some internal projects so far, and not on anything very big yet. I don’t anticipate performance gains before we get to the better GC though. | https://blogs.unity3d.com/2016/07/26/il2cpp-optimizations-devirtualization/ | CC-MAIN-2019-13 | refinedweb | 5,527 | 69.62 |
The term Rich Domain Model is used to describe a domain model which really shows you how you should be using and manipulating the model, rather than letting you do anything with it. It is the opposite of an Anaemic Domain Model, which provides a very low abstraction over the data storage (generally), but with little to no enforcing of rules.
The Anaemic Domain Model
To take the standard model of a person who has addresses and phone numbers etc. seems a little contrived, so let's run through an example using timesheets (bear in mind I don't know what really goes into a timesheet system; this just seems reasonable). The current model looks something like the following:
public class TimeSheet : DbEntity
{
    public DateTime WeekDate { get; set; }
    public TimeSheetStates State { get; set; }
    public TimeSheetLineCollection Lines { get; set; }
    //...
}

public class TimeSheetLineCollection : DbEntityCollection<TimeSheetLine> { }

public class TimeSheetLine : DbEntity
{
    public DateTime Day { get; set; }
    public LineTypes LineType { get; set; }
    public decimal HourlyRate { get; set; }
    public decimal Hours { get; set; }
}

public enum TimeSheetStates { New, Saved, Submitted, Approved, Rejected }

public enum LineTypes { Normal, Holiday, Sick }
The first problem with this model is that the domain entities inherit directly from DbEntity, which couples our logic directly to our data access; among other things, this is a violation of SRP. Putting this aside for the time being, the next issue is that the domain model lets you do anything with the objects and collections.
The model implies that there are rules governing its usage somewhere, but gives no hint as to what these rules are, or where they are located. Rules such as 'only allow hours to be entered in increments of half an hour' and 'no more than 5 lines in a given week' really should be in the domain model itself, as a Rich Domain Model should not allow itself to get into an invalid state.
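Those two example rules are simple to express as executable checks. Here is a minimal sketch of that kind of validation (written in Python purely for brevity; the article's model is C#, and the rule values are just the examples quoted above):

```python
# Sketch of the validation rules quoted above: hours in half-hour
# increments, and at most 5 lines per week. Python is used here only
# for brevity; the article's model is C#.

class TimeSheetRuleError(Exception):
    pass

MAX_LINES_PER_WEEK = 5
HOURS_INCREMENT = 0.5

def validate_add(existing_lines, hours):
    # Raise a descriptive error if the new line would break a rule,
    # mirroring what a centralized rules object would do.
    if hours <= 0 or (hours / HOURS_INCREMENT) % 1 != 0:
        raise TimeSheetRuleError(
            "hours must be entered in increments of half an hour")
    if len(existing_lines) >= MAX_LINES_PER_WEEK:
        raise TimeSheetRuleError(
            "no more than %d lines in a given week" % MAX_LINES_PER_WEEK)

week_lines = []
validate_add(week_lines, 7.5)  # 7.5 is a half-hour multiple, so this passes
week_lines.append(7.5)
```

Keeping checks like these in one place means the entity can stay valid because every mutation is forced to go through them.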
The model also leaks what kind of data store it is built on; after all, if you had an Event Sourcing pattern for storage, a Delete operation on the TimeSheetLineCollection would not make a lot of sense.
The Rich Domain Model
A better version of this model is to make all the behaviour explicit, rather than just exposing the collections for external modification:
public class TimeSheet
{
    public DateTime WeekDate { get; private set; }
    public TimeSheetStates State { get; private set; }
    public IEnumerable<TimeSheetLine> Lines { get { return _lines; } }

    private readonly List<TimeSheetLine> _lines;
    private readonly TimeSheetRules _rules;

    public TimeSheet(TimeSheetRules rules, DateTime weekDate)
    {
        _lines = new List<TimeSheetLine>();
        _rules = rules;
        WeekDate = weekDate;
    }

    public void AddLine(DayOfWeek day, LineTypes lineType, decimal hours, decimal hourlyRate)
    {
        var line = new TimeSheetLine
        {
            Day = WeekDate.AddDays((int)day),
            LineType = lineType,
            Hours = hours,
            HourlyRate = hourlyRate
        };

        _rules.ValidateAdd(Lines, line); // throws a descriptive error message if you can't do the add
        _lines.Add(line);
    }
}
The Rich model does a number of interesting things. The first is that all the properties of the TimeSheet class now have private setters. This allows us to enforce rules on when and how they get set. For example, the WeekDate property value gets passed in via the constructor, as our domain says that for a week to be valid it must have a week date.
The major improvement is in adding lines to the TimeSheet. In the Anaemic version of the model, you could have just created a TimeSheetLine object and set the Day property to an arbitrary date, rather than one in the given week's range. The Rich model forces the caller to pass a DayOfWeek to the function, which ensures that a valid DateTime will get stored for the line. The AddLine method also calls _rules.ValidateAdd(), which gives us a central place for putting rules on line actions.
Now that the user has been able to fill out all the lines in their timesheet, the next likely action they want to perform is to submit it for authorization. We can do this by adding the following method:
public void SubmitForApproval(User approver)
{
    _rules.ValidateTimeSheetIsComplete(this);
    approver.AddWaitingTimeSheet(this);
    State = TimeSheetStates.Submitted;
}
Note this method only validates whether the timesheet is complete enough to be approved; validation of whether the approver can actually approve this timesheet is held within the approver.AddWaitingTimeSheet method.
The next thing to consider is when the approver rejects the timesheet because the user filled out the wrong weekdate. Rather than just exposing Weekdate to be publicly setable, we can capture the intent of the adjustment with a set of methods:
public void UserEnteredIncorrectWeek(DateTime newDate)
{
    var delta = WeekDate - newDate;
    WeekDate = newDate;
    _lines.ForEach(line => line.Day = line.Day.AddDays(-delta.TotalDays));
}
Note how the method is named to capture the reason for the change. Although we are not actively storing the reason, if we were using an EventStream for the backing store, or maintaining a separate log of changes, we would now have a record of why the change was made. This helps guide UI elements: rather than just having an "Edit Week Date" button, there could be a UI element which says "Change Incorrect Week" or similar.
The function also has some logic baked into it: each of the TimeSheetLines needs its Day property recalculating.
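That shift arithmetic is easy to sanity-check in isolation. A small sketch of it (in Python just to keep the example short): every line's day moves by the same delta as the week start date.

```python
# Sketch of the week-shift arithmetic used by UserEnteredIncorrectWeek:
# every line's day moves by the same delta as the week start date.
from datetime import date

def shift_week(week_date, line_days, new_date):
    delta = week_date - new_date               # how far the old week was off
    shifted = [d - delta for d in line_days]   # move each line by the same amount
    return new_date, shifted

week, lines = shift_week(
    week_date=date(2014, 5, 5),                          # week entered by mistake
    line_days=[date(2014, 5, 5), date(2014, 5, 7)],
    new_date=date(2014, 4, 28),                          # the week the user meant
)
print(week, lines)
```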
Hopefully this helps demonstrate why Rich Domain Models are better solutions to complex domain problems than Anaemic Domain Models are.
For a really good video on this subject, check out Jimmy Bogard's Crafting Wicked Domain Models talk. | http://stormbase.net/2014/05/04/rich-domain-modeling/ | CC-MAIN-2017-17 | refinedweb | 900 | 53.24 |
2));
}
}
}
Agree, this is a nice approach to this problem. This has been taken a bit further by the NullableTypes project:
Of course, they wouldn’t need this hack if value types were nullable. 🙂
If you are going to all the effort of creating a new class, why not use an extra bool instead of DateTime.MinValue?
How will this integrate with third party UI controls?
Steven,
You could use an extra bool instead of a specific value, if you were willing for your types to be bigger.
That may be a reasonable tradeoff in some situations.
Thomas, the answer to your question is probably "not too well".
I’m not really fond of the name, though, as the struct isn’t necessarily empty; it’s empty-able, although a better name escapes me
A library I wrote a couple of years ago allows each component of a Date to be undeclared – e.g. 5th May, unknown year – or 5th May, year 2001 or 2002 or 2003
The only way to handle this was to create a wrapper around a date.
BTW – any chance of extending the date class to support BC dates rather than AD (CE) dates? And any chance of WinForms supporting the whole range of DateTime dates in its calendar control, rather than stopping at the OLE limit?
What about System.Data.SqlTypes.SqlDateTime ?
Isn’t it what it’s all about ?
In our company we use an approach that was published in the Visual Basic Programmer’s Journal in 2000. The article is available online via MSDN at
The idea is called the "Alias Null concept", where a constant value acts as a surrogate for a database NULL value.
We have implemented a utility class NullValue that offers operations like bool IsNullAlias(object val) to check, if a current value is a Null Alias. Our windows forms data binding takes care of this, so database NULL values will be displayed correctly.
Our persistence framework converts database NULL values into Alias Nulls when reading data in and writes database NULL values into the database when properties of persistent objects have an Alias Null value. Works pretty well for us.
I did it like Eric but took Sql Server’s MinDate instead of DateTime.MinDate.
SqlTypes are not serializable (AFAIK). They’re also bound to SQL Server (duh!), which is no use if you need to use another DB.
I have to admit, I don’t like this issue either. This is probably the only thing that irks me when it comes to dealing with value types.
I can’t admit to having a solution (having a new type that encapsulates the value type doesn’t seem right, unless it is provided in the System namespace).
Also, implementing INullable on value types doesnt seem like it would work either.
For the solution you provided, you should have implemented INullable for completeness.
Also, you could have recommended that they use the SqlDateTime structure, which provides null semantics.
I have done something similar in my own business objects. The next step for your own nullable type is to implement IComparable, so that you can sort on this column, and IConvertible, which will allow you to pass this type directly to command objects and the like.
Of course for databinding for a property named BornOn of class Budweiser, binding will look for a property called BornOnIsNull to determine if its null or not, so going forward I’ve come to the conclusion that I should not expose my custom nullable types outside of my business objects, just use them internally.
Paul
Nicholas: IConvertible lets a value type tell a command object that it’s DBNull.Value.
Dietmar,
The article is talking about VB 5/6 – in which this isn’t really a major issue, since you would just use a Variant (which can handle Null).
Seeya
Makes you yearn for a generic called Nullable<T>. 😉
So why is DateTime sealed?
This would do the trick if it weren’t:
public class EmptyDateTime : DateTime
{
private EmptyDateTime() {}
public static EmptyDateTime Empty;
}
I have not worked with value types, so it could be that the singleton pattern doesn’t work here, or that the Equals method must be overridden.
I like the solution, but the name ‘EmptyDateTime’ is quite ugly and misleading.
It should’ve been NullableDateTime or something like that.
What about adding NAD to DateTime?
Or you could just hold your breath for another 8 mo. or so and use .NET 2.0 with their nullable types (or jump into the beta):
DateTime? dt=new DateTime();
and then
if (dt.HasValue)
{
}
import qrcode -- QRcodes in Pythonista
Hi !
Do you think it would be possible to include the zbar library in a future release of Pythonista?
Zbar already exists on iOS and there are some wrappers in Python.
We could script a QRCode decoder for example with camera input directly in Pythonista.
Or if somebody knows a pure Python QRCode decoder, I would be very very very happy but unfortunately I don't think it exists...
Got one dollar to spare? This might be easier than you think.
Say a bit more about your use case...
Especially, do you want to scan barcodes that you create or do you want to scan preexisting barcodes? If you are creating your own barcodes, you should be able to get ZBar to automatically launch your own Pythonista scripts for free using the Pythonista URL Scheme. See:
You should also push the ZBar community to adopt a zbar:// style for launching their app so that you can launch ZBar directly from Safari or your Pythonista scripts. See: , , -- This last one makes it seem like other barcode apps for iOS already have handleopenurl capabilities that you might want to play with.
Scanner Go might be a great place to start your search as it is built on the ZBar engine and already supports the bi-directional x-callback-URL. It will cost you a buck in the iTunes Store but that is cheap.
import webbrowser; webbrowser.open('ilu://')
will cause Pythonista to launch Scanner Go. See: for getting Scanner Go to automatically pass the resulting barcode data back into your Pythonista script.
helloBarcode.py -- ZBar delivers barcode data to Pythonista.
An even better (round trip) solution using the free <b>pic2shop</b> app...
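A rough sketch of how such a round trip can be wired up from the Pythonista side: build the scanner's launch URL with a callback that points back at Pythonista. Note that the pic2shop:// scheme and the callback parameter name used here are assumptions for illustration; check the scanner app's documentation for its exact URL format.

```python
# Sketch of launching an external scanner app with a callback-style URL.
# NOTE: the "pic2shop://scan?callback=..." shape below is an assumed
# format for illustration, not the app's documented API.
from urllib.parse import quote

def build_scan_url(return_scheme="pythonista://", param="ean"):
    # Ask the scanner to reopen Pythonista, appending the barcode it
    # read as a query parameter on the callback URL.
    callback = "%s?%s=EAN" % (return_scheme, param)
    return "pic2shop://scan?callback=%s" % quote(callback, safe="")

launch_url = build_scan_url()
print(launch_url)
# In Pythonista you would then hand this URL to the system, e.g.:
#   import webbrowser; webbrowser.open(launch_url)
```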
Hello guys and thank you for your answers !
I am a little bit late :-)
I need to give a mobile phone to non-technical people. They will use it to perform a PC hardware replacement. They will scan the defective PC and the new one. The app will then send the information to a REST API with both serial numbers to swap the hardware. No more mistakes...
I need a really simple app for non-technical users. Instead of coding this app in Objective-C, I would rather play with Pythonista and build it there.
Using callbacks will be too touchy for the target users but it is a good proof of concept before coding the full iOS App. I will try this !!!
Thank you very much.
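For the swap workflow described above, the scripting side mostly amounts to packaging the two scanned serial numbers and posting them to the REST API. A minimal sketch (the endpoint URL and JSON field names are invented placeholders, not a real API):

```python
# Sketch of the hardware-swap call: two scanned serial numbers are
# packaged as JSON for a REST API. The endpoint and field names are
# placeholders invented for illustration.
import json

def build_swap_request(defect_serial, replacement_serial):
    payload = {
        "defect_serial": defect_serial,
        "replacement_serial": replacement_serial,
    }
    body = json.dumps(payload).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    return body, headers

body, headers = build_swap_request("SN-OLD-001", "SN-NEW-002")
# To actually send it from the script (not executed here):
#   import urllib.request
#   req = urllib.request.Request("https://example.invalid/api/swap",
#                                data=body, headers=headers, method="POST")
#   urllib.request.urlopen(req)
```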
Check out the pic2shop solution above... To me it looks like you could build a Pythonista app around it (see Hvmhvm's code in the post above) that does everything that you are looking for. It is simple to use even for non-techies and it is a free download from the AppStore.
I hope built-in integration of barcode and QR code scanning (with zbar) will come to Pythonista soon. I am working on a bitcoin wallet app, which requires QR scanning. It would be nice indeed if we could just import zbar and do everything in Python.
It seems that iOS 7 even has built-in support for QR code scanning according to this article:
The roundtrip with pic2shop works today. Why wait??
The pic2shop trick works but if you want to create a standalone app, that's not an option.
Can I use pic2shop for QR code scanning, or is it just for barcodes?
I realize that implementing new features in Pythonista can be far from trivial. However, since iOS7-onward supports barcode/UPC/QR code scanning, I'd like to chime in with a request as well -- if the AVCaptureMetaDataOutput option could be supported as part of an AVCaptureSession, that would (presumably) give Pythonista the ability to natively scan QRcodes and such. I don't know the level of effort required to bridge arbitrary iOS API calls, but native barcode/QR code scanning would be extremely useful.
[I've also used the method of having a scanning app (CNS Barcode) communicate with Pythonista via launch URL schemes, but it is a bit cumbersome and creates an additional dependency when trying to deploy to other devices]
Thanks.
In the current Pythonista you can just import qrcode and generate your own QR codes, but... how cool would it be to generate color animated GIF QR codes?!
I'm getting started with Walmart's react/redux/react-router/isomorphic boilerplate called electrode and I'm having trouble adding multiple routes. When I add the 2nd route it seems to do nothing and linking and pushing to the other routes does not change the page.
Here's what the single route in the boilerplate looked like
// routes.jsx
import React from "react";
import {Route} from "react-router";
import Home from "./components/home";
export const routes = (
<Route path="/" component={Home}/>
);
Here's what it looks like after I added the second route:
import React from "react";
import {Route, IndexRoute} from "react-router";
import Home from "./components/home";
import Foo from "./components/foo";
export const routes = (
<Route path="/" component={Home}>
<Route path="/foo" component={Foo}/>
</Route>
);
And here's the client-side entry point:
//
// This is the client side entry point for the React app.
//
import React from "react";
import {render} from "react-dom";
import {routes} from "./routes";
import {Router} from "react-router";
import {createStore} from "redux";
import {Provider} from "react-redux";
import "./styles/base.css";
import rootReducer from "./reducers";
//
// Add the client app start up code to a function as window.webappStart.
// The webapp's full HTML will check and call it once the js-content
// DOM is created.
//
window.webappStart = () => {
const initialState = window.__PRELOADED_STATE__;
const store = createStore(rootReducer, initialState);
render(
<Provider store={store}>
<Router>{routes}</Router>
</Provider>,
document.querySelector(".js-content")
);
};
A few things...
You can avoid the "side by side jsx" warning by wrapping your routes in an empty route or returning an array.
// return nested routes
return (
    <Route path="/">
        <Route path="foo" component={Foo}/>
        <Route path="bar" component={Bar}/>
    </Route>
)

// return array, must use keys
return [
    <Route key="foo" path="/foo" component={Foo}/>,
    <Route key="bar" path="/bar" component={Bar}/>
]
If you want to nest routes, you need to give way to the child component by adding {this.props.children} to the parent component's render.
If you want truly separate routes that are not nested, the second route shouldn't be a child of the first route. I don't think adding an IndexRoute would provide any benefit unless you want some UI that is top level for all routes (like rendering a header, sidebar, etc).
How to Build a Robot at Home
Five Parts: Assembling the Robot | Wiring the Robot | Wiring the Power | Installing the Arduino Software | Programming the Robot | Community Q&A
Do you want to learn how to build your own robot? There are many different types of robots that you can make yourself. Most people want to see a robot perform the simple task of moving from point A to point B. You can make a robot completely from analogue components or start from a kit! Building your own robot is a great way to teach yourself both electronics and computer programming.
Steps
Part 1
Assembling the Robot
- 1. Gather your components. In order to build a basic robot, you'll need several simple components. You can find most, if not all, of these components at your local electronics hobby shop, or at a number of online retailers. Some basic kits may include all of these components as well. This robot does not require any soldering:
- Arduino Uno (or other microcontroller)
- 2 continuous rotation servos
- 2 wheels that fit the servos
- 1 caster roller
- 1 small solderless breadboard (look for a breadboard that has two positive and negative lines on each side)
- 1 distance sensor (with four-pin connector cable)
- 1 mini push button switch with 1 10kΩ resistor.
- 1 set of breakaway headers
- 1 6 x AA battery holder with 9V DC power jack
- 1 pack of jumper wires or 22-gauge hook-up wire
- Strong double-sided tape or hot glue
- 2. Flip the battery pack over so that the flat back is facing up. You'll be building the robot's body using the battery pack as a base.
- 3. Align the two servos on the end of the battery pack. This should be the end that the battery pack's wire is coming out of. The servos should be touching bottoms, and the rotating mechanisms of each should be facing out the sides of the battery pack. It's important that the servos are properly aligned so that the wheels go straight. The wires for the servos should be coming off the back of the battery pack.
- 4. Affix the servos with your tape or glue. Make sure that they are solidly attached to the battery pack. The backs of the servos should be aligned flush with the back of the battery pack.
- The servos should now be taking up the back half of the battery pack.
- 5. Affix the breadboard perpendicularly on the open space on the battery pack. It should hang over the front of the battery pack just a little bit, and will extend beyond each side. Make sure that it is securely fastened before proceeding. The "A" row should be closest to the servos.
- 6. Attach the Arduino microcontroller to the tops of the servos. If you attached the servos properly, there should be a flat space made by them touching. Stick the Arduino board onto this flat space so that the Arduino's USB and Power connectors are facing the back (away from the breadboard). The front of the Arduino should be just barely overlapping the breadboard.
- 7. Put the wheels on the servos. Firmly press the wheels onto the rotating mechanism of the servo. This may require a significant amount of force, as the wheels are designed to fit as tightly as possible for the best traction.
- 8. Attach the caster to the bottom of the breadboard. If you flip the chassis over, you should see a bit of breadboard extending past the battery pack. Attach the caster to this extended piece, using risers if necessary. The caster acts as the front wheel, allowing the robot to easily turn in any direction.[1]
- If you bought a kit, the caster may have come with a few risers that you can use to ensure the caster reaches the ground.
Part 2
Wiring the Robot
- 1. Break off two 3-pin headers. You'll be using these to connect the servos to the breadboard. Push the pins down through the header so that the pins come out an equal distance on both sides.
- 2. Insert the two headers into pins 1-3 and 6-8 on row E of the breadboard. Make sure that they are firmly inserted.
- 3. Plug each servo's cable onto one of the headers.
- 4. Connect red jumper wires from pins C2 and C7 to red (positive) rail pins. Make sure you use the red rail on the back of the breadboard (closer to the rest of the chassis).
- 5. Connect black jumper wires from pins B1 and B6 to blue (ground) rail pins. Make sure that you use the blue rail on the back of the breadboard. Do not plug them into the red rail pins.
- 6. Connect white jumper wires from pins 12 and 13 on the Arduino to A3 and A8. This will allow the Arduino to control the servos and turn the wheels.
- 7. Attach the sensor to the front of the breadboard. It does not get plugged into the outer power rails on the breadboard, but instead into the first row of lettered pins (J). Make sure you place it in the exact center, with an equal number of pins available on each side.
- 8. Connect a black jumper wire from pin I14 to the first available blue rail pin on the left of the sensor. This will ground the sensor.
- 9. Connect a red jumper wire from pin I17 to the first available red rail pin to the right of the sensor. This will power the sensor.
- 10. Connect white jumper wires from pin I15 to pin 9 on the Arduino, and from I16 to pin 8. This will feed information from the sensor to the microcontroller.
Part 3
Wiring the Power
- 1. Flip the robot on its side so that you can see the batteries in the pack. Orient it so that the battery pack cable is coming out to the left at the bottom.
- 2. Connect a red wire to the second spring from the left on the bottom. Make absolutely sure that the battery pack is oriented correctly.
- 3. Connect a black wire to the last spring on the bottom-right. These two cables will help provide the correct voltage to the Arduino.
- 4. Connect the red and black wires to the far-right red and blue pins on the back of the breadboard. The black cable should be plugged into the blue rail at pin 30. The red cable should be plugged into the red rail at pin 30.
- 5. Connect a black wire from the GND pin on the Arduino to the back blue rail. Connect it at pin 28 on the blue rail.
- 6. Connect a black wire from the back blue rail to the front blue rail at pin 29 for each. Do not connect the red rails, as you will likely damage the Arduino.
- 7. Connect a red wire from the front red rail at pin 30 to the 5V pin on the Arduino. This will provide the power to the Arduino.
- 8. Insert the push button switch in the gap between rows on pins 24-26. This switch will allow you to turn off the robot without having to unplug the power.
- 9. Connect a red wire from H24 to the red rail in the next available pin to the right of the sensor. This will power the button.
- 10. Use the resistor to connect H26 to the blue rail. Connect it to the pin directly next to the black wire that you connected a few steps ago.
- 11. Connect a white wire from G26 to pin 2 on the Arduino. This will allow the Arduino to register the push button.
Part 4
Installing the Arduino Software
- 1. Download and extract the Arduino IDE.
- 2. Connect the battery pack to the Arduino. Plug the battery pack jack into the connector on the Arduino to give it power.
- 3. Plug the Arduino into your computer via USB. Windows will likely not recognize the device.
- 4. Press ⊞ Win+R and type devmgmt.msc. This will launch the Device Manager.
- 5. Right-click on the "Unknown device" in the "Other devices" section and select "Update Driver Software." If you don't see this option, click "Properties" instead, select the "Driver" tab, and then click "Update Driver."
- 6. Select "Browse my computer for driver software." This will allow you to select the driver that came with the Arduino IDE.
- 7. Click "Browse," then navigate to the folder that you extracted earlier. You'll find a "drivers" folder inside.
- 8. Select the "drivers" folder and click "OK." Confirm that you want to proceed if you're warned about unknown software.
Part 5
Programming the Robot
- 1. Start the Arduino IDE by double-clicking the arduino.exe file in the IDE folder. You'll be greeted with a blank project.
- 2. Paste a simple first program into the blank project: one that continuously drives both servos forward.
- 3. Build and upload the program. Click the right arrow button in the upper-left corner to build and upload the program to the connected Arduino.
- You may want to lift the robot off of the surface, as it will just continue to move forward once the program is uploaded.
- 4. Add a kill switch check that reads the push button on pin 2 and stops both servos when it is pressed.
- 5. Upload the updated program to the Arduino and test it. The complete obstacle-avoidance program is shown in the Example section below.
Community Q&A
- How much time will it take to build a robot? (wikiHow Contributor) This depends on the complexity of the robot. You could build a simple robot in as little as a day. A more complex robot could take several months.
- Why do I need to download the Arduino software? (wikiHow Contributor) Arduino code is very simplified and based off of C++. In addition, other microcontrollers are available with other software.
- What is the meaning of Arduino? (wikiHow Contributor) Arduino is like the operating system for your phone, but it is the OS for robots. There are many such programs, like C++ or Java, etc.
- How do you connect the jumper wires?
- After the program is uploaded, will it work without connecting to a laptop?
- What are the components needed to build a robot?
- Can I use any other motor instead of the Spring RC SM-S4303R servo motor?
Example
The following code will use the sensor attached to the robot to make it turn to the left whenever it encounters an obstacle. See the comments in the code for details about what each part does. The code below is the entire program.
#include <Servo.h>

Servo leftMotor;
Servo rightMotor;

const int serialPeriod = 250;       // this limits output to the console to once every 1/4 second
unsigned long timeSerialDelay = 0;

const int loopPeriod = 20;          // this sets how often the sensor takes a reading to 20ms, which is a frequency of 50Hz
unsigned long timeLoopDelay = 0;

// this assigns the TRIG and ECHO functions to the pins on the Arduino. Make adjustments to the numbers here if you connected differently
const int ultrasonic2TrigPin = 8;
const int ultrasonic2EchoPin = 9;

int ultrasonic2Distance;
int ultrasonic2Duration;

// this defines the two possible states for the robot: driving forward or turning left
#define DRIVE_FORWARD 0
#define TURN_LEFT 1

int state = DRIVE_FORWARD; // 0 = drive forward (DEFAULT), 1 = turn left

void setup()
{
    Serial.begin(9600);

    // these sensor pin configurations
    pinMode(ultrasonic2TrigPin, OUTPUT);
    pinMode(ultrasonic2EchoPin, INPUT);

    // this assigns the motors to the Arduino pins
    leftMotor.attach(12);
    rightMotor.attach(13);
}

void loop()
{
    if(digitalRead(2) == HIGH) // this detects the kill switch
    {
        while(1)
        {
            leftMotor.write(90);
            rightMotor.write(90);
        }
    }

    debugOutput(); // this prints debugging messages to the serial console

    if(millis() - timeLoopDelay >= loopPeriod)
    {
        readUltrasonicSensors(); // this instructs the sensor to read and store the measured distances
        stateMachine();
        timeLoopDelay = millis();
    }
}

void stateMachine()
{
    if(state == DRIVE_FORWARD) // if no obstacles detected
    {
        if(ultrasonic2Distance > 6 || ultrasonic2Distance < 0) // if there's nothing in front of the robot. ultrasonicDistance will be negative for some ultrasonics if there is no obstacle
        {
            // drive forward
            rightMotor.write(180);
            leftMotor.write(0);
        }
        else // if there's an object in front of us
        {
            state = TURN_LEFT;
        }
    }
    else if(state == TURN_LEFT) // if an obstacle is detected, turn left
    {
        unsigned long timeToTurnLeft = 500; // it takes around .5 seconds to turn 90 degrees. You may need to adjust this if your wheels are a different size than the example

        unsigned long turnStartTime = millis(); // save the time that we started turning

        while((millis() - turnStartTime) < timeToTurnLeft) // stay in this loop until timeToTurnLeft has elapsed
        {
            // turn left, remember that when both are set to "180" it will turn
            rightMotor.write(180);
            leftMotor.write(180);
        }

        state = DRIVE_FORWARD;
    }
}

void readUltrasonicSensors()
{
    // this is for ultrasonic 2. You may need to change these commands if you use a different sensor.
    digitalWrite(ultrasonic2TrigPin, HIGH);
    delayMicroseconds(10); // keeps the trig pin high for at least 10 microseconds
    digitalWrite(ultrasonic2TrigPin, LOW);

    ultrasonic2Duration = pulseIn(ultrasonic2EchoPin, HIGH);
    ultrasonic2Distance = (ultrasonic2Duration/2)/29;
}

// the following is for debugging errors in the console
void debugOutput()
{
    if((millis() - timeSerialDelay) > serialPeriod)
    {
        Serial.print("ultrasonic2Distance: ");
        Serial.print(ultrasonic2Distance);
        Serial.print("cm");
        Serial.println();

        timeSerialDelay = millis();
    }
}
Sources and Citations
Article Info
Featured Article
Categories: Featured Articles | Robots
In other languages:
Italiano: Costruire a Casa un Robot, 中文: 在家制作机器人, Русский: построить робота дома, Español: construir un robot en casa, Português: Construir um Robô em Casa, Deutsch: Einen Roboter bauen, Bahasa Indonesia: Membuat sebuah Robot di Rumah, Français: construire un robot chez soi, العربية: صنع إنسان آلي في المنزل, हिन्दी: घर पर रोबोट बनायें (Kaise, Kare, Robot)
Thanks to all authors for creating a page that has been read 809,088 times.
About this wikiHow
Reviewed by: Atharva Chitre
wikiHow Technology Team
This version of How to Build a Robot at Home was reviewed by Atharva Chitre on June 7, 2016. | http://www.wikihow.com/Build-a-Robot-at-Home | CC-MAIN-2016-26 | refinedweb | 2,219 | 63.59 |
Computer Science Archive: Questions from April 14, 2008
- iscout123 asked1a) The member variables of a class must be of the same type. b) The member functions of a class mus3 answers
- Anonymous askedHow would you sort four decimal numbers in ascending ordescending order in JFLAP with turing m... Show morehelloHow would you sort four decimal numbers in ascending ordescending order in JFLAP with turing machine?• Show less0 answers
- Anonymous askedsuppose we want to find the longest common subsequence of 3 sequesncs. give a dynamic algorithm for ... More »1 answer
- Anonymous askedtoSt... Show more7 answers
- Anonymous askedThis web basedapplicatio... Show moreProject Title :-Assets Management& Operational System• Show lessProject Description This web basedapplication maintains record of all assets installed in allbranches of an organization including the Head Office. It alsoprovides the facility to make a request for any item required byany employee of the organization. Flow of making a request would beas follows:
2 answers
- A normal employee would make a request to his head. Therespective head will check the feasibility. If feasibility is ok,then the head will forward the request to the store keeper. In caseif the head rejects the request he will put some remarks againstthe request.
- The head will make a request directly to the storekeeper. Store keeper will accept or reject the request as statedabove.
- Anonymous asked3 answers
- Anonymous asked
What is Mac, and how is it different from PC? Which one isbetter for... Show more
What is Mac, and how is it different from PC? Which one isbetter for professionalusers, and why?• Show less0 answers
- Anonymous askedA client and a Web server are connected by a directlink with capacity C. The client retrieves an obj... Show more
A client and a Web server are connected by a directlink with capacity C. The client retrieves an object of the sizeequals 15MSS. The RTT is assumed to be constant. Ignoring protocolheaders determine the time to retrieve the object when
a. MSS/C>RTT;
b. MSS/C+RTT> 4MSS/C
c. 4MSS/C> MSS/C+RTT>2MSS/C• Show less1 answer
- Anonymous askedSuppose a TCPconnection with window size 1, looses every other packet. Thosethat do arrive have RTT... Show more
Suppose a TCPconnection with window size 1, looses every other packet. Thosethat do arrive have RTT = 1 second. Consider the case when after apacket is eventually received, we resume with TimeOut initializedto the last exponentially backed-off value used for the timeoutinterval. What happens?• Show less2 answers
- Anonymous asked0 answers
- Anonymous asked1 2 3 4 5... Show more
Write a C++ nested for loop code to print out the following onthe screen
1 2 3 4 5 67 8 9
1 2 3 4 5 67 8
1 2 3 4 5 67
1 2 3 4 56
1 2 3 45
1 2 34
1 23
12
1• Show less2 answers
- Anonymous asked
Show the exact output of the following codesegments:
(a) for(x=0; x<20; x=x+2)
cout<< x << ‘ ‘;
cout... Show more
Show the exact output of the following codesegments:
(a) for(x=0; x<20; x=x+2)
cout<< x << ‘ ‘;
cout<< endl ;
(b) i=10;
for (;i>0; i =i/2;)
cout<< i;
(c) int x, y;
for (x=1;x>=5; x--)
{
for (y=x;y>=1; y--)
cout<< x*y << ‘\t’;
cout<< endl;
}• Show less2 answers
- Anonymous askedoutputs in the order as they... Show more
Show the output of the following program: (make sure to listthe program
outputs in the order as they are displayed on thescreen)
#include<iostream>
usingnamespace std;
intFnc(int);
int x,y;
intmain()
{
intx=3;
y=x;
x =Fnc(x+2)+ 15;
cout<< "In main after calling Fnc, x = " << x << ", y= " << y << endl;
x = Fnc(Fnc(y) ) ;
cout<< "In main after calling Fnc, x = " << x << ", y= " << y << endl;
return0;
}
int Fnc(int z)
{
inty=10;
z =z+y;
x =z;
cout<< “In Fnc, x= “ << x << “,y=” << y << “, z=” << z<< endl;
returnz;
}• Show less1 answer
- Anonymous askedWrite a C++value retu... Show more
Given thefollowing declarations:
const intNUM_STUDS=25;
intscores[NUM_STUDS];
Write a C++value returning function (show the function definition only) thattakes scores as parameters and returns the mode of the scores stored in arrayscores. (mode is the most frequent value)• Show less1 answer
- Anonymous asked
Given anarray of names:
const intSIZE=30;
stringnames[SIZE];
… //code eliminated here
i=0;
while((i... Show more
Given anarray of names:
const intSIZE=30;
stringnames[SIZE];
… //code eliminated here
i=0;
while((i<SIZE)&&myIn>>names[i]) i++;
actualSize= i;
cout<< “Enter a new name:”;
cin>> newName;
// Here ishow the new function will be called
SortedInsert(names, newName, actualSize);
The namesstored in the array are in alphabetically ascendingorder.
Write a C++function that takes a new name and insert the name into the array,such that after the insertion, the names in the array is still inalphabetically ascending order. Show the function definitiononly.• Show less1 answer
- Anonymous askedcan someonehelp me adjust my program below? what i need to do and can't reallyunderstand is to have... Show morecan someonehelp me adjust my program below? what i need to do and can't reallyunderstand is to have my command below should save theloopcounter, compare min with H and if they are equal thenchange H to the object following H and return, try to find theobject which comes before the min, test for min==T and ifthat’s true delete the object at the tail andreturn, and give the single instruction needed to delete minwhen it is in the middle. can some one please help me w/ this? imnot completely lost but im lost. thanks.friendvoid deleteyoung (applicant *&H, applicant *&T){
applicant*looppointer,*min;min=H;looppointer=H;
while(looppointer!=NULL) {
cout<<"\nExecuting the loop.";
if((*looppointer).age<(*min).age){min=H;
}looppointer=(*looppointer).link;
}
if(min!=NULL) {
cout<<"\nDeleting the youngest worker, : ";
(*min).printworker( );
}
return;
}for the partthat comes before the patial code above, i dont quite know how tohave it count the qualifying applicants and then, after theloop, print the count. the code i have below just prints them outbut im supposed to print the count.• Show lessfriendvoidcountgenderexper(applicant*H,charG,intY){
//write code to gohere
while(H!=NULL){
if((*H).sex == sex && (*H).experienceyears>= years){
(*H).print( );
}
H=(*H).next; //thiscommand does NOT change head in the mainprogram
}
}0 answers
- Anonymous askedi5using the program down below, can some one show me how to do these three things im missing from... Show morex.øi5using the program down below, can some one show me how to do these three things im missing from my program and dont quite understand how to do? thanks. also, the last two sections of the program isnt correct completely, so if anyone has any insight w/ that can you show me what mistakes i made? thanks again.INSTRUCTIONS:
add one additional function to the program given which processes a queue of job applicants.
The new function will print a count and also print a list of applicants. The count will be the number of applicants who are below a user specified age. The candidates printed will include those who are below the specified age and who also have less experience years than the applicant who comes immediately after them in the queue.
a) Give the header for this function which would be included in the public part of the class declaration.
b) Fill in the code which would appear in the main function to allow the user to select this option:
if(userchoice==’F’) {
//fill in code here
}
c) Give the code for the function which would appear after the main program.
(you can also include in the printed list of applicants those who are below the specified age and who also have less experience years than at least one applicant who comes anywhere after them in the queue.)PROGRAM:
#include "stdafx.h"
using namespace std;
#include<iostream>
#include<fstream>
class applicant{
private:
char name[20], sex;
int age, experienceyears;
applicant *next;
public:
void input( );
void print( );
friend void printqueue(applicant *H);
friend void dequeue(applicant *&H, applicant *&T);
friend void countgenderexper(applicant*H, char G, int Y);
friend void deleteyoung(applicant *&H, applicant *&T);
friend void enqueue(applicant x, applicant *&H, applicant *&T);
};
int main( ) {
applicant *head, *tail;
head=NULL;tail=NULL;
applicant tempapplicant;
char userchoice=' ',usersex;
int userexpyears;
while(userchoice!='Q') {
cout<<"\nEnter A to add a applicant to the tail of the queue.";
cout<<"\nEnter P to print all the applicants in the queue.";
cout<<"\nEnter D to delete the youngest applicant in the queue.";
cout<<"\nEnter E to print a count of all applicants with specified experience and gender";
cout<<"\nEnter Q to quit.";
cin>>userchoice;
if(userchoice=='A') {
tempapplicant.input( );
enqueue(tempapplicant, head, tail);
}
if(userchoice=='P') {
printqueue(head);
}
if(userchoice=='D') {
deleteyoung(head, tail);
}
if(userchoice=='E') {
cout<<"\nEnter gender: ";
cin>>usersex;
cout<<"\nEnter years of experience: ";
cin>>userexpyears;
countgenderexper(head,usersex,userexpyears);
}
} //End of loop that processes applicants
char r;cin>>r;
}
void applicant::input( ) {
cout<<""nEnter name: ";cin>>name;
cout<<"\nEnter sex (m/f): ";cin>>sex;
cout<<"\nEnter years of experience"; cin>>experienceyears;
cout<<"\nEtner age";cin>>age;
next=NULL;
}
void applicant::print( ){
cout<<"\napplicant has the following values:";
cout<<"\nName is: "<<name;
cout<<"\nThe sex is "<<sex;
cout<<"\nThe years of experience are "<<experienceyears;
cout<<"\nThe age is "<<age<<"\n";
}
void printqueue(applicant *H) {
while(H!=NULL) {
(*H).print( );
H=(*H).next; //this command does NOT change head in the main program
}
}
void enqueue(applicant x, applicant *&H, applicant *&T) {
applicant *newapplicant;
newapplicant = new applicant;
*newapplicant = x; /*The new memory now contains all the
data that was in x.*/
if(T==NULL) { //nobody in the queue
H=newapplicant;T=newapplicant;return;
}
//since the function didn't return there is alredy somebody in the queue
// the next pointer in (*newapplicant) is already NULL from the input
(*T).next=newapplicant; /*Makes the previously last applicant in the
queue contain the address of the applicant we are adding.*/
T=newapplicant; //makes the new applicant the tail of the queue.õ>friend void countgenderexper(applicant*H, char G, int Y) {
//write code to go here
while(H!=NULL){
if((*H).sex == sex && (*H).experienceyears >= years){
(*H).print( );
}
H=(*H).next; //this command does NOT change head in the main program
}
}
friend void deleteyoung(applicant *&H, applicant *&T) {
//write code to go here
//quit if nobody to delete
if(H==NULL)
return;
cout<<"\nDeleting this young person:";
(*H).printworker( );
H=(*H).link;
if(H==NULL)
T=NULL;
return;
}1 answer
- Anonymous asked0 answers
- Anonymous askedPOP3 allows users tofetch and download e-mail from a remote mailbox. Does this meanthat the internal... Show more
POP3 allows users tofetch and download e-mail from a remote mailbox. Does this meanthat the internal format of mail boxes has to be standardized soany POP3 program on the client side can read the mailbox on anymail server? Discuss your answer.• Show less1 answer
- Anonymous askedEachinstruction is i... Show more
Question: What will be the value of AX after the executionof each instruction? Eachinstruction is independent of others.
If AX=44FF and BX=011F
a. xor ah, bh
b. ror bx, 2
c. not ah
d. and ax, bx
e. rcl ax, 2 if carry flag=0• Show less1 answer
- Anonymous askedP... Show moreConsider two programs having three types of instructions given asfollows.
Compare both the programs for the following parameters:1. Instruction count(IC)2. Speed of execution(ET)• Show less1 answer
- Crusader askedA paint company has determined that for... Show moreI need a program written using Java that does the following:A paint company has determined that for every 115 feetof wall space, one gallon of paint and 8 hours of labor will berequired. The company charges $18 per hour for labor. Write a program that allows the user to enter the number of roomsto be painted and the price of the paint per gallon. Itshould also ask for the square feet of wall space in eachroom. The program should have methods that return thefollowing data:-the number of gallons of paint required-the hours of labor required-the cost of the paint-labor charges-total cost of paint jobThen it should display this data on the screen. Be sure toinclude comments in each method that fully describe the purpose,input and output of the method. Use JOptionPane windows for inputand output. Be sure to format the decimals and round to twoplaces behind the decimal when giving the costs. Also DO NOT assume that every room will have the samesquare footage.• Show less1 answer
- Anonymous askedConsider the following function written in the C programminglanguageint partition(int A[], int n) {... Show moreConsider the following function written in the C programminglanguageint partition(int A[], int n) {
int i, j; /* array indices */
int x; /*pivot item */
int temp ;
x = A[0] ;
i = -1 ;
j = n ;
while (1) {
do {
j-- ;
} while (A[j] >x) ;
do {
i++ ;
} while (A[i] <x) ;
if (i < j){
temp = A[i] ; /* swap A[i] and A[j] */
A[i] = A[j] ;
A[j] = temp ;
} else {
return j ;
}
}
}
1) Suppose that partition(A, n) is called with an arrayA holding the 12 items
16, 22, 12, 8, 15, 11, 10, 7, 14, 5, 9, 24
and with n equal 12. How is A rearranged whenpartition returns? What is the meaning of the valuereturned by partition? (Hint: thepartition() function might be used in Quicksort.)
2) Develop a loop invariant for the outer while loopthat can be used to prove that partition() does what yousay it does. Argue that the loop invariant holds initially and foreach iteration of the outer while loop.
3) What does the loop invariant say when the outerwhile loop terminates (using the return statementin the else part of the if statement)?• Show less0 answers
- Anonymous askedI'm trying to make a GridLayout with the gridlines displayed. I haven't figured out how to do it... Show more
I'm trying to make a GridLayout with the gridlines displayed. I haven't figured out how to do it aftermuch effort. I'm trying to make an empty Tic-Tac-Toe boardusing a 3 by 3 GridLayout. If someone can help, that would beappreciated. Thanks.• Show less1 answer
- Praggy asked1 answer
- Anonymous askedI'm writing a statistics program in C++ and was wondering if it's possible to put the normal curve (... More »0 answers
- Anonymous askedQuestion 1)Show the result of inserting1,2,3,4,5,12,11,10,9,8,7,6 (in the given order) in aninitiall... Show more
Question 1)Show the result of inserting1,2,3,4,5,12,11,10,9,8,7,6 (in the given order) in aninitially empty AVL tree. Show the AVL tree for the followingcases a) after the insertion of 11, and b) after insertion of 6.Make sure you mark the balance factor for each node in thetree.
Question 2)Show the result of inserting1,2,3,4,5,12,11,10,9,8,7,6 (in the given order) in an initiallyempty Red-Black tree using top-down method. Show thetree for the following cases a) after the insertion of 5, andb) after insertion of 6. Make sure you mark the colors.• Show less0 answers
- Anonymous askedWell i need a little help from the C++gods if there are any out there.i know this is asking quite a... Show moreWell i need a little help from the C++gods if there are any out there.i know this is asking quite a bitbut it would help me out tremendously! I have attempted thisassignment numerous times but have gotten no where with it fast.any help is greatly appreciated!
i must write a progam that does the following:
1. reads a single input record from the file transactions.dat whichincludes
it contains theinitial number of each coun and currency type in a till frompurchases are made. located ()
5 5 10 20 40 50 40 50
2.83 3.00
2.34 20.00
2.53 100.00
2.30 100.00
0.99 1.00
3.21 3.21
0.98 20.00
9.32 10.00
5.78 10.00
then i must echo this information into the text file change.dat
2. read an arbitrary number of input records from the same file,each containing shop transactions consisting of purchase costfollowed by the amount tender by the customer to pay for thepurchase. For each transaction print to text file change.dat thefollowing:
cost
payment
change and the number of each coin and banknote in the change
the resulting till contents. (assuming money from purchase goes ina lock box)
this is all i have:
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
void makeChange);
int main()
{
float cost, payment, change;
int twentiesInTill, tensInTill, fivesInTill,dollarsInTill, quartersInTill, dimesInTill, nickelsInTill,penniesInTill,
twentiesInChange, tensInChange, fivesInChange,dollarsInChange, quartersInChange, dimesInChange, nickelsInChange,penniesInChange;
ifstream infile;
ofstream outfile;
infile.open("transactions.dat");
if (infile.fail()){
cerr << "Can't opentransactions.dat for infile.\n";
system ("PAUSE");
return 1;
}
infile>>twentiesInTill>>tensInTill>>fivesInTill>>dollarsInTill>>quartersInTill>>dimesInTill>>nickelsInTill>>penniesInTill;
while(infile){
infile >> cost >> payment;
change = payment*100-cost*100;
system ("PAUSE");}
system ("PAUSE");
infile.close();
outfile.close();
return 0;
}
please help if you can!!
thanks
• Show less0 answers
- Anonymous askedNot just any greedy approach to the activity selection problemwill yeils a maximum sized set of mutu... Show moreNot just any greedy approach to the activity selection problemwill yeils a maximum sized set of mutually compatibleactivities1. Give an example to show that the approach of choosing theactivities with the earliest start time from those that arecompatible with previously selected activities does not work.2. Do the same for the apporach of selecting the activity withsmallest value of start times finish time.3. Do the same for the apporach of selecting the activity ofshortest duration. For each of your examples, give a graphicalrepresentation of the activities along with a list showing thestart and finish times.• Show less2 answers
- Anonymous askedUse the algorithm to find a maximum sized set of mutuallycompatable activities for set of activities... Show moreUse the algorithm to find a maximum sized set of mutuallycompatable activities for set of activities______________________________________________________i 1 2 3 4 5 6 7 8Si 1 8 5 2 3 6 7 4Fi 5 8 7 4 6 8 9 7• Show less1 answer
- Anonymous askedMe and my partner attempted this and didn't get real far with theprogram. We need to use strings and... Show moreMe and my partner attempted this and didn't get real far with theprogram. We need to use strings and string functions. Also, loops.I will try to explain it and show the code we have got.
I have 2 documents named Emma and Letter. The file contained Emmahas a lot of introductory material which I need to skip. Easiestway is to hunt for the string "Chapter I". Then start withfile position marker on the first line. The way this works is withgroups of 3 words from letter.txt. The first word says how manylines you need to move down in the text by the number of letters inthe words. Second word tells you the word on that line by thenumber of letters in the word. Third word, use the length of theword to find the particular letter. Repeat thus until the letter orcharacterrepeating.
My code(which is not much):
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
int main()
{
ifstream inputFile;
ifstream input;
string a, w, x, z;
a = "CHAPTER I";
inputFile.open("Emma.txt"); // opens the txtfile
input.open("Letter.txt");
while (w != a)
getline(inputFile, a);
while (!input.eof())
getline(input, z);
input >> ;
return 0;
}
EDIT: Here is the original text of the information on how to do itand a screen shot of the ouput.
The document that contains the message is Emma. The free textversion, obtained from project
Gutenburg, is available for you use in webCT. The document thattells you how to find the secret
message is letter.txt, also in webCT. When you read it,you’ll find some things that look strange or
misspelled, like someone wrote it in a hurry. Don’t make anycorrections to the file; it really is just
as it should be in order to find the secret message.
The file that contains Emma has a lot of introductory material– mostly about project Gutenburg.
You’ll need to skip all of that. I found it easiest to huntfor the string “Chapter I”. Then you’ll want
to start with your file position marker (think of it as a cursor)on the first line.
The way the key works is with groups of 3 words. The first wordsays how many lines you need to
move down in the text by the number of letters in the word. Thesecond word tells you the word
on that line by the number of letters in the word. To find theparticular letter, use the length of the last word of the group of3.
You’ll repeat this until the letter or character found is aperiod (.).
-from what i remember teacher said something about a loop that run3 words at a time in letter.txt and finds the letter inemma.txt
Screen shot:
• Show less0 answers
- Anonymous askedPrompt the user t... Show moreThis question uses SSH secure shellprogram.Write a C-shell script that is a menu.
Prompt the user to select a command from these possibilities:
date , cal , ps, who
The command ps is process status.
If the user selects the letter:
a , execute the commanddate
b , execute the commandcal
c , execute the commandps
d , execute the commandwho
If the user enters anything besides these choices,• Show less
tell them they have an invalid selection, try again.1 answer
- Piedpiper askedThe user must enter... Show moreAssignment
A password testing program. Use JOptionPane for both inputand display.
The user must enter a password that meets these requirements:
o at least 8 characters in length, no spaces allowed
o contains at least one uppercase and one lowercaseletter
o contains at least one number (digit)
Also need comments on this !!!
If the password is bad, display an appropriate message. Givethe user up to three chances.
If the password is good, enter the password a second time and checkfor a match.
If OK, display a "password accepted" message.
Hint: Build this program insteps.
• Show less1 answer
- Anonymous askedWrite a program that will read a file containing student IDnumbers and associated test scores. The f... Show moreWrite a program that will read a file containing student IDnumbers and associated test scores. The file must be namedstutest1.txt. The 1st line of the file will contain an interger inthe 1st 3 columns. This will be the number of students in the file.Each student will have exactly 4 associate scores in thecolumns.Sample of the text file. IDs scores and numbers of line willchange but the formatting will be constant.
00234523 100 9590 90
03639454 100 95100 98
58392938 92 87 8590
93842345 80 90 9268
28274002 100 100100 100
12936530 80 85 9095The output should be a nicely formatted table (with headings)showing each students Id number (ID num's may begin with 0's )their exam scores and their averages The average for each exam setshould be shown as well. All Averages should be displayed to 2decimal placesif the data in the sample above was used in this program thisshould appear something like this in output:
Name Test 1 Test2 Test 3 Test 4 Average
00234523 100 9590 90 93.75
03639454 100 95100 98 98.25
58392938 92 87 8590 88.50
93842345 80 90 9268 82.50
28274002 100 100 100 100 100.00
12936530 80 85 9095 87.50
Averages 92.0092.00 92.83 90.17 91.75Press any key tocontinueNotes + Specifications:Although the sample file has 6 students, the program should be ableto handle anything between 1 to 100 students. FThe file must beread and closed prior to outputting anything. The file specified inthe fopen() call may not contain any path; it must simply bestutest1.txt• Show less1 answer | http://www.chegg.com/homework-help/questions-and-answers/computer-science-archive-2008-april-14 | CC-MAIN-2014-52 | refinedweb | 4,003 | 63.59 |
I wrote this code and it will not work but i am convinced this is correct? Can someone please enlighten me?
return (num > 0) ? "positive" : (num < 0) ? "negative" : (num === 0) ? "zero";
Thanks in advance,
Hassan
Hi, when I click on "Run all tests" nothing happens… the only error I notice is that the last "}" has a red underline…
I think the best way to approach this problem is to follow the format of the example given on the left hand side of the screen.
My understanding of it (and please someone feel free to call me out on this because I’m not 100% sure) is that in your format you are essentially saying:
IF (num > 0) { return “positive”};
ELSE IF (num < 0) { return “negative”};
ELSE IF (num ===0) { return “zero”};
and with conditional operators this will not work because you don't have a final ELSE statement. You therefore need to find a way to turn your last ELSE IF statement into just an ELSE.
it worked thank you so much! I changed the code to what you said…
return (num > 0) ? "positive" : (num < 0) ? "negative" : "zero";
This is my first time posting here and with community responding it made me understand my problems better.
Thanks zdflower and hogues.uk ^_^.
I think it could work this way:
return (num > 0 ? "positive" : (num < 0 ? "negative" : (num === 0 ? "zero" : "not possible")));
And you need both the result for the true branch and for the false branch. That’s why the “not possible”.
You are not returning a value in every case. This is one situation where a code formatter would help you debug this.
I think you can debug this yourself. Go to and type in your statement word by word. Don’t paste it in because it’ll blow the error trace. Do that and it’ll give you a hint on what’s going on.
Wow this is awesome thanks for telling me about the website. I am going to paste so much code into here
so far it did not like " but preferred single like this ’ haha …
I should have mentioned this –
This site is for formatting code correctly, with minor options. Quotes and semi-colons can be toggled on the bottom left button, show options. These options are personal and don’t affect how code runs.
Also, if you use an editor like VScode, you can install this plugin.
I got stuck on this one for a while, until I enclosed it with the curly brace } after checkSign(10), with a semi-colon in there (which isn't in the original code, so I assumed it wasn't needed). I tried a few solutions here, but none worked; it was only after doing the things mentioned above that it finally passed:
function checkSign(num) {
return (num == 0) ? "zero" :
(num > 0) ? "positive" :
(num < 0) ? "negative" :
checkSign(10);}
You have 3 ternary operators, but it can be solved only using 2. See if you can only use 2 ternary operators.
function checkSign(num) {
return (num == 0) ? "zero" :
(num > 0) ? "positive" : "negative":
checkSign(10);}
Your code has been blurred out to avoid spoiling a full working solution for other campers who may not yet want to see a complete solution.
Thank you.
Hi, I'm not sure if this issue is still active or not, but I solved the problem as below:

return num > 0 ? "positive" : (num < 0) ? "negative" : (num === 0) ? "zero" : " ";
return (num === 0) ? "zero" : (num > 0 ) ? "positive" : "negative";
@freecodecamp-team:
The description for this Challenge is not clear enough!
You need to tell the user that the STRINGS "positive", "negative", and "zero" should be returned.
BE SPECIFIC, because it’s the daily bread for a Developer to be!
just my 2 cents but don't use multiple ternary operators inline please. you may save a few lines of code but it makes an unreadable mess for anyone who has to maintain your code.
^ Please listen to this suggestion.
Multiple ternary operators are actually pretty elegant if you lay them out like a table:
noise = "dog" ? "bark" : "cat" ? "meow" : "horse" ? "neigh" : "bird" ? "chirp" : ""
Just don’t try this in PHP, which gets the associativity wrong | https://www.freecodecamp.org/forum/t/basic-javascript-use-multiple-conditional-ternary-operators/197326 | CC-MAIN-2019-22 | refinedweb | 725 | 75.2 |
I'm attempting to recreate the Unix shell program "todo.txt" using C++ on Windows. An example use of my program in the Command Prompt would be:
>todo /A "I like cheese."
which would append the item "I like cheese." to the file "todo.txt". In my program, there is a line which checks the arguments for the "/A" trigger:
if (argv[1] == "/A") { ... }
My problem is that when I use the "/A" trigger in the Command Prompt, the if statement above evaluates to false (???). I've noticed that the argument "argv[]" is an array of char arrays:
int main (int argc, char* argv[]) { ... }
so I have attempted to check for the "/A" argument against another array of chars instead of a string:
char A[2] = {'/', 'A'}; if (argv[1] == A) { ... }
but this also returns false. I've attempted to debug this by printing out what argv[1] is in the console, and it is in fact the string "/A" but for some reason
(argv[1] == "/A")
insists on returning false. The funny thing is it was working when the program was simpler and then decided to make no sense later on. Anyone have any ideas? Thanks.
My program is still pretty small, so I'll post it here:
/* This project is an attempt to recreate Gina Trapani's
 * todo.txt Unix shell CLI in C++ for Windows. I understand
 * that one can most likely use her program on Windows in
 * some fashion, but this is mostly a learning experience. */
#include <iostream>
#include <fstream>
#include <string>
using namespace std;

int main (int argc, char* argv[])
{
    // Check to make sure the user has input the correct number of arguments
    if ((argc < 3) | argc > 3) {
        cout << "The syntax of the command is incorrect.\n";
    } else {
        // Debugging
        cout << "argc == " << argc << "\nargv[0] == " << argv[0]
             << "\nargv[1] == " << argv[1] << "\n\n";

        // Check if the user is adding a todo item
        // Alternate check (char array instead of string):
        /*char A[2] = {'/', 'A'};
        if (argv[1] == A) {*/
        if (argv[1] == "/A") {
            // Print the item to the todo.txt text file
            ofstream todo ("todo.txt", ios::app);
            if (todo.is_open()) {
                todo << argv[2] << "\n";
            } else {
                cout << "Unable to open todo.txt. Please check your permissions.\n";
            }
        // Check if the user is requesting help
        } else if (argv[1] == "/?") {
            cout << "A shell interface to manage a todo text file.\n\n"
                 << "TODO (/A | /R) item\n\n"
                 << "  item  Specifies the item to be modified in the todo text file.\n"
                 << "  /A    Indicates the item will be added to the todo text file.\n"
                 << "  /R    Indicates the item will be removed from the todo text file.\n";
        } else {
            cout << "The syntax of the command is incorrect.\n";
        }
    }
    return 0;
}
*Edit: I followed the instructions from the following tutorial from cplusplus.com: | https://www.daniweb.com/programming/software-development/threads/347839/cmd-line-parameter-confusion | CC-MAIN-2016-44 | refinedweb | 463 | 82.04 |
, ...
KILL(2) OpenBSD Programmer's Manual KILL(2)
NAME
kill - send signal to a process
SYNOPSIS
#include <signal.h>
int
kill(pid_t pid, int sig);
DESCRIPTION
The kill() function sends the signal given by sig to pid, a process or a
group of processes. sig may be one of the signals specified in sigac-
tion pro-
cess group ID of the sender, and for which the process has per-
mission;.
Setuid and setgid processes are dealt with slightly differently. For the
non-root user, to prevent attacks against such processes, some signal de-
liveries are not permitted and return the error EPERM. The following sig-
nals are allowed through to this class of processes: SIGKILL, SIGINT,
SIGTERM, SIGSTOP, SIGTTIN, SIGTTOU, SIGTSTP, SIGHUP, SIGUSR1, SIGUSR2. receiv-
ing process. When signaling a process group, this error is
returned if any members of the group could not be signaled.
SEE ALSO
getpgrp(2), getpid(2), sigaction(2), killpg(3)
STANDARDS
The kill() function is expected to conform to IEEE Std1003.1-1988
(``POSIX'').
OpenBSD 2.6 April 19, 1994 2 | http://www.rocketaware.com/man/man2/kill.2.htm | crawl-002 | refinedweb | 177 | 63.29 |
>>.'"
K?: (Score:3, Informative)
While I'm not disputing the usefulness of their bindings, I'd describe them as "working", but not necessarily "superb". Their API is not very pythonic or concise and feels pretty much like writing C++, without the segfaults
:P
Re:Kudos to Nokia (Score:4, Interesting)
The point of PyQt is to remain faithful to the official C++ Qt API, making your skills & docs directly transferrable and code easy to port over in either direction.
PyQt has added a new, more pythonic API for signals, and PySide will do the same thing eventually. But it's essential that we retain almost-1:1 C++ mapping (though losing trivial stuff like QString).
Re: : :5, Interesting)
I'm personally a fan of what Nokia is doing. In general, the big GUI libraries need to be LGPL or BSD to gain the widest acceptance. Requiring license fees for non-GPL leaves companies like Nokia flapping in the wind with no solution. This is also why GTK gained so much momentum at Qt's expense. This allows me to learn one way to write code, and to be able to either contribute it to the open-source community (which I do often), or to sell it through my work (which I actually get paid for).
However, this is a troubling new way for a big company to crush a small one... "Give me your technology for free, or I'll rewrite it and then give it to the world for free." It sounds a bit like Microsoft.
Re:Kudos to Nokia (Score, Informative)
How did this get modded up to 5? There's no indication anywhere that Nokia demanded PyQt for free.
Re:Kudos to Nokia (Score:4, Interesting)
This is also why GTK gained so much momentum at Qt's expense.
At the same time, what would Qt be without the license income from commercial licenses? Nokia can justify putting money into it to support their products, but Trolltech was a software company. I did look at both GTK and Qt for hobby development and Qt was much higher quality in my personal opinion, and that's because they had real income to hire dedicated developers and make a kick-ass development platform. McDonald's is cheap and popular, that's what GTK is to companies but it doesn't imply that it's good food.
I often find the development of open source projects painfully slow (yes, this is from the and-I-want-a-free-pony-too department) but Qt has always been a positive high note. I love how they take top notch things like WebKit and make them incredibly easy to use in QtWekKit. They're a little bit like Apple, except for developers - they take what's out there and really everybody's doing already but is difficult, package it up in a great way to make it easy for the win.
One thing I really like is that they have, unlike much other open source stuck in the 20th century, embraced long function names and code completion. I just tried to check what the longest function name was and "availableAudioOutputDevicesChanged()" is pretty close. But since it's object oriented, you type the
./-> and the autocomplete list appears. Unlike the fcntl vs iocntl or whatever article that was here recently, it's retarded naming for people still using text editors (let the flamewars begin).
In short, you talk about how much momentum Qt would make, I'm hoping it won't lose any of the momentum it has had. It's basically the standard library C++ should have had to compete with Java/C#...
Re: (Score:3, Insightful)
A bit like Microsoft, yes. But MS would've been more like "Sell/give us your technology for next-to-nothing, or we'll buy someone else's inferior competing software, market the fuck out of it, and ruin you."
Re: lear: (Score:3, Interesting)
The problem with PyQt was that as it was the only python binding, you had no choice but to pay to riverbank. Considering that they don't own Qt themselves, they got a slight "monopoly"/gatekeeper position on other people's code.
Also, PyQt was not all *that* open source, you couldn't fork it because they only release the generated code.: (Score:3, Interesting)
1) complicates the build process
The build process is already needlessly complicated. One more preprocessor won't hurt.
2) pollutes the global namespace terribly (emit, signals, etc.)
"If you're worried about namespace pollution, you can disable this macro by adding the following line to your
.pro file:
CONFIG += no_keywords"
3) slows rebuild as MOC has to inspect the code and regenerate MOC files if needed
gcc is orders of magnitude slower than moc.
4) cannot understand normal and common C++ code (inner class, templates)
6) doesn't know const char * vs. char const * are the same
7) same goes for any other compatible but not strictly-exact prototype
Use the subset that moc understands. Problem solved.
5) causes binding errors that are (maybe not) discovered at runtime
That's a valid point, but what do you recommend to fix it? These errors disappeared when I switched to Qt Creator. There's autocomplete for signals and slots there.
8) adding one more tool/compiler to code generation (to make, compiler, resource compilers, linker..)
Yes, one more.
Re:Kudos to Nokia (Score:5, Interesting)
Full disclosure:
At work, I have a PyQT commercial license. I've had to look into the licensing issues around Qt and PyQt. The following is based upon my reading of the various licences and issues. I am an engineer, not an IP lawyer, and even were I a lawyer, I am not your lawyer - so do your own research.
I am all for supporting companies like Riverbend who offer both GPL and proprietary licenses.
However, there is a complication that Nokia was trying to address here. Whatever license you are using for PyQT you must also be using for Qt, due to the way they are linked.
Now, with Riverbend, the only licenses they offer are GPL and proprietary. That means that if I want to release a proprietary application using PyQt, I must use the proprietary PyQt. However, that means that I now must use the proprietary license for Qt as well. But that now means I have to buy the developer licenses for my team from Nokia - again, not a big deal from an initial monetary outlay.
Now, for my application I cannot use the GPL license because parts of my application are licensed from other people who don't want to GPL their code - it sucks, and I'd rather not deal with it, but when you are in the RF communications business and you have to support CDMA, you HAVE to do business with Qualcomm, and they will NOT change their minds.
So when I ship my app as a Windows or GNU/Linux application, I cannot use the GPL. Now, just considering Qt, I can use the LGPL - the library is dynamically linked against my code, the user can replace the library, all is right with the world.
Except that I cannot use the LGPL for Qt and use PyQt, as PyQt does not support LGPL. So, in case you cannot draw the Venn diagram for yourself, I am left with using the proprietary license for both Qt and PyQt. Now, even though the licensing terms are pretty generous, I still have to track all the licensed code I ship - so you just added a bunch of cost to my accounting of program. This gets even worse when the program is freely available (remember, you can be free and not be Free).
I had contacted Riverbend when Nokia announced the LGPL'ing of Qt, and at that time they said they were considering it. Obviously, they decided they couldn't do it and remain viable as a business - and while that sucks for me, I can certainly understand their point of view. But without the ability to use the LGPL version of Qt from Python, the utility of Qt is greatly diminished. I can understand why Nokia did what they did. Yes, it would have been nice if Nokia could have worked out a way to fund Riverbend such that Riverbend could have LGPL'ed PyQt, but evidently that couldn't happen.: (Score:3, Interesting)
Some of your arguments sound like the arguments of a religious or political fanatic - sorry.
If there aren't a lot of copies of your work already in circulation, or you're worried about someone making a product out of your work and then removing all trace of the master copy, (which I have seen done in the case of some public domain books, for instance) a copyleft license can be an appropriate choice.
It depends also on what your motivations are. If you have no intention of making money from your work:5, Interesting)
This is what Richard Dale (the main author of SMOKE and the Ruby and C# bindings for Qt and KDE, and C, Objective C and Java bindings in the past, to) said [kdenews.org] about PyS.
It looks like PySide are huge (3x the size of PyQt and 6x the size of SMOKE-generated bindings!) and there is very little improvement they can do if they keep on using Boost::Python to generate PySide.
Given that PyQt costs only £350 (roughly 400 EUR) with full support and is much lighter and mature, I can't see why I would use PySide (unless Nokia gives me full, free, support with my commercial C++ license, of course, which I think they won't be doing because they required you to buy a 1000 EUR separate license for Qt Jambi -the Java bindings- ): (Score:3, Interesting)
Given that PyQt costs only ã350 (roughly 400 EUR) with full support and is much lighter and mature, I can't see why I would use PySide
For me, it's partly philosophical. Python is available under a BSD-like license. Qt is available under the LGPL. However, to make Python talk to Qt, you currently have to add the extra restrictions of the GPL. Not to give short shrift to Riverbank, but I thought it was pretty silly that the most restrictive license in that chain was in the glue that connected the more permissive components.
If PySide makes it to Windows soon, this will probably be my company's GUI development platform. I'd always. | https://developers.slashdot.org/story/09/08/30/0823206/nokia-makes-lgpl-version-of-pyqt?sdsrc=nextbtmnext | CC-MAIN-2017-17 | refinedweb | 1,765 | 67.59 |
Introduction to Spark Structured Streaming - Part 3 : Stateful WordCount third post in the series. In this post, we discuss about the aggregation on stream using word count example. You can read all the posts in the series here.
TL;DR You can access code on github.
Word Count
Word count is a hello world example of big data. Whenever we learn new API’s, we start with simple example which shows important aspects of the API. Word count is unique in that sense, it shows how API handles single row and multi row operations. Using this simple example, we can understand many different aspects of the structured streaming API.
Reading data
As we did in last post, we will read our data from socket stream. The below is the code to read from socket and create a dataframe.
val socketStreamDf = sparkSession.readStream .format("socket") .option("host", "localhost") .option("port", 50050) .load()
Dataframe to Dataset
In the above code, socketStreamDf is a dataframe. Each row of the dataframe will be each line of the socket. To implement the word count, first we need split the whole line to multiple words. Doing that in dataframe dsl or sql is tricky. The logic is easy to implement in functional API like flatMap.
So rather than working with dataframe abstraction, we can work with dataset abstraction which gives us good functional API’s. We know the dataframe has single column value of type string. So we can represent it using Dataset[String].
import sparkSession.implicits._ val socketDs = socketStreamDf.as[String]
The above code creates a dataset socketDs. The implicit import makes sure we have right encoders for string to convert to dataset.
Words
Once we have the dataset, we can use flatMap to get words.
val wordsDs = socketDs.flatMap(value => value.split(" "))
Group By and Aggregation
Once we have words, next step is to group by words and aggregate. As structured streaming is based on dataframe abstraction, we can use sql group by and aggregation function on stream. This is one of the strength of moving to dataframe abstraction. We can use all the batch API’s on stream seamlessly.
val countDs = wordsDs.groupBy("value").count()
Run using Query
Once we have the logic implemented, next step is to connect to a sink and create query. We will be using console sink as last post.
val query = countDs.writeStream.format("console").outputMode(OutputMode.Complete()) query.start().awaitTermination()
You can access complete code on github.
Output Mode
In the above code, we have used output mode complete. In last post, we used we used append mode. What are these signify?.
In structured streaming, output of the stream processing is a dataframe or table. The output modes of the query signify how this infinite output table is written to the sink, in our example to console.
There are three output modes, they are
Append - In this mode, the only records which arrive in the last trigger(batch) will be written to sink. This is supported for simple transformations like select, filter etc. As these transformations don’t change the rows which are calculated for earlier batches, appending the new rows work fine.
Complete - In this mode, every time complete resulting table will be written to sink. Typically used with aggregation queries. In case of aggregations, the output of the result will be keep on changing as and when the new data arrives.
Update - In this mode, the only records that are changed from last trigger will be written to sink. We will talk about this mode in future posts.
Depending upon the queries we use , we need to select appropriate output mode. Choosing wrong one result in run time exception as below.
org.apache.spark.sql.AnalysisException: Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;
You can read more about compatibility of different queries with different output modes here.
State Management
Once you run the program, you can observe that whenever we enter new lines it updates the global wordcount. So every time spark processes the data, it gives complete wordcount from the beginning of the program. This indicates spark is keeping track of the state of us. So it’s a stateful wordcount.
In structured streaming, all aggregation by default stateful. All the complexities involved in keeping state across the stream and failures is hidden from the user. User just writes the simple dataframe based code and spark figures out the intricacies of the state management.
It’s different from the earlier DStream API. In that API, by default everything was stateless and it’s user responsibility to handle the state. But it was tedious to handle state and it became one of the pain point of the API. So in structured streaming spark has made sure that most of the common work is done at the framework level itself. This makes writing stateful stream processing much more simpler.
Conclusion
We have written a stateful wordcount example using dataframe API’s. We also learnt about output types and state management. | http://blog.madhukaraphatak.com/introduction-to-spark-structured-streaming-part-3/?utm_campaign=Revue%20newsletter&utm_medium=Newsletter&utm_source=SF%20Data%20Weekly | CC-MAIN-2018-34 | refinedweb | 844 | 66.94 |
Suresh Srinivas created HDFS-4923:
-------------------------------------
Summary: Save namespace when the namenode is stopped
Key: HDFS-4923
URL:
Project: Hadoop HDFS
Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
In rare instances the namenode fails to load editlog due to corruption during startup. This
has more severe impact if editlog segment to be checkpointed has corruption, as checkpointing
fails because the editlog with corruption cannot be consumed. If an administrator does not
notice this and address it by saving the namespace, recovering the namespace would involve
complex file editing, using previous backups or losing last set of modifications.
The other issue that also happens frequently is, checkpointing fails and has not happened
for a long time, resulting in long editlogs and even corrupt editlogs.
To handle these issues, when namenode is stopped, we can put it in safemode and save the namespace,
before the process is shutdown. As an added benefit, the namenode restart would be faster,
given there is no editlog to consume.
What do folks think?
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201306.mbox/%3CJIRA.12653948.1371740608956.148928.1371740721503@arcas%3E | CC-MAIN-2017-47 | refinedweb | 198 | 51.78 |
Type '<typename>' is not defined
The statement has made reference to a type that has not been defined. You can define a type in a declaration statement such as Enum, Structure, Class, or Interface.
Error ID: BC30002
To correct this error
Check that the type definition and its reference both use the same spelling.
Check that the type definition is accessible to the reference. For example, if the type is in another module and has been declared Private, move the type definition to the referencing module or declare it Public.
Check that the namespace of the type is not redefined within your project. If it is, use the Global keyword to fully qualify the type name. For example, if a project defines a namespace named System, the System.Object type cannot be accessed unless it is fully qualified with the Global keyword: Global.System.Object.
If the type is defined, but the object library or type library in which it is defined is not registered in Visual Basic, click Add Reference on the Project menu, and then select the appropriate object library or type library. | https://msdn.microsoft.com/en-us/library/sy234eat(VS.80).aspx | CC-MAIN-2017-09 | refinedweb | 185 | 61.46 |
Buckswood News Compendium
The complete school newspapers for 2010 - 2011 Academic Year
BUCKSWOOD SCHOOL . HASTINGS
TJ’s
Thank you for supporting TJ’s. Let me explain the ethos behind TJ’s:

The food
We are following the Food for Life programme that encourages the seller to buy locally and, where possible, to grow or bake products ourselves. You will notice that all of the cakes are made here in the café, and the fresh products are all sourced locally and where possible they are organic products.
We therefore believe in supporting the local farmers and suppliers rather than buying in from the multinational wholesalers. If we buy locally, as consumers, we are also doing ‘our bit’ to help the local economy as well as helping to reduce the carbon footprint (i.e. buying apples locally rather than having them shipped in from abroad).

TJ’s the Business
A group of A Level students studying accounts will be ‘auditing the books’ each week with the Head of Accounts.

The profit and our social responsibility
All of the profits from the café will go towards feeding the very poor HIV children in the rural areas of Swaziland. The money will be transferred to the Buckswood Swaziland bank account (directly, with no admin charges) and PC Sandile (known as Shabby Shirt) will ensure that each week fruit is purchased from local street traders (not shopkeepers). This fruit will then be distributed to the children to boost their vitamin intake each week, which in turn will strengthen their immune systems.

Therefore we are all winners . . .
. You are eating healthier
. You are learning to be responsible consumers
. You are helping to support and sustain a grass-roots local family in Africa
. You are helping a poor and needy child

Thank you
Mr Sutton, Headmaster

NEWS

LSE Success for Buckswood Alumni
Ray Li visited Buckswood School this week to collect his 2010 A Level results after two years of hard work. Ray achieved a remarkable three A*s and two As at A Level and is commencing his undergraduate studies at University College London (UCL) next week. UCL was recently ranked as the 4th best university in the world in the 2010 QS University rankings.

MARAT OMAROV (Buckswood 2004–07)
Marat, Head of School, graduated from the University of York this summer and has gained a post-graduate place at the London School of Economics.

CORNELIUS KOELLN (Buckswood 2000–06)
Cornelius, Head Boy, graduated from the University of Bath this summer and has won a post-graduate place at the London School of Economics, reading for an MSc in Management for two years.
issue 28 . september 2010 .
BUCKSWOOD . THE GLOBAL VILLAGE

Latest News
• Despite the recession we welcome a record number of new students this year to the Buckswood Family.
• We have started another new bus route! We now pick up and drop off at Hurst Green, Cranbrook, Benenden, Northiam and Sedlescombe. For further details please call Dave.
The Stables
Due to the increased popularity of the Stables we welcome a new horse, ‘Poppy’, to the livery. Beginners to advanced riders can have lessons. To book a lesson or to go for a hack across the estate, email Karen at [email protected]
Astro Turf
Early in October our new astro turf will be ready, so we can play even MORE sports. If you are interested in booking this facility then email [email protected]
Splitz Dance Academy
We are proud to be able to announce that Splitz Dance School is now based at Buckswood and pupils can take lessons in Ballet, Modern, Tap and Jazz.
University Success
A school is inevitably judged on its performance, and the university destinations of Buckswood’s Sixth Formers in recent years tell an observer that much good work is being done at A Level. This is ever more important in order to stay ahead of the chasing pack in the jobs market, in what will become an increasingly competitive environment for graduates.

LSE Success: Cornelius Kölln and Marat Omarov take up places at LSE
Open Days
Our next general Open Morning will be 9th October at 10am.
Nigeria: Buckswood is proud to announce the opening of Buckswood Nigeria. We look forward to welcoming pupils from our sister school to the UK.
BUCKSWOOD SCHOOL . ON THE A259 . TEL 01424 813813
TJ’s
We have opened our own 6th Form café where pupils, staff and parents can buy freshly made and locally produced cakes and salads. This initiative follows the Food for Life programme promoting healthy living.

Business
The Business Management Department has expanded to meet the pupils’ requirements. This includes running two mini businesses in the school, which include overseeing the business side of running the Rolls Royce and the setting up of the vineyard. The department is now offering CIM and CMI qualifications.
Buckswood announce delivery of the new school prospectus, now showing all the many facilities on offer within the school. A must for all parents seeking to find that perfect school for their children.
Buckswood Hastings UK
The School 10-16 yrs old
The College 16+ yrs old

Welcome to Buckswood
Buckswood UK
Buckswood St George's
Mr Tim Fish, Headteacher
To get your copy order it via the web on
As you start to turn the pages of our prospectus I’m sure you will begin to get some sort of a feel for who we are, what we do and why you should seriously consider Buckswood as the school to educate your child. We believe that education is holistic and continuous and that our broad aims complement the overall ethos of the school. We offer: traditional values of right and wrong; common decency; politeness and a respect for others; the forging of a culture for service to others less fortunate; the principles of internationalism and global citizenship; personalised care and attention; an academic programme which encourages and rewards learning and achievement; a wealth of extra-curricular activities and opportunities for success; and a lust for life and adventure. After reading the following pages I’m sure you will have a fuller picture of what we want every child to try and accomplish during his or her time at Buckswood. Children need to be challenged, stimulated, encouraged and stretched in equal measure. Come and visit; chat to our students, not our staff – then you’ll have the measure of us.
VITAM PARAMUS
Buckswood Nigeria
BUCKSWOOD A member of the Buckswood Education Group
BUCKSWOOD SCHOOL • GUESTLING • nr HASTINGS • EAST SUSSEX • TN35 4LT • ENGLAND
Tel. +44 (0)1424 813813 • Fax +44 (0)1424 812100
Email: achieve@buckswood.co.uk • Website: buckswood.co.uk
Buckswood School Limited • Registered in England No 3824108

Buckswood Georgia
Buckswood School
Hastings
[email protected]

Buckswood St Nova Institute
Auckland, New Zealand
[email protected]

Buckswood St George’s
St Leonards on Sea
[email protected]
THE BUCKSWOOD EXPERIENCE
Buckswood’s Rolls Royce
After School Timetable (5-6.30)
• Tennis • Basketball • Rugby • Hockey • Football • Netball • Cheerleading • Ballet • Fencing • Polo • Gym • Academic Catch-up
Passport to Success

Buckswood pupils are going places: they have been given their Passports to Success to record their achievements throughout the year. Buckswood believes that every child has a talent and that it is the teacher’s responsibility to cast the net of opportunity so wide that any talent gets a chance to develop, and to ensure that many of these talents are ranked just as highly as academic scholarship. The concept of the average pupil must give way to the concept of an infinite variety of youngsters heading for an infinite variety of success!

Private education is not as expensive as you think . . .
1st Assembly
The message given by the Headmaster

Good afternoon and welcome to the first assembly of the term and the new academic year. The end-of-week assembly here at the local church is, I feel, an important part of the weekly timetable. This gathering is not religious and has no bias towards any one religion, but it is for me, and hopefully for you, a time when you can unwind a bit, have that sense of closure for the week that has just gone by, and feel a sense of achievement. We live in such a busy world: your timetables are busy, every minute of the day is filled with something. In fact I doubt that many of you will say that this week has been a slow week and that time has gone slowly – it is Friday already! So this meeting together every Friday is a time where you can spend a few moments in reflection, in this beautiful old English church on the side of the school estate. I want it to be a time where the Buckswood family come together as a body of people to congratulate each other on the achievements of the week.

Each one of you has a part to play in the smooth running of the school. It will not take long before you feel at home and part of your new Year Group, with new friends and exciting things to do. We are a team, a team determined to win – but we must all be part of that team – like a rugby team (I chose rugby as football is such an odd sport: loads of overpaid men running around a field kicking a bag of wind). Anyhow, we are like a rugby team ready to beat Battle Abbey: we must all follow instructions from our captain, we must put into practice what we have been taught, and we must go into the game with the determination to win. Therefore school is about achievement, about success, about winning. It is about gaining every opportunity that is going – be it joining the football or rugby team, going horse riding, playing the bagpipes, or going to Kip on Saturday morning. The more that you can experience, the wealthier the person you will be.
Oh – I have touched on a dirty word here – wealth. Wealth here in the great Kingdom of Buckswood has a different meaning: yes, we are all financially well off to be able to attend a school like this, but here it is NOT about the watch you have on your wrist or the expensive flash Armani jacket that you may be wearing – and yes, just wait, Saturday is coming and the school will look more like some flash
Paris catwalk – but is this really important? Wealth at Buckswood is what is inside you: it is knowledge and achievement – if you have these, you are a rich man or woman indeed. In fact, let’s take this one step further: a good bit of homework = good exam results = a good place in Upper 6th = a good university = a good job. None of these can be obtained by not working hard – it is a simple, plain fact. Those that did not work hard last term were asked not to return (of course they will tell you something else on Facebook), but they were not good enough for us – they did not understand the equation above; they did not in their hearts want to achieve. Those that did do well and have gone on to LSE or York or other top universities – they are the achievers in life and I am so proud of them. They sat up and listened to the advice their teachers gave them; they were the ones that grabbed life in both hands and ran with it. Oh, and by the way, to those cynical people sitting here – being clever does not make you a boring person; it makes you a well-respected person within a community, be it at school or at home. So here we are, and as I sat at my computer this afternoon typing out these few words of wisdom, I had two emails from old students – reminding me that assembly was in an hour and that I should remind the students of the importance of this little weekly meeting. A final few words that I think are worth mentioning. At school we are very lucky to be able to start afresh – that is, make new resolutions. Here today you can make that commitment to yourself – to be a better person. You may be a new student and no one knows you; you may be an old student or a new member of staff – again, who are you, and what are others thinking about you? During the next few weeks this is the time when others will make judgments about you – oh, this is the boy
that is so cool at rugby, or this is the girl that is fab at maths – or this is the kid that is always naughty, or maybe this is the boy that has smelly socks and always leaves his books lying around in the courtyard. What reputation do you want to give to others? To help you ensure that others around you get a good impression of you, some words of advice. You must follow the rules set – let’s substitute the word rules, as it has a negative meaning, and say you have to abide by the structures that are laid down; these are in your Student Handbook, in The Passport for Success, in the Topics to be Taught book and the 101 Things to Do at Buckswood booklet. This is the structure of the school, but you must know yourself – that is, know your strengths and weaknesses and be able to accept praise and criticism. Learn humility – that is, avoid thinking that you are too important. You are of course unique, but you are only a tiny part of history. Although we rightly stress the importance of the individual, we are all dependent on each other. We are also dependent on the past, and those who will come in the future will depend on us. Be proud of your achievements – however big or small they may be. You must seek the truth, and there is no substitute for hard work and hard thinking. If the truth has an ugly face, or if it is beautiful, don’t sneer at it. The truth is always complex. And therefore be truthful to yourself. Good luck – and I end on a little quote that I mentioned to the staff in the staff meeting: go into this year with a good and happy heart; you alone are master of your destiny and captain of your soul. I hope you achieve everything you want to achieve this year, but it is down to you and you alone to ensure that you Achieve.
GCSE Success at Buckswood

ALAN CHEUNG
Joined Buckswood in 1995 from Guestling Bradshaw Primary School. He achieved 7 A*s, 1 A and 2 Bs and is now taking his A Levels in Biology, Chemistry, Maths, Physics and Law. “I am planning to go to university to read Medicine. I would really like to go into Pharmaceutical Forensics. I really appreciated all the 1:1 help the teachers gave me when I needed it. The teachers are really great and supportive”.
BETHANY ENGLISH-SMITH
Joined Buckswood in 1995 from Beckley Primary School. She achieved 4 A*s and 4 As and is now taking her A Levels in Biology, Chemistry, Physics and Maths. “Buckswood has given me the encouragement to succeed. The teaching is excellent and ‘Access Time’ provided me with extra help when I needed it. I would like to study Medicine or Biochemistry at university”.
IVANA SUMMERFIELD
Joined Buckswood in 1998 from St Richards School. She achieved 1 A* and 6 As and is now taking her A Levels in Economics, English Literature, Law and History. “Buckswood’s teachers encouraged me to learn and study hard even outside school hours, and with all the extra curricular activities and clubs I had lots of fun too”.
An anxious new mum . . . Thank you for making the very difficult task of getting my daughter up for school so easy; today I heard the golden words of “Mum, hurry up, I want to get to school early!” When I picked myself up off the floor, I promptly got in the car (after polishing shoes) and took my daughter, who used to hate going to school, now chirping away about all the wonderful teachers and telling me that even though the spellings are hard, she is going to really try to get the hang of them (she finds spelling very hard, as do I, although I have never told her that). Her day is so filled with wonderful experiences at Buckswood, and on Saturday she is riding in the B team at a horse show on her own horse, which we stable at Buckswood. She also tells me she has joined cross country after school as she likes to run, but that is forbidden on campus! She followed that up with “but that’s a good thing Mum ’cos your shoes will last longer!” She seems so happy, and whilst I always knew since coming to Buckswood for riding that she has Buckswood in the blood, I did not realise what a wonderful opportunity it would be for her to grow into a beautiful, intelligent and clever young lady.
... where the world comes together ...
Once again, thank you to you and all your staff for making the beginning of her future with Buckswood such a memorable and enjoyable one.
Football
Hockey
Hillcrest 0 v 7 Buckswood
Buckswood vs Bethany
Buckswood’s young lions travelled to Hillcrest today and put in an outstanding performance to stun a hard working Hillcrest side.
On Wednesday the 23rd of September the Girls U14 hockey team played Bethany away in a 7-a-side game.
In the first league match of the season we started slowly and were under pressure from the kick off before the outstanding Oscar Kotting and Mac Millan settled things at the back for us. After 7 minutes a debut goal from our Georgian boy Murtaz went in off a defender for one nil to the lions. The goal spurred the young lions on, and within 4 minutes they were 2-0 up thanks to a good finish from man of the match Harry Reece.
We played into the sun first on a very hot September day; fitness levels were tested. We dominated the first 20-minute half and should have scored a bucket full. Just before the end of the first half, Eleanor Craven latched on to an Alana Lawes pass to score Buckswood’s first goal. Chloe Ryan was replaced at half time after showing a lot of energy in the first 20. Gemma Hubert replaced her and showed a lot of heart by chasing everything down. In the second half Buckswood scored early through the captain, Alana Lawes, and looked comfortable, with Jodie Davies and Christina Viadero Diez running the midfield and Daniela Keithley and Tara Reid-McCoy in goal clearing up at the back. But two late goals against the run of play made it 2-2. End result: Buckswood win on obtaining more short corners, with a tally of 4-1. Well done girls, a good start.
The game swung back and forth, and Christian controlled the middle of the park whilst our defence worked hard to keep a clean sheet. Murtaz got his second with a great solo finish to make it three nil lions. One minute later Harry Reece struck again for four nil Buckswood. Zak Olujobi and the outstanding Alexandro added to the total before half time to take us in 6-0 up. The second half saw a number of changes, and Max Lake added to our total with a wonderful finish for 7-0 lions. The game petered out with Hillcrest not able to break down our superb defence and it ended 7-0 Buckswood: a fantastic passing display from the lads and an even better debut from man of the match Harry Reece. I made a prediction that this group of players will bring us a trophy this season, and this result was a giant step towards achieving that.
Rugby

The boys had a tough start to the season, losing their first four games. We started against Rye College with a result of 12-5. Our second game was against William Parker, losing 22-0. Our third game was against Bexhill, which we lost 30-5, and finally our fourth game was against Claverham, which we lost 35-0.
Basketball

On Friday 17th September, Buckswood Seniors played away against Hastings Falcons and won! An impressive score of 59-29. Buckswood played a good zone defence and scored lots of inside points to start the game; the Falcons were unable to gain any control of the game. Vadim, Ben and Toye dominated close to the basket; the big Falcon players were intimidated and scared to shoot. We had a full squad of 12 players and it was great to see everyone contribute towards the victory! However, one player who stood out from the rest was Vadim! In his first game for Buckswood, he led the team and set an example of how hard work and determination can achieve great results. A great win.

The future is bright. The future is indeed Buckswood football!
buckswood school . guestling . east sussex . tn35 4lt
Silly Hat Day
All the students participated in Silly Hat Day (this week 8 November) to raise money for Cancer Research. For each child that wore a silly hat to school (the sillier the better) Mr Sutton gave £1 to the Charity - even the teachers joined in!
Buckswood School Hastings News

101 Things To Do at Buckswood – number 55
At the end of October, as part of the 101 Things To Do at Buckswood programme, school pupils participated in the good old-fashioned seasonal activity of hollowing out a pumpkin for Halloween – then doing their best to terrify Matron with them! Upon completion of this task students gained 3 points towards their award certificate.
Dancing
Preparation for Buckswood’s Dance Show has been going full steam ahead this term, with students taking part in street, modern, ballet, jazz, Latin, Bollywood and cheerleading lessons. Each week students have been rehearsing during private lessons, clubs and evening squads to ensure they are ready for the performance on 4th December. The show will also combine external dance students who have started taking lessons at Buckswood along with Splitz students for an exciting evening of entertainment for all. The show starts at 6.30pm and all proceeds from ticket sales will go to Help for Heroes.

Legs, bums and tums lessons are under way now, and along with the fitness-focused sports lessons the girls are turning into real athletes. The cheerleading squad have been training hard to ensure that when their new uniforms arrive shortly, they are fully prepared to support the Buckswood teams! Ballroom students have been waltzing and dancing the cha cha cha in preparation for the Christmas dinner – watch out for the new Strictly Come Dancing stars!

Please remember it is never too late to join dance lessons; all ages and abilities are welcome, and you may be the next Darcy Bussell or Billy Elliot! I look forward to seeing you all at the Help for Heroes Dance Showcase.
Miss Stacey Caister (Director of Dance)
The new Astro Pitch
The new astro turf is nearly ready. Rain has quite literally stopped play, but we hope to have the sand down this coming week.
issue 29 . october 2010 .
Paris Christmas Half-Term 2010
Twenty students chose to accompany Mr Rens and Miss Wilson to explore the beauty and culture of Paris for their Christmas half term. The week was full of art, culture, history, intricate architecture, new friendships and lots of laughs! All Buckswood students even took to learning some of the language and practised with the locals while out and about: ‘Bonjour!’, ‘Excusez-moi Madame’, ‘Merci beaucoup’, ‘Ça va?’

The highlight of the week for the students was a full day in the magical world of Disneyland – taking a double-decker underground train from the elegant hotel in the centre of Paris right to the grand doors. This was closely followed in popularity by an evening tour up the impressive Eiffel Tower on the River Seine, all the way to the top, overlooking the night-lights of the whole city. Among the other highlights were: the Champs-Élysées, the main designer shopping street landmarked by the Arc de Triomphe at its end; the Louvre art museum, taking in some of the world’s best artworks including the Mona Lisa, and its surrounding park; the Gothic Notre Dame Cathedral complete with gargoyles; exploring the trendy artist streets of Montmartre around the holy Sacré-Cœur; and a river cruise down the Seine through the heart of Paris itself.
Apple picking day Each year members of the junior school pick apples from the apple trees around the Broomham estate. The juniors were given a bag each and presented a senior boarder with a bag of fruit at the end of the day.
The Junior School conker competition. OK, conker competitions are not PC nowadays (but there isn’t much that we do that is!). The junior school were sent off around the estate to pick their own conkers. Many junior boarders were seen throughout the week baking them and borrowing nail varnish from the girls to paint them with – I gather this makes them harder? Anyhow, one and all continued the age-old tradition of the battle of the conkers!
Prefect Training Each year the prefects take part in a day’s training run by an external company, which comes in to teach them the importance of being a manager and how to deal with situations they may come up against.
Prefects’ Trip to Buckingham Palace and Wagamamas
Buckswood Boarders Go Paintballing
Processing the Application
The IRS will process this application within 90 days from the later of: ● The date you file the complete application; or ● The last day of the month that includes the due date (including extensions) for filing your 1994 income tax return (or, for a claim of right adjustment, the date of the overpayment under section 1341(b)(1)). Before processing certain cases involving abusive tax shelter promotions and before paying refunds, the IRS will reduce refunds of investors when appropriate, and will offset deficiencies assessed under provisions of section 6213(b)(3) against scheduled refunds resulting from tentative carryback adjustments under section 6411(b). See Revenue Procedure 84-84, 1984-2 C.B. 782 and Revenue Ruling 84-175, 1984-2 C.B. 296. The processing of Form 1045 and the payment of the refund requested does not mean the IRS has accepted the items carried back to previous years as correct. If it is later determined from an examination of the tax return for the year of the carryback that the claimed deductions or credits are due to an overstatement of the value of property, negligence, or substantial understatement of income tax, you may have to pay penalties. Any additional tax will also generate interest compounded daily. You may use Form 2848, Power of Attorney and Declaration of Representative, to authorize another person to represent you before the IRS.
Instructions for Form 1045
Application for Tentative Refund

The time needed to complete and file this form will vary depending on individual circumstances. The estimated average times are: Recordkeeping, 26 min.; Learning about the law or the form, 31 min.; Preparing the form, 6 hr., 56 min.;
When To File
File Form 1045 within 1 year after the end of the year in which the NOL, unused credit, or claim of right adjustment occurred, but only on or after the date you file your 1994 return. When an NOL carryback eliminates or reduces a general business credit in an earlier year, you may be able to carry back the released credit 3 more years. See section 39 and the Instructions for Form 3800, General Business Credit, for more details on credit carrybacks. If you carry back the unused credit to tax years before the 3 years preceding the 1994 tax year, use a second Form 1045 for the earlier year(s). Also, file the second application within 1 year after the 1994 tax year. To expedite processing, file the two Forms 1045 together.
Copying, assembling, and sending the form to the IRS: … If you have comments concerning the accuracy of these time estimates or suggestions for making this form simpler, you can write to the Internal Revenue Service and the Office of Management and Budget, Paperwork Reduction Project (1545-0098), Washington, DC 20503. DO NOT send Form 1045 to either of these offices. Instead, see Where To File on this page.
Where To File
File Form 1045 with the Internal Revenue Service Center where you are required to file your 1994 income tax return. Caution: Do not mail Form 1045 with your 1994 income tax return.
What To Attach
Attach copies of the following, if applicable, to Form 1045 for the year of the loss or credit: ● If you are an individual, pages 1 and 2 of your 1994 Form 1040, and Schedules A and D (Form 1040). ● All Schedules K-1 you received from partnerships, S corporations, estates, or trusts that contribute to the loss or credit carryback. ● Any application for extension of time to file your 1994 income tax return. ● All Forms 8271, Investor Reporting of Tax Shelter Registration Number, attached to your 1994 return. ● Any other form or schedule from which the carryback results, such as Schedule C or F (Form 1040), or Form 3468, Form 3800, etc. ● All forms or schedules for items refigured in the carryback years, such as Form 6251 or Form 3468. Be sure to attach all required forms listed above, and complete all lines on Form 1045 that apply to you. Otherwise, your application may be disallowed.
Cat. No. 13666W
General Instructions
Purpose of Form
Form 1045 is used by an individual, estate, or trust, to apply for: ● A quick refund of taxes from the carryback of a net operating loss (NOL) or an unused general business credit. ● A quick refund of taxes from an overpayment of tax due to a claim of right adjustment under section 1341(b)(1). Note: An NOL may be carried back 3 years and forward 15 years. However, you may elect to carry forward a 1994 NOL instead of first carrying it back by attaching a statement to that effect to your 1994 tax return filed on or before the due date (including extensions). Once you make the election, it is irrevocable and the carryforward is limited to 15 years.
Disallowance of the Application
This application for a tentative carryback adjustment is not a claim for credit or refund. Any application may be disallowed if it has material omissions or math errors that cannot be corrected within the 90-day period. If it is disallowed in whole or in part, no suit may be brought in any court for the recovery of that tax. But you may file a regular claim for credit or refund before the limitation period expires, as explained on page 2 under Form 1040X or Other Amended Return.
Excessive Allowances
Any amount applied, credited, or refunded based on this application that the IRS later determines to be excessive may be billed as if it were due to a math or clerical error on the return.
Line 1a—Net Operating Loss
Figure your net operating loss (NOL) on Schedule A, page 2. You must carry the entire NOL back to the 3rd tax year before the loss. Any loss not used in the 3rd year is carried to the 2nd, and then the 1st preceding year. Any loss not applied in the 3 preceding years can be carried forward up to 15 years. Special rules apply to the part of an NOL related to any specified liability loss, including product liability losses. See section 172(b)(1)(C) for details. If you filed a joint return (or a separate return) for some but not all of the tax years involved in figuring the NOL carryback, special rules apply in computing the NOL deduction. See Pub. 536 for the special rules. Attach a computation showing how you figured the carryback.
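As a rough illustration of the carryback ordering described above (the 3rd preceding year first, then the 2nd, then the 1st, with any remainder carried forward), here is a small sketch. It is not an official computation; real absorption is measured against modified taxable income as figured on Schedule B, and all amounts below are invented.

```python
# Hypothetical sketch of NOL carryback ordering, not an official IRS computation.
# The loss is applied to the 3rd preceding year first, then the 2nd, then the 1st;
# whatever is not absorbed may be carried forward (up to 15 years).

def apply_carryback(nol, incomes_3rd_to_1st):
    """nol: the loss as a positive amount.
    incomes_3rd_to_1st: income of the 3rd, 2nd, and 1st preceding years, in order.
    Returns (amount absorbed in each year, remaining carryforward)."""
    remaining = nol
    absorbed = []
    for income in incomes_3rd_to_1st:
        used = min(remaining, max(income, 0))  # a year cannot absorb more than its income
        absorbed.append(used)
        remaining -= used
    return absorbed, remaining

absorbed, carryforward = apply_carryback(50_000, [20_000, 15_000, 5_000])
print(absorbed, carryforward)  # [20000, 15000, 5000] 10000
```

Here a $50,000 loss is fully used up by the three preceding years' income only in part, leaving $10,000 to carry forward.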
Form 1040X or Other Amended Return
Individuals can get a refund by filing Form 1040X, Amended U.S. Individual Income Tax Return, instead of Form 1045. An estate or trust may file an amended Form 1041, U.S. Income Tax Return for Estates and Trusts. Generally, you must file an amended return no later than 3 years after the due date of the return for the applicable tax year. If you use Form 1040X or other amended return, follow the instructions for that return. Attach a computation of your NOL on Schedule A (Form 1045) and, if applicable, your NOL carryover on Schedule B (Form 1045). Complete a separate Form 1040X or other amended return for each year for which you request an adjustment. The procedures for Form 1040X differ from those for Form 1045. The IRS is not required to process your Form 1040X within 90 days. However, if we do not process it within 6 months from the date you file it, you may file suit in court. If we disallow your claim on Form 1040X, you must file suit no later than 2 years after the date we disallow it. Caution: If your qualifying credit carrybacks (that is, general business credits) are affected by intervening nonqualifying credits (such as foreign tax credits), you cannot use Form 1045 to apply for a tentative refund for the earlier carryback years affected by the intervening nonqualifying credits. You must use Form 1040X or other amended return to claim refunds for those years. For details, see Revenue Ruling 82-154, 1982-2 C.B. 394.
Any income or other deduction based on, or limited to, a percentage of your adjusted gross income must be refigured on the basis of your adjusted gross income determined after you apply the NOL carryback. This includes items such as medical expenses and miscellaneous itemized deductions subject to the 2% limit. It also includes the overall limitation on itemized deductions and the phaseout of the deduction for personal exemptions. Determine the deduction for charitable contributions without regard to any NOL carryback. Any credits based on or limited by the tax must be refigured using the tax liability as determined after you apply the NOL carryback. See Pub. 536 for more information and examples.
Line 10—Net Operating Loss Deduction After Carryback
In column (b), enter as a positive number the NOL from Schedule A, page 2, line 25. If the NOL is not fully absorbed in the 3rd preceding year, first complete Schedule B on page 3. Then, on line 10, column (d), enter the NOL deduction from Schedule B, line 1, column (b). In column (f), enter the NOL deduction from Schedule B, line 1, column (c).
Line 1b—Carryback of Unused General Business Credit
If you claim a tentative refund based on the carryback of this credit, attach a detailed computation showing how you figured the credit carryback, and a recomputation of the credit after you apply the carryback. Make the recomputation on Form 3800 for the tax year of the tentative allowance. If you filed a joint return (or separate return) for some but not all of the tax years involved in figuring the unused credit carryback, special rules apply in computing the carryback. Get the Instructions for Form 3800. Attach a computation showing how you figured the carryback.
Line 12—Deductions
Individuals.—For columns (a), (c), and (e), enter the amount shown on, or as previously adjusted for, Form 1040, line 34, for 1991, 1992, and 1993. If you used Form 1040A, enter the amount from line 19 for 1991, 1992, and 1993. If you used Form 1040EZ, enter the amount from line 4 (line 5 in 1993) if you checked the “Yes” box. If you checked the “No” box for 1991, enter $3,400. If you checked the “No” box for 1992, enter $3,600. If you checked the “No” box for 1993, enter $3,700 if single ($6,200 if married).
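The fallback amounts for the Form 1040EZ “No” box above form a small year-by-filing-status table. A throwaway lookup sketch, purely illustrative; the figures are exactly those quoted in the text:

```python
# Illustrative lookup of the Form 1040EZ "No"-box fallback amounts quoted above.
# 1991 and 1992 have a single amount; 1993 depends on filing status.

FALLBACK_DEDUCTION = {
    1991: {"any": 3_400},
    1992: {"any": 3_600},
    1993: {"single": 3_700, "married": 6_200},
}

def fallback_amount(year, status="any"):
    table = FALLBACK_DEDUCTION[year]
    # fall back to the year's single "any" amount when no status split exists
    return table.get(status, table.get("any"))

print(fallback_amount(1991))             # 3400
print(fallback_amount(1993, "married"))  # 6200
```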
Line 2a—Tax Year
If the year of the loss, unused credit, or overpayment under section 1341(b)(1) is other than the calendar year 1994, enter the required information.
Line 16—Income Tax
For columns (b), (d), and (f), refigure your tax after taking into account the NOL carryback. Include on this line any tax from Forms 4970 and 4972. Attach an explanation of the method used to figure your tax and, if necessary, a detailed computation. For example, write “Tax Rate Schedule—1991” if that is the method used for that year. You do not need to attach a detailed computation of the tax in this case.
Additional Information
For more details on net operating losses, get Pub. 536, Net Operating Losses.
Specific Instructions
Address
P.O. box.—If your post office does not deliver mail to your home or office and you have a P.O. box, show your box number instead of your home or office address. Foreign address.—If your address is outside the United States or its possessions or territories, enter the information on the line for “City, town or post office, state, and ZIP code” in the following order: city, province or state, postal code, and the name of the
Lines 9 through 27— Computation of Decrease in Tax
Enter in columns (a), (c), and (e) the amounts for the applicable carryback year as shown on your original or amended return. If the return was examined, enter the amounts determined as a result of the examination. Computation of deductions, credits, and taxes when the NOL is fully absorbed.—In refiguring your tax for the year to which the NOL is carried and fully absorbed, any income or other deduction based on, or limited to, a percentage of your adjusted gross income must be refigured using the adjusted gross income determined after you apply the NOL carryback.
Line 17—General Business Credit
In columns (b), (d), and (f), enter the total of the recomputed general business credits. Attach all Forms 3800 used to redetermine the amount of general business credit.
Line 18—Other Credits
See your tax return for the carryback year for any additional credits such as the credit for child and dependent care expenses, credit for the elderly or the disabled, etc., that will apply in that year. If there is an entry on this line, identify the credit(s) claimed. Also, see the Caution under Form 1040X or Other Amended Return on page 2.
Line 9—Nonbusiness Deductions
These are deductions that are not connected with a trade or business. They include the following: ● IRA deduction. ● Deduction for payments on behalf of a self-employed individual to a Keogh retirement plan or a simplified employee pension (SEP) plan. ● Self-employed health insurance deduction. Caution: This deduction expired December 31, 1993. However, at the time these instructions went to print, Congress was considering legislation that would allow a deduction for 1994. Get Pub. 553, Highlights of 1994 Tax Changes, for later information about this deduction. ● Alimony. ● Itemized deductions are usually nonbusiness, except for casualty and theft losses, and any employee business expenses. ● Standard deduction if you do not itemize. Do not enter business deductions on line 9. These are deductions that are connected with a trade or business. They include the following: ● State income tax on business profits. ● Moving expenses. ● Deduction for one-half of self-employment tax. ● Rental losses. ● Loss on the sale or exchange of business real estate or depreciable property. ● Your share of a business loss from a partnership or an S corporation. ● Ordinary loss on the sale or exchange of section 1244 (small business) stock. ● Ordinary loss on the sale or exchange of stock in a small business investment company operating under the Small Business Investment Act of 1958. ● Loss from the sale of accounts receivable if such accounts arose under the accrual method of accounting. ● If you itemized your deductions, casualty and theft losses are business deductions even if they involve nonbusiness property. Employee business expenses such as union dues, uniforms, tools, and educational expenses are also business deductions.
● Salaries and wages. ● Rental income. ● Gain on the sale or exchange of business real estate or depreciable property. ● Your share of business income from a partnership or an S corporation. For more details on business and nonbusiness income and deductions, see Pub. 536.
Line 21—Recapture Taxes
Enter the amount shown on your Form 1040, line 49.
Line 22—Alternative Minimum Tax
A carryback of an NOL may affect your alternative minimum tax. Use Form 6251 to figure this tax, and attach a copy if there is any change to your alternative minimum tax liability.
Schedule B—Net Operating Loss Carryover
Complete and file this schedule to determine the amount of your net operating loss deduction for each carryback year and the amount to be carried forward if not fully absorbed in the carryback years. If your NOL is more than the taxable income of the earliest year to which it is carried, you must figure the amount of the NOL that is to be carried to the next tax year. The amount of the NOL you may carry to the next year, after applying it to an earlier year(s), is the excess, if any, of the NOL carryback over the modified taxable income of that earlier year. Modified taxable income is the amount figured on line 7 of Schedule B. Note: If you carry two or more NOLs to a tax year, you must deduct them, when figuring modified taxable income, in the order in which they were incurred. First, deduct the NOL from the earliest year, then the NOL from the next earliest year, and so on. After you deduct each NOL, there will be a new, lower total for modified taxable income to compare with any remaining NOL.
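The Note above says that when two or more NOLs reach the same tax year, they are deducted in the order incurred, each against what is left of modified taxable income. A hypothetical sketch of that ordering, with invented figures and a bare number standing in for the full Schedule B, line 7 computation:

```python
# Hypothetical sketch: absorbing several NOLs, earliest year first, against one
# year's modified taxable income (a stand-in for Schedule B, line 7).

def absorb_nols(modified_taxable_income, nols_by_year):
    """nols_by_year: list of (year, nol_amount); applied earliest year first.
    Returns the NOL amount remaining to carry to the next tax year, per year."""
    remaining_income = modified_taxable_income
    carryovers = []
    for year, nol in sorted(nols_by_year):  # earliest incurred NOL goes first
        used = min(nol, remaining_income)
        remaining_income -= used            # lower total for the next NOL
        carryovers.append((year, nol - used))
    return carryovers

print(absorb_nols(30_000, [(1992, 25_000), (1991, 20_000)]))
# [(1991, 0), (1992, 15000)] -- the 1991 loss is absorbed in full first
```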
Line 23—Self-Employment Tax
Do not adjust the self-employment tax because of any carryback.
Line 24—Other Taxes
See your tax return for the carryback year for any other taxes not mentioned above, such as tax on an IRA, that will apply in that year. If there is an entry on this line, identify the taxes that apply.
Line 28—Overpayment of Tax Under Section 1341(b)(1)
If you apply for a tentative refund based on an overpayment of tax under section 1341(b)(1), enter it on this line. Also, attach a computation that shows the information required in Regulations section 5.6411-1(d).
Line 2
The NOL carryback from the 1994 tax year or any later tax year is not allowed. However, net operating losses, otherwise allowable as carrybacks or carryforwards, occurring in tax years before 1994, are taken into account in figuring the modified taxable income for the earlier year.
Signature
Individuals.—Sign and date Form 1045. If Form 1045 is filed jointly, both spouses must sign. Estates.—All executors must sign and date Form 1045. Trusts.—The fiduciary or an authorized representative must sign and date Form 1045.
Line 4—Adjustments to Adjusted Gross Income
If you entered an amount on line 3, you must refigure certain income and deductions based on adjusted gross income. These are: ● The special allowance for passive activity losses from rental real estate activities. ● Taxable social security benefits. ● IRA deductions. ● Excludable savings bond interest. For purposes of figuring the adjustment to each of these items, your adjusted gross income is increased by the amount on line 3.
Schedule A—Net Operating Loss (NOL)
Complete and file this schedule to determine the amount of your NOL that is available for carryback or carryover.
Line 10—Nonbusiness Income Other Than Capital Gains
This is income that is not from a trade or business. Examples are dividends, annuities, and interest on investments. Do not enter business income on line 10. This is income from a trade or business and includes the following:
Your adjusted gross income is increased by the amount on line 3. Do not take into account your 1994 NOL carryback. Generally, figure the adjustment to each item of income or deduction in the order listed above and, when figuring the adjustment to each subsequent item, increase your adjusted gross income by the total adjustments made to the items listed above. Attach a computation showing how the adjustments were figured.
Line 5—Adjustment to Itemized Deductions
Individuals.—Skip this line if, for all 3 preceding years, you did not itemize deductions or line 3 is zero or blank. Otherwise, complete lines 9 through 33 and enter the amount from line 33 or line 12 of the worksheet below, whichever applies, on line 5. Estates and trusts.—Recompute the miscellaneous itemized deductions you deducted on Form 1041, line 15b, and any casualty or theft losses you claimed on Form 4684, line 18, by substituting modified adjusted gross income (see
below) for the adjusted gross income of the estate or trust. Subtract the recomputed deductions and losses from the deductions and losses previously claimed, and enter the difference on line 5, Schedule B of Form 1045. Modified adjusted gross income. For purposes of figuring miscellaneous itemized deductions subject to the 2% limit, modified adjusted gross income is figured by adding the following amounts to the adjusted gross income previously used to figure these deductions: 1. The amount from line 3, Schedule B of Form 1045, and 2. The exemption amount from Form 1041, line 20. For purposes of figuring casualty or theft losses, modified adjusted gross income is figured by adding the amount from line 3, Schedule B of Form 1045, to the adjusted gross income previously used to figure these losses.
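The two “modified adjusted gross income” figures described above differ only in whether the exemption amount is added back. A minimal sketch with invented amounts:

```python
# Hypothetical sketch of the two "modified adjusted gross income" figures
# described above for an estate or trust (illustrative amounts only).

def modified_agi_for_misc(agi, sch_b_line_3, exemption):
    # For 2%-limit miscellaneous deductions: add both the Schedule B, line 3
    # amount and the Form 1041, line 20 exemption amount.
    return agi + sch_b_line_3 + exemption

def modified_agi_for_casualty(agi, sch_b_line_3):
    # For casualty or theft losses: add only the Schedule B, line 3 amount.
    return agi + sch_b_line_3

print(modified_agi_for_misc(40_000, 5_000, 600))   # 45600
print(modified_agi_for_casualty(40_000, 5_000))    # 45000
```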
Line 8—Net Operating Loss Carryover
Enter the amounts from line 8, columns (a) and (b), on line 1, columns (b) and (c), respectively. Carry forward to 1995 the amount on line 8, column (c).
Line 20

If, for any of the preceding years, you entered an amount other than zero on line 18 and you had any items of income or deductions based on adjusted gross income and listed in the instructions for line 4 of Schedule B, do not use the amount on line 19 as your adjusted gross income for refiguring charitable contributions. Instead, figure adjusted gross income as follows:
1. Figure the adjustment to each item that affects and is based on adjusted gross income in the same manner as explained in the instructions for line 4 of Schedule B, except do not take into account any NOL carrybacks when figuring adjusted gross income. Attach a computation showing how the adjustments were figured.
2. Add lines 3, 9, and 18 of Schedule B to the total adjustments you figured in 1 above. Use the result as your adjusted gross income for refiguring charitable contributions.
For net operating loss carryover purposes, you must reduce any contributions carryover to the extent your net operating loss carryover on line 8 is increased by any adjustment made to charitable contributions.

Line 33

If Schedule B (Form 1045), line 11, is more than $100,000 for 1991 ($50,000 if married filing separately), more than $105,250 for 1992 ($52,625 if married filing separately), or more than $108,450 for 1993 ($54,225 if married filing separately), complete the worksheet below.
Itemized Deductions Limitation Worksheet—See the Line 33 Instructions (keep for your records)
Complete a separate column for each year: 1991, 1992, and 1993.
1. Add the amounts from Schedule B (Form 1045), lines 14, 20, 25, and 30, and the amounts from Schedule A (Form 1040), lines 8, 12, 18, and 25
2. Add Schedule B (Form 1045), lines 14 and 25; Schedule A (Form 1040), line 11; and any gambling losses included on Schedule A (Form 1040), line 25
3. Subtract line 2 from line 1. If the result is zero or less, STOP HERE; enter the amount from line 33 of Schedule B (Form 1045) on line 5 of Schedule B (Form 1045)
4. Multiply line 3 by 80% (.80)
5. Enter the amount from Schedule B (Form 1045), line 11
6. Enter $100,000 for 1991 ($50,000 if married filing separately); $105,250 for 1992 ($52,625 if married filing separately); $108,450 for 1993 ($54,225 if married filing separately)
7. Subtract line 6 from line 5
8. Multiply line 7 by 3% (.03)
9. Enter the smaller of line 4 or line 8
10. Subtract line 9 from line 1
11. Total itemized deductions from Schedule A (Form 1040), line 26 (or as previously adjusted)
12. Subtract line 10 from line 11. Enter difference here and on line 5 of Schedule B (Form 1045)
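For readers who find the arithmetic easier to check in code, the twelve worksheet lines reduce to the following sketch (my own illustration for this text, not official IRS guidance; threshold is the line 6 amount for the year in question, and all inputs are dollar amounts):

```python
def itemized_limitation_adjustment(line1, line2, agi_line11, threshold,
                                   total_itemized):
    """Sketch of the 12-line worksheet; returns the amount for Schedule B, line 5."""
    line3 = line1 - line2
    if line3 <= 0:
        return None  # STOP HERE: use the amount from Schedule B, line 33, directly
    line4 = line3 * 0.80                     # 80% of deductions subject to the limit
    line8 = (agi_line11 - threshold) * 0.03  # 3% of the excess over the threshold
    line9 = min(line4, line8)                # smaller of line 4 or line 8
    line10 = line1 - line9
    return total_itemized - line10           # line 12
```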
Blink, especially drones where cables are not an option. It can run on any operating system, including anything running on your single board computer.
Purchasing and Setup
To purchase the Blink(1) LED, go to their home page and click “Buy”. At this time of writing the cost was around $30.
Their software needs to be downloaded and installed from their web site as well. It is a very small set of files, so it does not take up much storage space and will not take long to download. Follow the link to the home page and click “downloads” and follow the instructions for the blink1-tool command-line. In Linux, the easiest way to install this is by entering the following commands in a terminal from your home directory.
$ git clone <blink1 repository URL>
$ cd blink1/commandline
$ make
Getting Started
Once you have your product and the software is installed, try controlling the LED from the terminal command line. In Linux, navigate to the ~/blink1/commandline folder.
$ cd ~/blink1/commandline
$ ls
There should be an executable file called blink1-tool shown in green. This is the program that controls the LED. While in this folder, try some of the following commands.
$ ./blink1-tool --on
$ ./blink1-tool --off
$ ./blink1-tool --green
$ ./blink1-tool --red --blink 5
These commands can be run even if you have not navigated to the ~/blink1/commandline folder by adding the entire path to the command as follows.

$ ~/blink1/commandline/blink1-tool --on
For a full list of commands and options, see the Blink1 Tool Tutorial.
Integration with ROS
In any ROS node, you can send a command string to the system shell. Here is a guide on how to do this in C++.
First, make sure you import this package at the top of your file.
#include <stdlib.h>
With this package included, you can simply output a string using the system() command and it will execute that string as if you typed it into a command line. For example:
std::string output_string;
output_string = "~/blink1/commandline/blink1-tool --on";
char* output_char_string = new char[output_string.length() + 1];
std::strcpy(output_char_string, output_string.c_str());  // std::strcpy also needs <cstring>
system(output_char_string);
delete[] output_char_string;  // free the buffer once the command has been issued
Running this inside your node will turn the LED on. Note that system() accepts a C-style string, as opposed to a C++ string. It is likely easier to manipulate a C++ string in your code (using the "+" operator to concatenate, for example), so I suggest converting it to a C-style char string just before calling the system() command.
In your code, I would suggest making a node that subscribes to any topics that include information that you would like to check. For our project, we subscribed to our state estimator, vision node, and the flight mode flag. Create a function that contains a series of if() statements that check all of the conditions you would like to visualize. Then assign a color to each condition and create a string based on that color code. At the end of this function, send that string to system().
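As a sketch of that pattern (the status fields and color choices below are illustrative assumptions for this article, not part of the Blink(1) tooling or any particular project), the condition-to-command mapping might look like:

```cpp
#include <cstdlib>
#include <string>

// Hypothetical status flags, filled in by your ROS subscriber callbacks.
struct Status {
    bool vision_ok;
    bool estimator_ok;
    bool autonomous;
};

// Map the current status to a blink1-tool command string; the first
// matching condition wins, so order the checks by priority.
std::string pick_led_command(const Status& s) {
    const std::string tool = "~/blink1/commandline/blink1-tool";
    if (!s.vision_ok)    return tool + " --red --blink 5";  // vision lost
    if (!s.estimator_ok) return tool + " --red";            // estimator down
    if (s.autonomous)    return tool + " --green";          // autonomous flight
    return tool + " --off";                                 // idle / manual
}

// Whenever a callback changes the status:
//   system(pick_led_command(status).c_str());
```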
PyTorch List to Tensor: Convert A Python List To A PyTorch Tensor
PyTorch List to Tensor - Use the PyTorch Tensor operation (torch.tensor) to convert a Python list object into a PyTorch Tensor
Transcript:
This video will show you how to convert a Python list object into a PyTorch tensor using the tensor operation.
First, we import PyTorch.
import torch
Then we check the PyTorch version we are using.
print(torch.__version__)
We are using PyTorch version 0.4.1.
Next, let’s create a Python list full of floating point numbers.
py_list = [[1.,2.,3.,4.],[5.,6.,7.,8.],[9.,10.,11.,12.]]
We can tell that they’re floating point because each number has a decimal point.
To confirm that it’s a Python list, let’s use the Python type operation.
type(py_list)
We can see that it’s a class list.
Next, let’s use the PyTorch tensor operation torch.Tensor to convert a Python list object into a PyTorch tensor.
In this example, we’re going to specifically use the float tensor operation because we want to point out that we are using a Python list full of floating point numbers.
pt_tensor_from_list = torch.FloatTensor(py_list)
So we use torch.FloatTensor and we pass our py_list variable which contains our original Python list.
And the result from this operation is going to be assigned to the Python variable pt_tensor_from_list.
Let’s check what kind of object the Python variable pt_tensor_from_list is holding using the Python type operation, and we see that it is a class of torch.Tensor.
type(pt_tensor_from_list)
Next, let’s check to see the data type of the data inside of the tensor by using the PyTorch dtype operator.
pt_tensor_from_list.dtype
So we have the variable, and then we have dtype.
When we evaluate it, we see that the data type inside of it is torch.float32.
So these are floating point numbers.
Finally, let’s print out the tensor to see what we have.
print(pt_tensor_from_list)
We print pt_tensor_from_list, and we have our tensor.
It is a 3x4 tensor. We see that all of our original numbers are inside of it, and we also know that they are being evaluated as float32 numbers.
Perfect - We were able to use the PyTorch tensor operation torch.Tensor to convert a Python list object into a PyTorch tensor. | https://aiworkbox.com/lessons/convert-list-to-pytorch-tensor | CC-MAIN-2020-40 | refinedweb | 393 | 66.03 |
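Collected into one runnable snippet, the steps from the transcript are:

```python
import torch

py_list = [[1., 2., 3., 4.], [5., 6., 7., 8.], [9., 10., 11., 12.]]
pt_tensor_from_list = torch.FloatTensor(py_list)

print(type(pt_tensor_from_list))  # <class 'torch.Tensor'>
print(pt_tensor_from_list.dtype)  # torch.float32
print(pt_tensor_from_list.shape)  # torch.Size([3, 4])
print(pt_tensor_from_list)
```

On newer PyTorch versions, torch.tensor(py_list) produces the same float32 tensor, since plain Python floats are mapped to the default float32 dtype.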
Are Container (Standards) Wars Over?
The open source world is different than the proprietary world in that there really aren’t formalized standards bodies (e.g. IEEE, IETF, W3C, etc.). That world is mostly defacto standards, with some governance provided by foundations like The Linux Foundation, Cloud Native Computing Foundation, Apache Foundation and several others.
Container Standards – Migrating to OCI
In the world of containers, there have been many implementations for customers to choose from over the past 5-7 years. On the container side, there was native Linux capabilities like cgroups and namespaces, and then simplified implementations like LXC, docker, rkt, appc, runc. A couple years ago, a new group was formed (the Open Container Initiative – OCI) to try and unified around a common container format and runtime. It took a couple years, but the OCI has finally come out with OCI 1.0. We discussed those details with one of the project leads, Vincent Batts from Red Hat. We dug into the list of requirements, and how they created a standard that works across Windows, Linux and Solaris operating systems.
Container Orchestration Standards – Kubernetes is Leading the Pack
Around the same time that OCI was getting started, several options were emerging for container orchestration. The PaaS platforms had all created their own homegrown orchestrators several years before. But maintaining your own orchestrator is a very difficult engineering task. The game changed when some of the web scale companies, specifically Google and Twitter, released implementations of their internal systems (Kubernetes and Mesos, respectively) into the open source communities. In addition, Docker created the Swarm orchestrator for the docker container format/runtime.
For several years, companies and customers made investments around each of these standards. Dozens of comparative charts were made, trying to position one vs. the others. But over time, more and more developers started focusing their efforts on Kubernetes. By the current count, Kubernetes now has 4-5x as many developers as all the other projects combined. Early adopters like Google, Red Hat and CoreOS jumped into the Kubernetes project, and recently almost every major vendor has gotten in line. From VMware to Microsoft to Oracle to IBM, and a list of startups such as Heptio, Distelli, and many others.
One important thing to note about Kubernetes is that CRI-O, the OCI-based implementation of the Container Runtime Interface, is now the default. In the past, the docker runtime was the default, but now Kubernetes allows more standard options to be used. Google's Kelsey Hightower has an excellent write-up, as well as making it part of his "Kubernetes the Hard Way" tutorial.
And beyond Kubernetes, we’re also seeing the most popular ecosystem projects being focused on Kubernetes first. From monitoring projects like Prometheus, to application service-mesh projects like Istio and Linkerd. As CNCF CTO Chris Aniszczyk recently said, “CNCF is becoming the hub of enterprise tech.”
- NAME
- DESCRIPTION
- WHY PLUGINS?
- WHAT'S NEXT?
- INTEGRATING YOUR PLUGIN
- EXAMPLE
- SEE ALSO
- THANKS TO
- AUTHOR
NAME
Catalyst::Manual::WritingPlugins - An introduction to writing plugins with NEXT.
DESCRIPTION
Writing an integrated plugin for Catalyst using NEXT.
WHY PLUGINS?
A Catalyst plugin is an integrated part of your application. By writing plugins you can, for example, perform processing actions automatically, instead of having to
forward to a processing method every time you need it.
WHAT'S NEXT?
NEXT is used to re-dispatch a method call as if the calling method doesn't exist at all. In other words: If the class you're inheriting from defines a method, and you're overloading that method in your own class, NEXT gives you the possibility to call that overloaded method.
This technique is the usual way to plug a module into Catalyst.
INTEGRATING YOUR PLUGIN
You can use NEXT for your plugin by overloading certain methods which are called by Catalyst during a request.
The request life-cycle
Catalyst creates a context object (
$context or, more usually, its alias
$c) on every request, which is passed to all the handlers that are called from preparation to finalization.
For a complete list of the methods called during a request, see Catalyst::Manual::Internals. The request can be split up in three main stages:
- preparation
When the prepare handler is called, it initializes the request object, connections, headers, and everything else that needs to be prepared. prepare itself calls other methods to delegate these tasks. After this method has run, everything concerning the request is in place.
- dispatch
The dispatching phase is where the black magic happens. The dispatch handler decides which actions have to be called for this request.
- finalization
Catalyst uses the finalize method to prepare the response to give to the client. It makes decisions according to your response (e.g. where you want to redirect the user to). After this method, the response is ready and waiting for you to do something with it--usually, hand it off to your View class.
What Plugins look like
There's nothing special about a plugin except its name. A module named
Catalyst::Plugin::MyPlugin will be loaded by Catalyst if you specify it in your application class, e.g.:
# your plugin
package Catalyst::Plugin::MyPlugin;
use warnings;
use strict;

...

# MyApp.pm, your application class
use Catalyst qw/-Debug MyPlugin/;
This does nothing but load your module. We'll now see how to overload stages of the request cycle, and provide accessors.
Calling methods from your Plugin
Methods that do not overload a handler are available directly in the
$c context object; they don't need to be qualified with namespaces, and you don't need to
use them.
package Catalyst::Plugin::Foobar;
use strict;

sub foo {
    return 'bar';
}

# anywhere else in your Catalyst application:
$c->foo(); # will return 'bar'
That's it.
Overloading - Plugging into Catalyst
If you don't just want to provide methods, but want to actually plug your module into the request cycle, you have to overload the handler that suits your needs.
Every handler gets the context object passed as its first argument. Pass the rest of the arguments to the next handler in row by calling it via

$c->NEXT::handler-name( @_ );

if you already shifted it out of @_. Remember to use NEXT.
Storage and Configuration
Some Plugins use their accessor names as a storage point, e.g.
sub my_accessor {
    my $c = shift;
    $c->{my_accessor} = ..
}
but it is more safe and clear to put your data in your configuration hash:
$c->config->{my_plugin}{ name } = $value;
If you need to maintain data for more than one request, you should store it in a session.
EXAMPLE
Here's a simple example Plugin that shows how to overload
prepare to add a unique ID to every request:
package Catalyst::Plugin::RequestUUID;

use warnings;
use strict;

use Catalyst::Request;
use Data::UUID;
use NEXT;

our $VERSION = 0.01;

{
    # create a uuid accessor
    package Catalyst::Request;
    __PACKAGE__->mk_accessors('uuid');
}

sub prepare {
    my $class = shift;

    # Turns the engine-specific request into a Catalyst context.
    my $c = $class->NEXT::prepare( @_ );

    $c->request->uuid( Data::UUID->new->create_str );
    $c->log->debug( 'Request UUID "'. $c->request->uuid .'"' );

    return $c;
}

1;
Let's just break it down into pieces:
package Catalyst::Plugin::RequestUUID;
The package name has to start with
Catalyst::Plugin:: to make sure you can load your plugin by simply specifying
use Catalyst qw/RequestUUID/;
in the application class. warnings and strict are recommended for all Perl applications.
use NEXT; use Data::UUID; our $VERSION = 0.01;
NEXT must be explicitly
used. Data::UUID generates our unique ID. The
$VERSION gets set because it's a) a good habit and b) ExtUtils::ModuleMaker likes it.
sub prepare {
These methods are called without attributes (Private, Local, etc.).
my $c = shift;
We get the context object for this request as the first argument.
Hint!:Be sure you shift the context object out of
@_ in this. If you just do a
my ( $c ) = @_;
it remains there, and you may run into problems if you're not aware of what you pass to the handler you've overloaded. If you take a look at
$c = $c->NEXT::prepare( @_ );
you see you would pass the context twice here if you don't shift it out of your parameter list.
This line is the main part of the plugin procedure. We call the overloaded prepare method and pass along the parameters we got. We also overwrite the context object $c with the one the called method returns. We'll return our modified context object at the end.
Note that if we modify $c before this line, we also modify it before the original (overloaded) prepare is run. If we modify it after, we modify an already prepared context. And, of course, it's no problem to do both, if you need to. Another example of working on the context before calling the actual handler would be setting header information before finalize does its job.
$c->req->{req_uuid} = Data::UUID->new->create_str;
This line creates a new Data::UUID object and calls the
create_str method. The value is saved in our request, under the key
req_uuid. We can use that to access it in future in our application.
$c->log->debug( 'Request UUID "'. $c->req->{req_uuid} .'"' );
This sends our UUID to the
debug log.
The final line
return $c;
passes our modified context object back to whoever has called us. This could be Catalyst itself, or the overloaded handler of another plugin.
SEE ALSO
Catalyst, NEXT, ExtUtils::ModuleMaker, Catalyst::Manual::Plugins, Catalyst::Manual::Internals.
THANKS TO
Sebastian Riedel and his team of Catalyst developers as well as all the helpful people in #catalyst.
This program is free software, you can redistribute it and/or modify it under the same terms as Perl itself.
AUTHOR
Robert Sedlacek,
[email protected] with a lot of help from the people on #catalyst. | https://metacpan.org/pod/release/MRAMBERG/Catalyst-Runtime-5.7002/lib/Catalyst/Manual/WritingPlugins.pod | CC-MAIN-2016-40 | refinedweb | 1,161 | 55.34 |
The new C++11 standard includes many language and library features that make programming in C++ more enjoyable, such as lambdas, the auto keyword and smart pointers (in the STL). Visual Studio 2010 already supports some of these features out of the box, and Visual Studio 2012 implements even more (detailed list of features implemented by different compilers).
Here is how it works with Xcode 4.5 under OS X 10.8 Mountain Lion. All one needs to do in Xcode is:
- Enable the C++11 language features in the “build settings” pane. Under “Apple LLVM compiler 4.1 – Language” (select “All” instead of “Basic” settings first), set “C++ Language Dialect” to “C++11 [-std=c++11]”.
- Select ‘libc++’ as the standard library by setting “C++ Standard Library” to “libc++ (LLVM C++ standard library with C++11 support)” in the same section.
This second step is necessary because the new C++11 features depend on library support in the standard library, and the default standard library ‘libstdc++’ does not implement them, but ‘libc++’ does.
For a standalone project, all is well now. However, real projects have external dependencies, which is where it gets complicated. When libraries such as OpenCV are compiled from scratch with default settings, they link to the default standard library ‘libstdc++’. This causes linking errors when one wants to use it in a project that uses the new C++11 features and links to ‘libc++’.
Compiling OpenCV with libc++
The solution is to compile OpenCV with the same standard library (that is ‘libc++’). To do this, first download the latest version of OpenCV from opencv.org (I used OpenCV 2.4.2) and uncompress it.
Next, one could run CMake as usual to create Xcode projects, and then change the project settings mentioned above manually. However, one would need to redo this every time CMake is run, because it overwrites the Xcode project files.
A better approach is thus to apply these settings inside of the root
CMakeLists.txt file of OpenCV. Find the following line (at line 348 in OpenCV 2.4.2):
add_definitions(-DHAVE_ALLOCA -DHAVE_ALLOCA_H -DHAVE_LIBPTHREAD -DHAVE_UNISTD_H)
and insert the following after it:
message("Setting up Xcode for C++11 with libc++.")
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LANGUAGE_STANDARD "c++0x")
set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX_LIBRARY "libc++")
This sets up the projects for C++11 with the libc++ standard library.
Now configure OpenCV using CMake: choose the “Xcode” generator with the default native compilers.
Unfortunately, building the projects in Xcode fails in project “opencv_ts” (related OpenCV bug):
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:1657:13: 'tr1/tuple' file not found
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:9801:34: No member named 'tr1' in namespace 'std'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:9801:44: Expected ')'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:9802:16: Use of undeclared identifier 't'
[...]
It turns out that these problems are caused by Google Test which is not compatible with C++11 yet. Sylvain Pointeau has a simple workaround for that (adding the line
#define GTEST_USE_OWN_TR1_TUPLE 1 to the top of
ts_gtest.h). However, this doesn’t help with subsequent problems in “opencv_perf_imgproc” and “opencv_perf_video”, which confuse C++11’s
std::tuple with the built-in
std::tr1::tuple:
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:10847:24: No matching conversion for static_cast from 'const std::__1::tuple<double, double>' to 'std::tr1::tuple<double, double, void, void, void, void, void, void, void, void>'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:10847:45: No matching conversion for static_cast from 'const std::__1::tuple<double, double>' to 'std::tr1::tuple<double, double, void, void, void, void, void, void, void, void>'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:10848:9: No matching conversion for static_cast from 'const std::__1::tuple<double, double>' to 'std::tr1::tuple<double, double, void, void, void, void, void, void, void, void>'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:10828:24: No matching conversion for static_cast from 'const std::__1::tuple<int, int>' to 'std::tr1::tuple<int, int, void, void, void, void, void, void, void, void>'
OpenCV-2.4.2/modules/ts/include/opencv2/ts/ts_gtest.h:10828:45: No matching conversion for static_cast from 'const std::__1::tuple<int, int>' to 'std::tr1::tuple<int, int, void, void, void, void, void, void, void, void>'
Instead, it is better to patch Google Test for libc++-compatibility. A simplified version of this patch is to replace the following line (in line 1657 in OpenCV 2.4.2):
# include <tr1/tuple> // NOLINT
with
// C++11 puts its tuple into the ::std namespace rather than ::std::tr1.
// gtest expects tuple to live in ::std::tr1, so put it there.
#include <tuple>  // NOLINT
namespace std {
namespace tr1 {
using ::std::get;
using ::std::make_tuple;
using ::std::tuple;
using ::std::tuple_element;
using ::std::tuple_size;
}
}
This leaves one more compilation error:
OpenCV-2.4.2/modules/flann/include/opencv2/flann/lsh_table.h:196:14: Use of undeclared identifier 'use_speed_'
The suggested solution is to just delete the
if (!use_speed_), which works fine.
Everything else should now compile and link cleanly.
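As a quick sanity check (my own addition, not part of the OpenCV tree), a small translation unit like this should now compile in a project configured as above, since it leans on C++11 language features and the libc++ library:

```cpp
#include <algorithm>
#include <memory>
#include <vector>

// Uses C++11-only features: lambdas, auto, and std::make_shared.
// Returns twice the largest element of 'values' (must be non-empty).
int double_max(const std::vector<int>& values) {
    auto it = std::max_element(values.begin(), values.end(),
                               [](int a, int b) { return a < b; });
    auto boxed = std::make_shared<int>(*it * 2);
    return *boxed;
}
```

If this builds and links cleanly, both the C++11 dialect and the libc++ standard-library selection are in effect.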
I’m trying to apply your c++11 libc++ fix to my OpenCV 2.4.9 build for ios 5.1. The set(CMAKE_XCODE_ATTRIBUTE_CLANG_CXX*) flags don’t seem to be taking. Have you tried this with later versions? I outlined my problem in the following post, but realized it was probably worth asking you directly
I just replied on your question (as cr333). In short: C++11 seems to be working. Try to fix the error and see where that takes you.
This probably has another compilation error if you compile with CUDA. nvcc does not support -std=c++11 flag.
I think you’re right. I didn’t compile with CUDA, so I don’t know what happens. But as CUDA doesn’t appear to support C++11 anyway, you can probably leave out that flag. However, I have no idea if it will link in the end.
Looks like OpenCV-2.4.7 has this sorted, yay. No special options needed for building with XCode 5, which defaults to libc++.
That’s good news 🙂
Localize: Translate Text
As a function
localize.msg('lit-html-example:body'); // for en-GB: I am from England
localize.msg('lit-html-example:body'); // for nl-NL: Ik kom uit Nederland
// ...
Web Component
For use in a web component we have
LocalizeMixin that lets you define namespaces and awaits loading of those translations.
class MessageExample extends LocalizeMixin(LitElement) {
  render() {
    return html`
      <div aria-
        <p>${localize.msg('lit-html-example:body')}</p>
      </div>
    `;
  }
}
Google Translate integration
When Google Translate is enabled, it takes control of the html[lang] attribute. Below, we find a simplified example that illustrates this.
The problem
A developer initializes a page like this (and instructs localize to fetch data for
en-US locale)
<html lang="en-US"></html>
If Google Translate is enabled and set to French, it will change html[lang] to
<html lang="fr">
Now localize will fetch data for locale fr. There are two problems here:
- There might be no available data for locale fr.
- Let's imagine data were loaded for fr. If Google Translate is turned off again, the page content will consist of a combination of different locales.
How to solve this
To trigger support for Google Translate, we need to configure two attributes
<html lang="en-US" data-localize-lang="en-US"></html>
- html[data-localize-lang] will be read by localize and used for fetching data.
- html[lang] will be configured for accessibility purposes (it makes sure the page stays accessible if localize is lazy loaded).
When Google Translate is set to French, we get:
<html lang="fr" data-localize-lang="en-US"></html>
The page is accessible and localize will fetch the right resources.
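The lookup order this enables can be sketched as follows (an illustrative helper written for this page, not the library's actual implementation):

```javascript
// Prefer the attribute that Google Translate leaves untouched, then fall
// back to html[lang], then to a default locale.
function detectLocale(htmlElement) {
  return (
    htmlElement.getAttribute('data-localize-lang') ||
    htmlElement.getAttribute('lang') ||
    'en-GB'
  );
}

// In the browser you would call: detectLocale(document.documentElement)
```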
Create zip file in Java
.... You will also learn how to create a zip file from any file through the java... applications. It is also possible to zip and unzip the files from your Java
Creating a ZIP file in Java
Zip File: Zip file format is the popular method
of data compression. Zip file contains many files in compressed format. You can
Import java IO - Java Beginners
Import java IO for example i know java IO is for input and output. I am using Netbeans5.5.1.
How can i see all the classes related to java IO for example; stream reader, buffer reader
How to create a zip file and how to compare zip size with normal text file
by Roseindia. to create a zip file and how to compare zip size with normal text file Hi, how are you? I hope you are fine. I want program like how to create file zip
Java file zip
In this section, you will learn how to create a zip file.
The ZIP file format is used for the distribution and
storage of files... the zip file.
Now to create a zip file, we have used the package
java.util.zip.
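A minimal, self-contained illustration of that package (file names here are placeholders; this uses the Java 7+ try-with-resources form rather than the older explicit-close style shown in these tutorials):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipOneFile {

    // Compress the file at 'source' into a new zip archive at 'target'.
    public static void zipFile(String source, String target) throws IOException {
        try (FileInputStream in = new FileInputStream(source);
             ZipOutputStream zos =
                 new ZipOutputStream(new FileOutputStream(target))) {
            zos.putNextEntry(new ZipEntry(source)); // entry name inside the zip
            byte[] buffer = new byte[4096];
            int length;
            while ((length = in.read(buffer)) > 0) {
                zos.write(buffer, 0, length);
            }
            zos.closeEntry();
        }
    }

    public static void main(String[] args) throws IOException {
        // Create a small placeholder input file, then zip it.
        Files.write(Paths.get("hello.txt"), "hello zip".getBytes());
        zipFile("hello.txt", "hello.zip");
    }
}
```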
schedule zip
schedule zip Java Code to automatically get and zip only .txt files from folders for every 5 days and also to send the zip files to database tables
How to make a zip file in java
Description:
In this example we will discuss about how to create zip file from... zip file.
Code:
import java.io.*;
import java.util.zip....;}
}
Output:
When you compile and run this program it will create the zip
Changes in Jar and Zip
:
Number of open ZIP files - Prior to Java SE
6, we faced the limitation... in the file you get the different result.
ZIP File Names - Java SE 6...
Changes in Jar and Zip
Encode 5 digit zip code - Java Beginners
as well as a | at the beginning and a | at the end. For example, the zip code 95014...Encode 5 digit zip code I have an assignment to read in a 5 digit zip code, sum the digits, and come up with the check digit. I then need
J2ME
J2ME how to get the color of a pixel in J2ME but not in Java... J2ME.
I am using Canvas. I needed to know, if there is any method to get... line using J2ME
Unzip a ZIP File
the following example. This program shows you how to extract files from a zip
file in which... the zip file.
ZipEntry:
This is the class of java.util.zip.*; package of Java... Unzip a ZIP
Zip File Using Java
;
This example shows how to create a zip file in
java. In other words, we..., WinZip etc. It is also possible to zip and unzip the files from your Java
applications. This example shows how we zip a file through a java
program
Java Util Zip. Zip files/folders inside a folder, without zipping the original source folder.
Java Util Zip. Zip files/folders inside a folder, without zipping the original...) throws
Exception {
ZipOutputStream zip = null;
FileOutputStream fileWriter = null;
fileWriter = new FileOutputStream(destZipFile);
zip = new
Listing Contents of a ZIP File
of a zip file through the java code. Following program helps you for the
appropriate. You can directly copy and paste the code in your java application
for showing...
Listing Contents of a ZIP File
J2ME
J2ME i wann source code to find out the difference between two dates... (ParseException e) {
e.printStackTrace();
}
}
}
i wann j2me code not java code my dear sirs/friends ZipOutputStream
:/
Example: Java Zip
Read at
:
http:/...
ZipOutputStream for writing ZIP files. In order to compress data to a ZIP file,
Java provides
Java IO SequenceInputStream Example
In this tutorial we will learn about...; started from the offset 'off'.
Example :
An example... or concatenate the contents of two files. In this
example I have created two text
J2ME Java Editor plugin
J2ME Java Editor plugin
Extends Eclipse Java Editor support ing J2ME Polish directives, variables and styles.
Edit java files using
convert a file .doc to .zip
with extension .doc now i required with .zip
Hi Friend,
Try the following code:
1)page.jsp:
<%@ page language="java" %>
<HTml>
<HEAD><... + ".zip";
byte[] buffer = new byte[18024];
try{
ZipOutputStream zos
Java Zip Package
Java Zip Package
In Java, the java.util.zip package provides classes for reading... to
include classes for manipulating ZIP files as part of the standard Java APIs
Zip Code Validation - Java Interview Questions
Zip Code Validation Hi,
Anyone can please send me a javascript function of the following
--> Indian postal Zip Code Validation (Only 6 digits... should not be greater than to date Hi friend,
Code for Validate Zip
J2ME - Java Beginners
J2ME I want know about J2ME with coding examples
Java IO StringReader
Java IO StringReader
In this section we will discuss about the StringReader... in Java. In this example I have created a Java class
named...;
String str = "\n Java IO StringReader \n";
try
How to convert multiple files in java to .zip format
How to convert multiple files in java to .zip format i receive...);
}
i want to convert the data from inputStream into Zip Stream and so...();
}
out.close();
fout.close();
System.out.println("Zip File is created
how to create a zip by using byte array
how to create a zip by using byte array hi,
How to convert byte array to zip by using java program.can u plz provide it......
Thanks,
krishna
Create zip file
j2me - Java Beginners
j2me hi, i am new to j2me. can any one say how to write a simple program using j2me. and what r the requirements to develop an application using j2me. please its urgent
Java IO StringWriter
Java IO StringWriter
In this section we will discussed about the StringWriter... in the Java program. In this example I have created a Java
class named...[])
{
String str = "Java StringWriter Example";
try
Java IO PrintWriter
Java IO PrintWriter
In this tutorial we will learn about the the PrintWriter class in Java.
java.io.PrintWriter class is used for format printing of objects...
the PrintWriter in the Java applications. In this example I have created
Zip Folder Using Java
;
In this example we are going to create a zip folder and
all files contained in this folder are zipped by using java. Here we will learn
how to compress files... which have to compress and in the second
object we pass the name of zip folder
Show all the entries of zip file.
Show all the entries of zip file.
In this tutorial, We will discuss how to show all the entries of a zip file. The ZipFile
class is used to read entries of zip files. The entries()
methods of ZipFile class
j2me Hi,
In my j2me application I have used canvas to display an image in fullscreen.In the image there are four points( rectangular areas ). Now I...?
give a sample example.
Please help me giving some idea.
Thanks in advance
Java IO OutputStream
Java IO OutputStream
In this section we will read about the OutputStream class...)
throws IOException
Example :
An example... to the specified file. In this example I have tried to read the
stream of one - Java Beginners
J2ME I have to Create a currency conversion application that converts between three currencies..............pls show me the coding or mail me the code to [email protected] you | http://www.roseindia.net/tutorialhelp/comment/93001 | CC-MAIN-2015-06 | refinedweb | 1,321 | 65.83 |
how to out the current running thread in a java program???
Post your Comment there are two threads running at a time.. when am updating a values in database. both thread halt and stop for moment till it get updated into database... so i dnt want thread to get halts for tht moment of period. whats
Hey @Jino,
Using a try-catch block in your code is always good practice. With a try-catch block, you come to know about the different types of errors and exceptions.
Here is the syntax:
try
{
your block of code
}
catch (Exception e) // if you know the specific type of exception that can occur, catch that type instead of the generic Exception
{
System.out.println(e);
}
Try block
The try block contains a set of statements where an exception can occur. A try block is always followed by a catch block, which handles the exception that occurs in the associated try block. A try block must be followed by a catch block, a finally block, or both.
Syntax of try block
try{
//statements that may cause an exception
}
While writing a program, if you think that certain statements can throw an exception, enclose them in a try block and handle that exception.
Syntax of try catch in java
try
{
//statements that may cause an exception
}
catch (exception(type) e(object))
{
//error handling code
}
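As a concrete, runnable sketch of the pattern above (the class, method, and message names here are my own, not from the original answer):

```java
public class TryCatchDemo {
    // Returns a message describing what happened while parsing the input.
    static String parseOrReport(String input) {
        try {
            int value = Integer.parseInt(input); // may throw NumberFormatException
            return "parsed: " + value;
        } catch (NumberFormatException e) {
            // Handle the specific exception type we expect.
            return "not a number: " + input;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrReport("42"));    // parsed: 42
        System.out.println(parseOrReport("oops"));  // not a number: oops
    }
}
```

Catching the narrow `NumberFormatException` rather than `Exception` keeps unrelated failures visible instead of silently swallowing them.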
With that code saved in a hello.c file, run the following command:
emcc hello.c -s WASM=1 -o hello.html
The options we’ve passed in with the command are as follows:
- -s WASM=1 — Specifies that we want wasm output. If we don’t specify this, Emscripten will just output asm.js, as it does by default.
- -o hello.html — Specifies that we want Emscripten to generate an HTML page to run our code in (and a filename to use), as well as the wasm module and the JavaScript "glue" code to compile and instantiate the wasm so it can be used in the web environment.
At this point in your source directory you should have:
- hello.html — the HTML page to run our code in
- hello.js — the JavaScript glue code
- hello.wasm — the compiled WebAssembly module
Now all that remains is for you to load the resulting
hello.html in a browser that supports WebAssembly. It is enabled by default in Firefox 52+ and Chrome 57+/latest Opera (you can also run wasm code in Firefox 47+ by enabling the
javascript.options.wasm flag in about:config, or Chrome (51+) and Opera (38+) by going to chrome://flags and enabling the Experimental WebAssembly flag.)
If everything has worked as planned, you should see "Hello world" output in the Emscripten console appearing on the web page, and your browser’s JavaScript console. Congratulations, you’ve just compiled C to WebAssembly and run it in your browser! To use a custom HTML template for the next example, save your C code as hello2.c and copy Emscripten's html_template/shell_minimal.html file into your previous new directory.
Now navigate into your new directory (again, in your Emscripten compiler environment terminal window), and run the following command:
emcc -o hello2.html hello2.c -O3 -s WASM=1 --shell-file html_template/shell_minimal.html
The options we've passed are slightly different this time:
- --shell-file html_template/shell_minimal.html — Specifies the HTML template to use for the generated page.
This time the compiler outputs hello2.html, which will have much the same content as the template with some glue code added in to load the generated wasm, run it, etc. Open it in your browser and you'll see much the same output as the last example.
Note: You could specify outputting just the JavaScript "glue" file* rather than the full HTML by specifying a .js file instead of an HTML file in the -o flag, e.g. emcc -o hello2.js hello2.c -O3 -s WASM=1. You could then build your custom HTML completely from scratch, although this is an advanced approach; it is usually easier to use the provided HTML template.
- Emscripten requires a large variety of JavaScript "glue" code to handle memory allocation, memory leaks, and a host of other problems
Calling a custom function defined in C
If you have a function defined in your C code that you want to call as needed from JavaScript, you can do this using the Emscripten ccall() function, and the EMSCRIPTEN_KEEPALIVE declaration (which adds your functions to the exported functions list (see Why do functions in my C/C++ source code vanish when I compile to JavaScript, and/or I get No functions to process?)). Let's look at how this works.
To start with, save the following code as hello3.c in a new directory:
#include <stdio.h>
#include <emscripten/emscripten.h>

int main(int argc, char ** argv) {
    printf("Hello World\n");
}

#ifdef __cplusplus
extern "C" {
#endif

void EMSCRIPTEN_KEEPALIVE myFunction(int argc, char ** argv) {
    printf("MyFunction Called\n");
}

#ifdef __cplusplus
}
#endif
By default, Emscripten-generated code always just calls the main() function, and other functions are eliminated as dead code. Putting EMSCRIPTEN_KEEPALIVE before a function name stops this from happening. You also need to import the emscripten.h library to use EMSCRIPTEN_KEEPALIVE.
Note: We are including the #ifdef blocks so that if you are trying to include this in C++ code, the example will still work. Due to C versus C++ name mangling rules, this would otherwise break, but here we are setting it so that it treats it as an external C function if you are using C++.
Now add html_template/shell_minimal.html into this new directory too, just for convenience (you'd obviously put this in a central place in your real dev environment).
Now let's run the compilation step again. From inside your latest directory (and while inside your Emscripten compiler environment terminal window), compile your C code with the following command. (Note that we need to compile with NO_EXIT_RUNTIME, which is necessary as otherwise when main() exits the runtime would be shut down — necessary for proper C emulation, e.g., atexits are called — and it wouldn't be valid to call compiled code.)
emcc -o hello3.html hello3.c -O3 -s WASM=1 --shell-file html_template/shell_minimal.html -s NO_EXIT_RUNTIME=1 -s "EXTRA_EXPORTED_RUNTIME_METHODS=['ccall']"
If you load the example in your browser again, you'll see the same thing as before!
Now we need to run our new myFunction() function from JavaScript. First of all, open up your hello3.html file in a text editor.
Add a <button> element to the page, and a <script> that calls the exported function when the button is clicked.
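As a sketch of what that page edit might look like (the element id and comment text are my own, but Module.ccall is the Emscripten API introduced above; its arguments are the function name, return type, argument types, and argument values):

```html
<button id="mybutton">Run myFunction</button>
<script>
  document.getElementById("mybutton").addEventListener("click", function() {
    // Invoke the exported C function: no return value, no arguments.
    Module.ccall("myFunction", null, null, null);
  });
</script>
```

When the button is clicked, "MyFunction Called" should appear in the Emscripten console, since the C function prints that string.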
In Rollbase v4.3, we have made a few changes to the language resource files. If you are using these files or making local modifications to them, you need to know about these changes.
Encoding changes
The change we made for v4.3 is the file encoding. From now on, every resource file is UTF-8 encoded. UTF-8 is the preferred encoding for web pages and communication over a network. This makes things more convenient for you, as it allows you to translate directly in your editor of choice. Prior to v4.3, Rollbase read all European languages in ISO-8859 encoding, but now it reads all the resource files in UTF-8 only. The reason behind this is the incompatibility of ISO-8859 with a multilingual environment like Rollbase. We are in an era where we need to support multiple languages in the same sentence. For example, you can see all of English, العربية, 汉语, עִבְרִית, ελληνικά, and ភាសាខ្មែរ on the same web page, and to render all this correctly in the UI, we need an encoding standard that honors all of these character sets. Unicode is that standard, and UTF-8 is the format we need. If you add your own resource files or update the resource translations, please make sure that you save the files as UTF-8 with BOM only; otherwise, you might start seeing encoding issues in the UI.
Namespacing language resource keys
We namespaced language resources so that they can be segregated as admin user resources and non-admin user resources. This was done so that one can gain some cost and performance benefit over the existing convention. We prefixed the admin user resources with ‘newui.admin.’ and the non-admin user resources with ‘newui.eu’. If you want to add support for a particular language, but only for apps developed for end users, then you only need to add a language resource property file containing only entries for ‘newui.eu.’ prefixed resources. This will greatly reduce the cost and effort involved in the translation of resources.
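As an illustration, entries in a namespaced resource file might look like the following sketch (the keys and values are invented for illustration; they are not actual Rollbase resources):

```properties
# Saved as UTF-8 (with BOM) — admin-only UI strings
newui.admin.userList.title=User Management
newui.admin.userList.addButton=Add User

# End-user (non-admin) UI strings — for an end-user-only app,
# a translation file needs only entries with this prefix
newui.eu.login.welcome=Welcome back!
newui.eu.login.submit=Sign in
```

Under this convention, a translation covering only the `newui.eu.` keys is enough for end-user applications, which is where the cost saving comes from.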
Customers Upgrading to v4.3
This section applies only to customers who have created their own translation.
Because the resources in the Rollbase language files have changed so much, customers upgrading to Rollbase 4.3 need to update their existing language resources to integrate the namespacing changes into their language resource files. If you haven't created any translations, you can simply take the latest files from the Rollbase shared/res directory, but if you have your own translations in place, you need to update your existing resource files.
Important Notes:
To update your file with these namespacing changes, please reach out to Rollbase support and send a copy of your language resource files. We will update these files and send them back to you. Please make sure to back up your existing resources.
Also note that, to pick up the v4.3 changes (newly added resources), you still need to continue with the regular process you have been following, and make sure to keep the encoding of the files as UTF-8 only.
This article by Belen Cruz Zapata, the author of the book Android Studio 2 Essentials - Second Edition, focuses on the creation of user interfaces using layouts. Layouts can be created using a graphical view or a text-based view. Since there are over 18,000 Android device types, you will learn about fragmentation across different screen types and how to prepare our application for this issue. We will end this article with basic notions of handling events in our application.
These are the topics we'll be covering in this article:
- Supporting different screens
- Changing the UI theme
- Handling events
Supporting multiple screens
When creating Android applications, we have to take into account the existence of multiple screen sizes and screen resolutions. It is important to check how our layouts are displayed in different screen configurations. To accomplish this, Android Studio provides a functionality to change the virtual device that renders the layout preview when we are in the Design mode.
We can find this functionality in the toolbar and click on it to open the list of available device definitions, as shown in the following screenshot:
Try some of them. The difference between a tablet and a phone-sized device like those from the Nexus line is very noticeable. We should adapt the views to all the screen configurations our application supports to ensure that they are displayed optimally. Note that there are device definitions for Android Wear (square, round, and round chin designs) and for Android TV.
The device definitions indicate the screen size, resolution, and screen density. Android screen densities include ldpi, mdpi, tvdpi, hdpi, xhdpi, and even xxhdpi. Let's see what their values are:
- ldpi : This is low-density dots per inch, and its value is about 120 dpi
- mdpi: This is medium-density dots per inch, and its value is about 160 dpi
- tvdpi: This is medium-density dots per inch, and its value is about 213 dpi
- hdpi: This is high-density dots per inch, and its value is about 240 dpi
- xhdpi: This is extra-high-density dots per inch, and its value is about 320 dpi
- xxhdpi: This is extra-extra-high-density dots per inch, and its value is about 480 dpi
- xxxhdpi: This is extra-extra-extra-high-density dots per inch, and its value is about 640 dpi
The last dashboards published by Google show that most devices have high-density screens (42.3 percent), followed by xhdpi (24.8 percent) and xxhdpi (15.0 percent). Therefore, we can cover 82.1 percent of all the devices by testing our application using these three screen densities. If you want to cover a bigger percentage of devices, test your application using mdpi screens (12.9 percent) as well so the coverage will be 95.0 percent of all devices. The official Android dashboards are available at.
Another issue to keep in mind is the device orientation. Do we want to support the landscape mode in our application? If the answer is yes, then we have to test our layouts in landscape orientation. On the toolbar, click on the layout state option to change the mode either from portrait to landscape or from landscape to portrait.
If our application supports landscape mode and the layout does not get displayed as expected in this orientation, we might want to create a variation of the layout. Click on the first icon of the toolbar, that is, the Configuration to render this layout with inside the IDE option, and select the Create Landscape Variation option as shown in the next screenshot:
A new layout will be opened in the editor. This layout has been created in the resources folder, under the layout-land directory, and it uses the same name as the portrait layout - /src/main/res/layout-land/activity_main.xml. The Android system will decide which version of the layout needs to be used depending on the current device orientation. Now, we can edit the new layout variation such that it perfectly conforms to landscape mode.
Similarly, we can create a variation of the layout for extra-large screens. Select the Create layout-xlarge Variation option. The new layout will be created in the layout-xlarge folder using the same name as the original layout at /src/main/res/layout-xlarge/activity_main.xml. Android divides actual screen sizes into four categories: small, normal, large, and extra large:
- Small: Screens classified in this category are at least 426 dp x 320 dp.
- Normal: Screens classified in this category are at least 470 dp x 320 dp.
- Large: Screens classified in this category are at least 640 dp x 480 dp.
- Extra large: Screens classified in this category are at least 960 dp x 720 dp.
A density-independent pixel (dp) is equivalent to one physical pixel on a 160 dpi screen. The last dashboards published by Google show that most devices have a normal screen size (85.1 percent), followed by large screens (8.2 percent). The official Android dashboards are available at https://developer.android.com/about/dashboards/index.html.
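The dp-to-pixel relationship just described can be sketched as a small calculation (plain Java, no Android dependencies; in a real app you would read the density from DisplayMetrics rather than hardcoding a dpi value):

```java
public class DpConverter {
    // Baseline density: 1 dp == 1 px on a 160 dpi (mdpi) screen.
    private static final float BASELINE_DPI = 160f;

    // Convert density-independent pixels to physical pixels for a given screen dpi.
    public static int dpToPx(float dp, float screenDpi) {
        return Math.round(dp * (screenDpi / BASELINE_DPI));
    }

    public static void main(String[] args) {
        // The same 48 dp measurement at different densities:
        System.out.println(dpToPx(48, 160)); // mdpi   -> 48 px
        System.out.println(dpToPx(48, 240)); // hdpi   -> 72 px
        System.out.println(dpToPx(48, 480)); // xxhdpi -> 144 px
    }
}
```

This is why specifying sizes in dp keeps UI elements roughly the same physical size across the screen densities listed above.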
To display multiple device configurations at the same time, click on the Configuration to render this layout with inside the IDE option in the toolbar and select the Preview All Screen Sizes option, or click on the Preview Representative Sample option to open only the most important screen sizes, as shown in the following screenshot. We can also delete any of the samples by right-clicking on them and selecting the Delete option from the menu. Another useful action of this menu is the Save screenshot option. It allows us to take a screenshot of the layout preview.
If we create different layout variations, we can preview all of them by selecting the Preview Layout Versions option. If we want to preview what the layout looks like for different Android versions, we can use the Preview Android Versions option.
Now that we have seen how to add different components and optimize our layout for different screens, let's start working with themes.
Changing the UI theme
Layouts and widgets are created using the default UI theme of our project. We can change the appearance of the elements of the UI by creating styles. Styles can be grouped to create a theme, and a theme can be applied to an activity or to the whole application. Some themes are provided by default, such as the Material Design or Holo style. Styles and themes are created as resources under the /src/res/values folder.
To continue our example, we are going to change the default colors of the theme that we are using in our app. Using the graphical editor, you can see that the selected theme for our layout is indicated as AppTheme in the toolbar. This theme was created for our project and can be found in the styles file at /src/res/values/styles.xml.
Open the styles file. Android Studio suggests us to use the Theme Editor. You can click on the message link or you can navigate to Tools | Android | Theme Editor to open it. You can see the Theme Editor in the next screenshot:
The left panel shows what different UI components look like. For example, you can view the appearance of the app bar, different types of buttons, text views, or the appearance of the status bar. The right panel of the Theme Editor contains the settings of the theme. You can change the values from the right panel and see how the components change on the left panel of Theme Editor.
In the configuration right panel, you can change the Theme to modify, you can change the Theme parent of the selected theme, and you can change the theme colors. You will note that AppTheme is by default an extension of another theme, Theme.AppCompat.Light.DarkActionBar.
Let's try to change the main color of our app. Follow the next steps:
- Look for the colorPrimary property on the right panel of the Theme Editor.
- Click on the color square of the colorPrimary property. The color selector of the following screenshot will be opened:
- Select a different color and click on the OK button. Note that the theme has changed and now the app bar has the new color in Theme Editor.
- Open your main layout file. The preview of the layout has also changed its color. This theme primary color will be applied to all our layouts due to the fact that we configured it in the theme and not just in the layout.
The specification of the colors is saved in the colors file at /src/res/values/colors.xml. This is the current content of the colors file:
<resources>
    <color name="colorPrimary">#009688</color>
    <color name="colorPrimaryDark">#303F9F</color>
    <color name="colorAccent">#FF4081</color>
</resources>
You can also change the colors from this file. Modify the colorPrimaryDark, save the file, and note that in the Theme Editor, the status bar color has changed to the new color. Switch to your main layout file and observe that the preview of your layout has also changed to show the new color in the status bar.
To change the layout theme completely, click on the theme option from the toolbar in the graphical editor. The theme selector dialog is now opened, displaying a list of the available themes, as shown in the following screenshot:
The themes created in our own project are listed in the Project Themes section. The Manifest Themes section shows the theme configured in the application manifest file (/src/main/AndroidManifest.xml). The All section lists all the available themes.
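To give an idea of what a self-contained theme definition looks like, here is a hedged sketch of a styles.xml entry (the theme name and color values are illustrative, not the ones generated for our project):

```xml
<!-- res/values/styles.xml -->
<resources>
    <!-- A custom theme extending the same parent used by the generated AppTheme -->
    <style name="AppTheme.Green" parent="Theme.AppCompat.Light.DarkActionBar">
        <!-- Main branding color, used by the app bar -->
        <item name="colorPrimary">#009688</item>
        <!-- Darker variant, used by the status bar -->
        <item name="colorPrimaryDark">#00695C</item>
        <!-- Accent color for widgets such as checkboxes and text fields -->
        <item name="colorAccent">#FF4081</item>
    </style>
</resources>
```

Such a theme can then be applied to a single activity or to the whole application through the android:theme attribute in the manifest file.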
Handling events
The user interface would be useless if the rest of the application could not interact with it. Events in Android are generated when the user interacts with our application. All the UI widgets are children of the View class, and they share some events handled by the following listeners:
- OnClickListener: This captures the event when the user clicks on the view element. To configure this listener in a view, use the setOnClickListener method. The OnClickListener interface declares the following method to receive the click event:
public abstract void onClick(View v)
- OnCreateContextMenuListener: This captures the event when the user performs a long click on the view element and we want to open a context menu. To configure this listener in a view, use the setOnCreateContextMenuListener method. The OnCreateContextMenuListener interface declares the following method to receive the long-click event:
public abstract void onCreateContextMenu(ContextMenu menu, View v, ContextMenu.ContextMenuInfo menuInfo)
- OnDragListener: This captures the event when the user drags and drops the event element. To configure this listener in a view, use the setOnDragListener method. The OnDragListener interface declares the following method to receive the drag event:
public abstract boolean onDrag(View v, DragEvent event)
- OnFocusChangeListener: This captures the event when the focus moves from one element to another in the same view. To configure this listener in a view, use the setOnFocusChangeListener method. The OnFocusChangeListener interface declares the following method to receive the change of focus event:
public abstract void onFocusChange(View v, boolean hasFocus)
- OnHoverListener: This captures the event when the user is moving over an element. To configure this listener in a view, use the setOnHoverListener method. The OnHoverListener interface declares the following method to receive the hover event:
public abstract boolean onHover(View v, MotionEvent event)
- OnKeyListener: This captures the event when the user presses any key while the view element has the focus. To configure this listener in a view, use the setOnKeyListener method. The OnKeyListener interface declares the following method to receive the key event:
public abstract boolean onKey(View v, int keyCode, KeyEvent event)
- OnLayoutChangeListener: This captures the event when the layout of a view changes its bounds due to layout processing. To configure this listener in a view, use the addOnLayoutChangeListener method (a view can have several of these listeners registered, which is why the method is add rather than set). The OnLayoutChangeListener interface declares the following method to receive the layout change event:
public abstract void onLayoutChange(View v, int left, int top, int right, int bottom, int oldLeft, int oldTop, int oldRight, int oldBottom)
- OnLongClickListener: This captures the event when the user touches the view element and holds it. To configure this listener in a view, use the setOnLongClickListener method. The OnLongClickListener interface declares the following method to receive the long click event:
public abstract boolean onLongClick(View v)
- OnScrollChangeListener: This captures the event when the scroll position of a view changes. To configure this listener in a view, use the setOnScrollChangeListener method. The OnScrollChangeListener interface declares the following method to receive the scroll change event:
public abstract void onScrollChange(View v, int scrollX, int scrollY, int oldScrollX, int oldScrollY)
- OnTouchListener: This captures the event when the user touches the view element. To configure this listener in a view, use the setOnTouchListener method. The OnTouchListener interface declares the following method to receive the touch event:
public abstract boolean onTouch(View v, MotionEvent event)
In addition to these standard events and listeners, some UI widgets have some more specific events and listeners. Checkboxes can register a listener to capture when its state changes (OnCheckedChangeListener), and spinners can register a listener to capture when an item is clicked (OnItemClickListener).
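The registration-and-callback pattern shared by all of these listeners can be sketched in plain Java, without any Android dependencies (the Widget class and listener interface below are illustrative stand-ins for View and its listener interfaces):

```java
public class ListenerPatternDemo {
    // A minimal stand-in for one of View's listener interfaces.
    interface OnClickListener {
        void onClick(Widget w);
    }

    // A minimal stand-in for a View subclass.
    static class Widget {
        private OnClickListener listener;

        // Mirrors View.setOnClickListener: store the callback for later.
        void setOnClickListener(OnClickListener l) {
            this.listener = l;
        }

        // Mirrors the framework dispatching a touch event to the widget.
        void simulateClick() {
            if (listener != null) {
                listener.onClick(this);
            }
        }
    }

    static int clickCount = 0;

    public static void main(String[] args) {
        Widget button = new Widget();
        button.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(Widget w) {
                clickCount++; // Action when the widget is "clicked"
            }
        });
        button.simulateClick();
        button.simulateClick();
        System.out.println(clickCount); // 2
    }
}
```

The view stores the listener object; the framework, not our code, decides when to invoke its callback method. This inversion of control is exactly what the setter methods above provide.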
The most common event to capture is when the user clicks on the view elements. There is an easy way to handle it—using the view properties. Select the Accept button in our layout and look for the onClick property. This property indicates the name of the method that will be executed when the user presses the button. This method has to be created in the activity associated with the current layout, our main activity (MainActivity.java) in this case. Type onAcceptClick as the value of this property.
Open the main activity to create the method definition. When a view is clicked, the event callback method has to be public with a void return type. It receives the view that has been clicked on as a parameter. This method will be executed every time the user clicks on the button:
public void onAcceptClick(View v) {
    // Action when the button is pressed
}
From the main activity, we can interact with all the components of the interface, so when the user presses the Accept button, our code can read the text from the name field and change the greeting to include the name in it.
To get the reference to a view object, use the findViewById method inherited from the Activity class. This method receives the ID of the component and returns the View object corresponding to that ID. The returned view object has to be cast to its specific class in order to use its methods, such as the getText method of the EditText class, to get the name typed by the user:
public void onAcceptClick(View v) {
    TextView tvGreeting = (TextView) findViewById(R.id.textView_greeting);
    EditText etName = (EditText) findViewById(R.id.editText_name);
    if (0 < etName.getText().length()) {
        tvGreeting.setText("Hello " + etName.getText());
    }
}
In the first two lines of the method, the references to the elements of the layout are retrieved: the text view that contains the greeting and the text field where the user can type a name. The components are found by their IDs, the same IDs that we indicated in the properties of the elements in the layout file. All the resource IDs are included in the R class. The R class is autogenerated in the build phase, and therefore we must not edit it. If this class is not autogenerated, then probably one of our resource files contains an error.
The next line is a conditional statement used to check whether the user typed a name. If they typed a name, the text will be replaced by a new greeting that contains that name.
If the event we want to handle is not the user's click, then we have to create and add the listener by code to the onCreate method of the activity. There are two ways to do this:
- Implementing the listener interface in the activity and then adding the unimplemented methods. The methods required by the interface are the methods used to receive the events.
- Creating a private anonymous implementation of the listener in the activity file. The methods that receive the events are implemented in this object.
Finally, the listener implementation has to be assigned to the view element using the setter methods, such as setOnClickListener, setOnCreateContextMenu, setOnDragListener, setOnFocusChange, setOnKeyListener, and so forth. The listener assignment is usually included in the onCreate method of the activity. If the listener is implemented in the same activity, then the parameter indicated to the setter method is the own activity using the this keyword, as shown in the following code:
Button bAccept = (Button) findViewById(R.id.button_accept); bAccept.setOnClickListener(this);
The activity should then implement the listener and the onClick method required by the listener interface:
public class MainActivity extends Activity implements View.OnClickListener { @Override public void onClick(View view) { // Action when the button is pressed }
If we implement it using a private anonymous class, the code would be the following:
bAccept.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Action when the button is pressed } });
Summary
In this article, you saw different styles, screen sizes, and screen resolutions. You also learned about the different available UI themes. Finally, you learned about events and learned how to handle them using listeners.
Resources for Article:
Further resources on this subject:
- The Art of Android Development Using Android Studio[article]
- Introducing an Android platform[article]
- Android Fragmentation Management[article] | https://www.packtpub.com/books/content/creating-user-interfaces-0 | CC-MAIN-2017-13 | refinedweb | 2,924 | 51.78 |
java.lang.Object
org.netlib.lapack.SSYGVDorg.netlib.lapack.SSYGVD
public class SSYGVD
SSYGVD is a simplified interface to the JLAPACK routine ssygGVD. * * A (input/output) REAL) REAL array, dimension (LDB, N) * On entry, the symmetric (input) INTEGER * The leading dimension of the array B. LDB >= max(1,N). * * W (output) REAL array, dimension (N) * If INFO = 0, the eigenvalues in ascending order. * * WORK (workspace/output) REAL array, dimension (LWORK) * On exit, if INFO = 0, WORK(1) returns the optimal LWORK. * * LWORK (input) INTEGER * The dimension of the array WORK. * If N <= 1, LWORK >= 1. * If JOBZ = 'N' and N > 1, LWORK >= 2*N+1. * If JOBZ = 'V' and N > 1, LWORK >= 1 + N <= 1, LIWORK >= 1. * If JOBZ = 'N': SPOTRF or SSYEVD returned an error code: * <= N: if INFO = i, SSYEVD * * ===================================================================== * * .. Parameters ..
public SSYGVD()
public static void SSYGVD(int itype, java.lang.String jobz, java.lang.String uplo, int n, float[][] a, float[][] b, float[] w, float[] work, int lwork, int[] iwork, int liwork, intW info) | http://icl.cs.utk.edu/projectsfiles/f2j/javadoc/org/netlib/lapack/SSYGVD.html | CC-MAIN-2017-51 | refinedweb | 167 | 67.45 |
2004
Filter by week:
1
2
3
4
5
using MVC methodology with forms and remoting
Posted by cwinston at 1/31/2004 8:24:51 PM
hi, I am trying to develop an application in which I am attempting to implement MVC programming with Forms, but I have yet to be able to get it to work the right way with FLash remoting. I created a custom class for my data model which should be able to handle all the data manipulation for ...
more >>
Gateway class constructor, EVERYTIME!
Posted by bjc160 at 1/31/2004 1:40:13 AM
Is there a way to keep the .net gateway controller from constructing my class everytime there is a method call? I am constructing a large XML doc in the constructor, and building it everytime a session calls for a small part is killing my server! Let me know if you need code, I can't find anything...
more >>
Service Call using a Dynamic Method name
Posted by MacroProd at 1/28/2004 9:48:49 PM
Hello; I have a service set up, and I want to be able to call a method on my service using a variable that has been typed into a text field... so let's say that the following line produces a result from the server: _parent.flashService.getData("tcas", "SELECT RVT_DESC FROM ACU_RVT", "0", "ALL"...
more >>
Accessing results of webServiceConnector with script?
Posted by cdrabik at 1/28/2004 5:46:17 PM
Hello. I am using a webServiceCOnnector component to run a web service behind the scenes to retrieve infoirmation from Cold Fusion. What I would like to do is use the results of this call in script, but when I try to read from the webServiceConnector.results I get undefined. However, if I bind th...
more >>
FIRST TIME (CF + FLASH MX) PROBLEMS!
Posted by wade1 at 1/28/2004 5:35:29 PM
Hello I would imagine >>
_result query
Posted by phil ashby at 1/28/2004 2:44:52 PM
Hi, I'm using Flash Remoting and everything is great...except... I'm receiving the data back from the CF component in the _result handler, and giving it a name e.g. getremotedata.cff_getdirreports({var_nt:+_root.NTname}); function cff_getdirreports_result(dirreps) { trace(d...
more >>
Remoting Problems
Posted by wade1 at 1/28/2004 2:18:22 PM
Hello I would image >>
Remoting Access Error
Posted by lc_kman at 1/27/2004 8:54:30 PM
Hello, I have the problem that my flash movie won't work from a remote location. If I am on the computer that hosts the server and use: -It runs fine If I'm on the SAME COMPUTER, but type in the web address for that same server: -It ...
more >>
Don't see what you're looking for? Search DevelopmentNow.com.
one netconnection for multiple movies...
Posted by targetplanet at 1/27/2004 5:12:14 PM
if I am loading swfs into a main movie, can I just create a netconnection in the main movie, and use it for call in the loaded swfs? I haven't tried this yet, but I just was wondering if anyone knows about this.. ...
more >>
datagrid and default responder error -- please help
Posted by Niemo at 1/26/2004 9:14:39 PM
I'm trying to populate a data grid from the click handler of a button. My code looks like this: callHistory_pb.populateGrid = function() { contractService.getCallHistory(historyResponder,contracts_lb.getSelectedItem().label); } historyResponder.onResult(history_rs) { callLog_dg.alt...
more >>
problem in working with flash remoting on MAC OSX
Posted by abhuj at 1/23/2004 3:27:37 PM
hi friends, i am in very big problem, can any one help me out......... problem is, i have develope one application with intefgration of java , flash mx and remoting, so my problem is when i am testing the application on the Mac i have installed the flash remoting components fo...
more >>
Problem if Flash RemotingMX
Posted by MarceloDias at 1/23/2004 2:14:30 PM
HI I am with this problem in the RemotingMX. I use the Windows 2000 Server-sp3. Somebody can help me? escription: "Flash Remoting MX threw an exception during method invocation: Request denied. Macromedia Flash Remoting MX is operating as a developer edition. To upgrade Macromedia Flash...
more >>
Flash remoting & CF
Posted by DWrookie at 1/23/2004 1:02:36 PM
I'm new to Flash Remoting & CF. If I understand correctly, you have to use CF with FR in order to connect and retrieve data from a Database on a server. I'm interested in FR but, I'm just small time and find the cost of FR and CF to be way too high. Are there any alternatives for Flash Web Appl...
more >>
how to send user's input in CF page to flash movie?
Posted by xh_yue at 1/22/2004 5:41:44 PM
Hi, guys, In my coldfusion page i ask user to enter their id and check Access DB to verify its existance, then I want to pass that id in my flash movie and display it. How can i do it? Here's my CFC file: <cfcomponent> <cffunction name="sendID" access="remote"> <cfreturn <h>1500</h>> ...
more >>
Include "NetServices.as" not found
Posted by WorkRequest at 1/20/2004 8:11:26 PM
I am using Flash MX Professional 2004. I got the following error when I test my Flash. I didn't install Flash Remote, I think it is already included in the Flash MX Pro 2004, isn't it? **Error** Screen=application, layer=Layer 1, frame=1:Line 1: Error opening include file NetServices.as: File no...
more >>
What this code meaning?
Posted by Yashnoo at 1/20/2004 7:00:18 PM
Hi everyone: I want to run an example of dataGrid with Flash remoting.I use Tomcat (jsp server). The Remote doc says: "13. Inside the parenthesis, type ?? for the location of the Flash Remoting servlet that handles all remote requests. Use the port n...
more >>
remoting and active directory
Posted by Anton_FA at 1/20/2004 1:17:56 AM
Hi there I was wondering if anyone had used Flash remoting to display and query the Active Directory? if so, does anyone have any examples? thanks ...
more >>
Problem debugging a flash remoting program
Posted by arjunurs at 1/16/2004 8:46:33 PM
Hi, The flash remoting application does not display any output and it doesn't show any error either. I have a apache server with php and mysql running on a windows 2000 machine. I have checked the configuration of each of these and they seem to be working fine. Flash Source /*** Section 1 *...
more >>
Can Flash remoting respond to server events?
Posted by zip_2 at 1/16/2004 7:26:09 PM
In the documentation I see that Flash Remoting MX can call web services or other services on the server. But can it respond to server-side events? Can the server call all listening Flash applications to perform some action? This could be a group chat application, or updating a data set that chang...
more >>
Put a CF struct into a Flash Datagrid
Posted by fragnrock at 1/16/2004 4:03:36 PM
I have some problems putting the content of a coldfusion structure into a datagrid component. If I use this AS code, Nothing appears : function freestruct_Result(result){ //mydrid is my datagrid component mygrid.dataProvider = result; } My cfc component looks like this : <cf...
more >>
Problem with remoting
Posted by fragnrock at 1/15/2004 5:14:29 PM
I've got a problem with remoting. I'm using flashMX pro 2004 and coldFusion's CFC. It's work great under FlashMX environment but it does'nt work when I publish my flash on my web server. Source of my flash // Include the required classes for Flash Remoting #include "NetServices.as" // Set...
more >>
Passing array from CF to Flash
Posted by lshake at 1/15/2004 4:07:06 PM
This is my first attempt at flash remoting. I can't seem to find a simple answer anywhere for this. It may be that the answers are there, but just over my head. I want my Flash application to pass a variable to a CF page, which then runs a query using that variable, then passes the results back...
more >>
What needed?
Posted by fragnrock at 1/15/2004 9:38:19 AM
Hi, I have coldfusionMX Pro server installed on my server. I want to make some flash remoting pages (FlashMX and Coldfusion). I'd like to know what product I have to install on my server to do flash remoting. Thanks ...
more >>
Flash remoting only works for localhost on PetMarket application on ASP.NE
Posted by scarpenter at 1/15/2004 1:25:42 AM
Hi, I have installed the Macromedia PetMarket application on the ASP.Net platform, and I have gotten it to work using localhost in the url: http:/localhost/petmarket I have a licensed copy of Flash Remoting, and tried to access the petmarket application using the actual IP address, instead of l...
more >>
Netconnection Debugger is missing from mx2004
Posted by rabble at 1/13/2004 2:12:25 PM
I don't seem to be able to find the netconnection debugger in Flash mx professional 2004, although i have no problem finding it in flash mx. anyone know where it is? i purchased an upgrade. does it use something else? robert butz [email protected] ...
more >>
Flash Remoting The Deffinative Guide .NET assemblies
Posted by peterR_H at 1/12/2004 4:42:25 PM
Hi all, bit of a remoting /.NET question here. using System; using FlashGateway.IO; namespace com.oreilly.frdg { public class BookAssembly { public ASObject GetBook() { ASObject aso = new ASObject(); aso.ASType = "Book"; aso.Add("title", "FRDG"); aso.Add("author", "Tom Mu...
more >>
Help: Adding SOAP Headers!!
Posted by MartinRandall at 1/12/2004 9:37:53 AM
I'm trying to write ActionScript that will use a webservice and add a soap header to the function calls. I've tried addHeader() as described in the Help section, but when I monitor the SOAP requests from my webservice, there is no header added. Does anyone have any examples of addHeader actually ...
more >>
Can't "create()"...
Posted by KarstenTS at 1/11/2004 1:22:45 PM
Hey, everybody! I got a strange problem... I can establish a connection to my EJB, and I can call methods from that, but I can't call the create() - Method. That's something I don't understand. Why is that? A Servlet I built to check for being correct works fine and the WeatherSample works fine...
more >>
The legal status of AMFPHP
Posted by Pavils Jurjans at 1/9/2004 12:24:18 PM
Hello, Prior to pursuing some PHP project, I'd need to know the official status of AMFPHP in the eyes of MacroMedia - is it sort of "illegal", since it is developed by hacking into the undocumented AMF format, or is it "Okay" to use and there are no kind of bitterness feelings about it from MM...
more >>
Problems with Remoting in a Flash Form Based Application
Posted by Flich at 1/7/2004 8:00:25 PM
Has anyone experienced any trouble in using the new Remoting Components with Flash MX 2004 Pro in a form based application. When I go to compile the application it throws over 100 errors??? If I build the same application using the timeline, it compiles correctly?? Is there some special way...
more >>
"Hello World" with Flash-CFC
Posted by akerkur at 1/7/2004 5:04:02 PM
Hi all, I am trying to get a "hello world" working from CFC to Flash. Here is my code : ========== CFC CODE [helloWorld.cfc under /Components] <cfcomponent> <cffunction name="sayHello" access="remote" returntype="any"> <cfset FLASH. <cfreturn FLASH.result> </...
more >>
Do you want to run flash under LINUX???
Posted by atiol NO[at]SPAM iol.pt at 1/7/2004 3:06:50 AM
Hello all! Are you a flash developer and/or flash designer? Do you want to use flash under your Linux OS?? If so, please say it out loud! :) Reply to this message. Maybe if you all say YES to this, Macromedia could think about developing Flash for Linux. What do you think?? Say wha...
more >>
flash server control interacting with .net web controls on the same page?
Posted by sokon at 1/6/2004 1:30:16 PM
I've got a problem with the fact that on one hand you've got a html page with an swf file and on the other hand there's the aspx-file which provides the code for the swf. What if i want to use the swf file on the same page with some asp.net web controls? How is it possible to interact between the...
more >>
Need Remoting to pull in SQL data?
Posted by c1natra at 1/6/2004 1:27:45 AM
Hi - Quick and easy question. Do I have to drop the $1000 to get Flash Remoting if I want to pull in data from my SQL Server. I'm running Coldfusion 5 on a windows server, and I'd like to make some SWFs that pull in the same data my CF pages are. Thanks for your time...bb ...
more >>
amfphp IIS installation problems - noobie
Posted by bleepbloop at 1/5/2004 3:38:45 PM
HI I am trying to setup my (home) window xp machine for flash remoting. I am running IIS in xp pro and I have succesully got php installed and running correctly, and I have opted for mysql as a database server and that seems to be running fine. However I know that I have to install the amfphp ...
more >>
Switching to 2004 has done some strange things
Posted by sjf at 1/5/2004 3:35:59 PM
First the app I was testing moving from MX to 2004 was a finished peice that works. I was making some editions and decided to try 2004. I am very pleased I backed everything up. The production process was totally smooth, connected, sent and retreived with no problems. I made my changes and moved ...
more >>
Reading a DataSet from a .NET Webservice
Posted by MartinRandall at 1/5/2004 3:17:28 PM
Hi, I'm trying to read a .Net dataset from a webservice in Flash 2004 MX Pro. I've set up a test web service using the code below... [WebMethod] public DataSet GetDataSet() { DataSet ds = new DataSet("MySet"); DataTable dt = new DataTable("AgeTable"); ds.Tables.Add( dt ); dt.Column...
more >>
Flash Remoting Help - Will pay.
Posted by parallel2 at 1/4/2004 3:32:45 AM
A newbie question, but, nonetheless, I am in the process of publishing a swf, via flash mx2004 professional. I am calling a cfc via remoting... it works locally, it doesn't work when i publish to the server. My mapping is correct. I am able to call the cfc from coldfusion successfully, but whe...
more >>
Flash remoting with PHP
Posted by Olaf at 1/2/2004 2:46:29 PM
Hallo, What is the best choice for Flash remoting using PHP and Mysql? I have tested AMFPHP, and it looks OK. But what is the reason that the last news are from august this year? Do they stop the development? And what is with PHP-object, anyone with good points about this project? Thanks ...
more >>
Method of extracting pictures
Posted by gripper01007 NO[at]SPAM yahoo.com at 1/1/2004 5:39:17 AM
Is there a way of extracting the pics from an on-line flash presentation, and how is it done ? I'm wondering because ther are some really nice pics taken, but there are some excellent ones as well, and those would be nice to have....
more >>
·
·
groups
Questions? Comments? Contact the
d
n | http://www.developmentnow.com/g/72_2004_1_0_0_0/macromedia-flash-flash-remoting.htm | crawl-001 | refinedweb | 2,673 | 74.08 |
let
a=[1,2,3], then i let
b=torch.Tensor(a) , my pycharm’s background become yellow like that
is there exist a elegent way to convert a list to a tensor? or is my ide’s fault?
let
a=[1,2,3], then i let
b=torch.Tensor(a) , my pycharm’s background become yellow like that
is there exist a elegent way to convert a list to a tensor? or is my ide’s fault?
Convert list to tensor using this
a = [1, 2, 3] b = torch.FloatTensor(a)
Your method should also work but you should cast your datatype to float so you can use it in a neural net
Hi,
First of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine.
But about your question:
When you are on GPU,
torch.Tensor() will convert your data type to
Float. Actually,
torch.Tensor and
torch.FloatTensor both do same thing.
But I think better way is using
torch.tensor() (note the case of ‘t’ character). It converts your data to tensor but retains data type which is crucial in some methods. You may know that PyTorch and numpy are switchable to each other so if your array is int, your tensor should be int too unless you explicitly change type.
But on top of all these,
torch.tensor is convention because you can define following variables:
device,
dtype,
requires_grad, etc.
Note: using
torch.tensor() allocates new memory to copy the data of tensor. So if you want to avoid copying, use
torch.as_tensor(numpy_ndarray).
Bests
Nik
1000 thx! You reply really help me a lot!
You are welcome mate!
Something about @zimmer550 answer that you need to convert to float to use your tensors in NN, is a rule of thumb, so in some cases like methods available in
functional package etc, you need
Long data type etc. So best approach is to retain the data type as it is and change it explicitly when you to enable you debug much faster when data type inconsistency exists.
In PyCharm,
Ctrl+Q (I use my config from 2017 so it may be changed.) shows the documentation within your editor and always I recommend you to read it fully. There are many notes that can save you a lot of memory or runtime computation by only using a argument or triggering a function etc.
I am doing something very similar but I have a (nested) list of tensors. The simplest example I have is the following:
import torch # trying to convert a list of tensors to a torch.tensor x = torch.randn(3, 1) xs = [x, x] # xs = torch.tensor(xs) xs = torch.as_tensor(xs)
but I get the following error:
x = torch.randn(3, 1) xs = [x, x] # xs = torch.tensor(xs) xs = torch.as_tensor(xs) Traceback (most recent call last): File "/Users/brando/anaconda3/envs/automl-meta-learning/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3343, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-30-f846b7bfae21>", line 4, in <module> xs = torch.as_tensor(xs) ValueError: only one element tensors can be converted to Python scalars
any ideas what is going on? Btw, both give the same error.
I guess the following works but I am unsure what is wrong with this solution:
# %% import torch # trying to convert a list of tensors to a torch.tensor x = torch.randn(3) xs = [x.numpy(), x.numpy()] # xs = torch.tensor(xs) xs = torch.as_tensor(xs) print(xs) print(xs.size()) # %%())
output:()) tensor([[]]]) torch.Size([2, 3, 3])
Hi,
I think torch.tensor — PyTorch 1.7.0 documentation and torch.as_tensor — PyTorch 1.7.0 documentation have explained the difference clearly but in summary,
torch.tensor always copies the data but
torch.as_tensor tries to avoid that! In both cases, they don’t accept sequence of tensors.
The more intuitive way is stacking in a given dimension which you can find here: How to turn a list of tensor to tensor? - PyTorch Forums
The problem with your approach is that you convert your tensors to numpy, then you will lose grads and break computational graph but stacking preserves it.
Actually you have posted an answer similar that issue too!
How to turn a list of tensor to tensor? - #10 by Brando_Miranda
hahaha, yea I see I didn’t make a very solid memory/understanding when I wrote that last year (or it’s been to long since?). Perhaps I can finally sort this out in my head, though in my defence, there does seem to be a lot of discussions surrounding this topic (which isn’t helping to digest this):
it seems that the best pytorchthoning solution comes from either knowing
torch.cat or
torch.stack. In my use case I generate tensors and conceptually need to nest them in lists and eventually convert that to a final tensor (e.g. of size
[d1, d2, d3]). I think the easiest solution to my problem append things to a list and then give it to
torch.stack to form the new tensor then append that to a new list and then convert that to a tensor by again using
torch.stack recursively.
For a non recursive example I think this works…will update with a better example in a bit:
# %% import torch # stack vs cat # cat "extends" a list in the given dimension e.g. adds more rows or columns x = torch.randn(2, 3) print(f'{x.size()}') # add more rows (thus increasing the dimensionality of the column space to 2 -> 6) xnew_from_cat = torch.cat((x, x, x), 0) print(f'{xnew_from_cat.size()}') # add more columns (thus increasing the dimensionality of the row space to 3 -> 9) xnew_from_cat = torch.cat((x, x, x), 1) print(f'{xnew_from_cat.size()}') print() # stack serves the same role as append in lists. i.e. it doesn't change the original # vector space but instead adds a new index to the new tensor, so you retain the ability # get the original tensor you added to the list by indexing in the new dimension xnew_from_stack = torch.stack((x, x, x, x), 0) print(f'{xnew_from_stack.size()}') xnew_from_stack = torch.stack((x, x, x, x), 1) print(f'{xnew_from_stack.size()}') xnew_from_stack = torch.stack((x, x, x, x), 2) print(f'{xnew_from_stack.size()}') # default appends at the from xnew_from_stack = torch.stack((x, x, x, x)) print(f'{xnew_from_stack.size()}') print('I like to think of xnew_from_stack as a \"tensor list\" that you can pop from the front') print() lst = [] print(f'{x.size()}') for i in range(10): x += i # say we do something with x at iteration i lst.append(x) # lstt = torch.stack([x for _ in range(10)]) lstt = torch.stack(lst) print(lstt.size()) print()
def tensorify(lst): """ List must be nested list of tensors (with no varying lengths within a dimension). Nested list of nested lengths [D1, D2, ... DN] -> tensor([D1, D2, ..., DN) :return: nested list D """ # base case, if the current list is not nested anymore, make it into tensor if type(lst[0]) != list: if type(lst) == torch.Tensor: return lst elif type(lst[0]) == torch.Tensor: return torch.stack(lst, dim=0) else: # if the elements of lst are floats or something like that return torch.tensor(lst) current_dimension_i = len(lst) for d_i in range(current_dimension_i): tensor = tensorify(lst[d_i]) lst[d_i] = tensor # end of loop lst[d_i] = tensor([D_i, ... D_0]) tensor_lst = torch.stack(lst, dim=0) return tensor_lst
here is a few unit tests (I didn’t write more tests but it worked with my real code so I trust it’s fine. Feel free to help me by adding more tests if you want):
def test_tensorify(): t = [1, 2, 3] print(tensorify(t).size()) tt = [t, t, t] print(tensorify(tt)) ttt = [tt, tt, tt] print(tensorify(ttt)) if __name__ == '__main__': test_tensorify() print('Done\a')
related for variable length:
I didn’t read those very carefully but I assume they must be padding somehow (probably need to calculate the largest length/dimension to padd is my guess and then do some sort of recursion like I did above).
If someone is looking into the performance aspects of this, I’ve done a small experiment. In my case, I needed to convert a list of scalar tensors into a single tensor.
import torch torch.__version__ # 1.10.2 x = [torch.randn(1) for _ in range(10000)] torch.cat(x).shape, torch.stack(x).shape # torch.Size([10000]), torch.Size([10000, 1]) %timeit torch.cat(x) # 1.5 ms ± 476 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit torch.cat(x).reshape(-1,1) # 1.95 ms ± 534 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each) %timeit torch.stack(x) # 5.36 ms ± 643 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
My conclusion is that even if you want to have the additional dimension of
torch.stack, using
torch.cat and then
reshape is better. | https://discuss.pytorch.org/t/best-way-to-convert-a-list-to-a-tensor/59949 | CC-MAIN-2022-27 | refinedweb | 1,530 | 68.36 |
Sorry for the confusing title, as I am new to C++
Basically, what I am trying to do is, to read a simple test.txt file's content, which are just two words actually: "Hello World" and desplay thier hexadecimal content in a shell, CMD or command line.(Whatever it is called), if the text thing is confusing you here, just think of it, as if I am trying to open a simple .wav file and read it's data and display it on CMD, like you would using
#include <iostream> #include <fstream> using namespace std; int main(){ fstream myFile; myFile.open("test.txt"); //reading goes here, the problem is displaying the hex data on the editor (CMD ) return 0; }
I am new to this, and downloaded codeblocks, so I can only work with the CMD as of now. I can open files with the fstream class, and write in it, but sadly I can not read it's hexa content.
Is there anyway to do this by simple means in C++? | https://www.daniweb.com/programming/software-development/threads/456871/reading-hexadecimal-numbers-into-shell | CC-MAIN-2019-04 | refinedweb | 172 | 73.1 |
I bet you thought I was done! Well, this is the last one for tonight, I promise, and believe me, I saved the best for last. Mantissa 0.4.1 brings some really fantastic features.
First up is the Tabular Data Browser. This is some pretty awesome stuff. Basically it is not even worth trying to describe with words. Go signup for a free ClickChronicle account and see for yourself. After you have accumulated a few clicks, you'll be able to browse them and page around and so forth: the HTML interface which allows you to do this is the TDB Controller. It is an Athena Widget that lets one page through the results of an Axiom query. The columns are customizable. The actions are customizable. The skin is customizable. All you have to do to get one of these things in your Mantissa application is instantiate TabularDataView with the customizations you desire and drop it onto a page someplace. Bam.
So now you're really excited about using the TDB. You need to write a Mantissa application before you can take advantage of it, though. Fortunately, the other big improvement in this release is that writing a Mantissa application has gotten way easier. What you do now is write an IOffering plugin. It looks something like this:
from axiom import iaxiom, scheduler, userbase
from xmantissa import website, offering, provisioning
from clickchronicle import clickapp, publicpage, prods
import clickchronicle
chronicler = provisioning.BenefactorFactory(
name = u'clickchronicle',
description = u'An application with which to chronicle the clicks of you.',
benefactorClass = clickapp.ClickChronicleBenefactor)
clicks = provisioning.BenefactorFactory(
name = u'clickchronicle-clicks',
description = u'Add some clicks to the click limit',
benefactorClass = prods.ClickIncreaser,
dependencies = [chronicler])
plugin = offering.Offering(
name = u"ClickChronicle",
description =.
""",
siteRequirements = [
(iaxiom.IScheduler, scheduler.Scheduler),
(userbase.IRealm, userbase.LoginSystem),
(None, website.WebSite)],
appPowerups = [
clickapp.StaticShellContent,
publicpage.ClickChroniclePublicPage],
benefactorFactories = [chronicler, clicks])
Actually, it might look exactly like this. This is the actual ClickChronicle IOffering plugin. Note that it is essentially all content: the boilerplate consists almost exclusively of importing modules and appeasing the Gods of Python syntax. This plugin assembles all the pieces which make up ClickChronicle into a coherent unit which can be deployed on a Mantissa server.
siteRequirements enumerate which must be available from the "site store", which contains such things as the credentials database and the webserver configuration.
appPowerups defines which Powerups are required by the "app store", at least one of which is created for each application (ie, ClickChronicle); these can (and in this case, do) define the look of the public page for this application, as well as any other behavior which is not clearly attributable to a particular user.
benefactorFactories, predictably, defines a list of factories which create benefactor objects. A benefactor in Mantissa is an Item responsible for endowing a user with new functionality. In the case of ClickChronicle, there are two kinds of benefactors: one endows a user with the application itself, letting them record clicks and later browse them; the other raises the limit on the number of clicks the user is allowed to store. Note also that the latter depends on the former, indicating that benefactors produced by
clicks cannot endow a user who has not first been endowed by a benefactor produced by
chronicler.
Confused yet? If not, awesome. Go write some kick-ass Mantissa application. If so, don't worry. I'm going to be writing some more in-depth documentation about the Offering system in the days to come.
> Confused yet?
Well, not exactly, but...
> If not, awesome. Go write some kick-ass Mantissa application.
...been trying for a while now. The relationship between the axiomatic plugins (clickcmd.py) and the Mantissa IOffering ones (clickoff.py) is not clear.
> If so, don't worry. I'm going to be writing some more in-depth
> documentation about the Offering system in the days to come.
Still waiting. | http://as.ynchrono.us/2005/12/mantissa-041-released_20.html | CC-MAIN-2016-18 | refinedweb | 646 | 50.02 |
We know that a string is a collection of characters. Let's take another look at strings and learn more about them.
There are two different types of strings in C++.
- C-style string
- std::string (part of the standard library)
In this chapter, we will focus on C-style string.
C-style String
We can think of string as an array of characters, like "Sam" is a string and it is an array of characters 'S', 'a', 'm' and '\0'.
Look at the character at the 3rd index. It represents null character. ASCII value of '\0' is 0 and that of normal 0 is 48. It represents the termination of a string. So, if we declare :-
char name[ ]= "Sam";
It is :-
['S','a','m','\0']
We can also declare a string variable using characters as follows.
char name[ ]= { 'S', 'a', 'm', '\0'};
Let's see two examples to print a string, one without and the other with for loop.
#include <iostream> using namespace std; int main(){ char str[ ] = "Hello"; cout << str << endl; return 0; }
In the above example, we printed the whole string at once. Now, let's see the same example of printing individual characters of the string using for loop.
#include <iostream> using namespace std; int main(){ char str[ ] = "Hello"; int i; for( i=0; i<6; i++) { cout << str[i]; } return 0; }
In the first example, we printed the whole string at once. Whereas in the second example, we printed a character each time.
Taking string input
Now let's see how to input string from the user with an example.
#include <iostream> int main() { using namespace std; char name[20]; //declaring string 'name' cin >> name; //taking string input cout << name << endl; //printing string return 0; }
Peter
char name[20]; - By writing this statement, we declared an array of characters named 'name' and gave it an array size of 20 because we don't know the exact length of the string. Thus, it occupied a space in the memory having a size that is required by 20 characters. So, our array 'name' cannot store more than 20 characters.
cin >> name; - This is used to simply input a string from the user as we do for other datatypes and there is nothing new in this.
For example, if in the above example, we input Sam Brad as the name, then the output will only be Sam because the code considers only one word and terminates after the first word (after a whitespace).
Taking multi-word string input
We can take input of a string that consists of more than one word by using cin.getline. Let's see an example:
#include <iostream> int main() { using namespace std; char name[20]; //declaring string 'name' cin.getline(name, sizeof(name)); //taking string input cout << name << endl; //printing string return 0; }
Sam Bard
cin.getline(name, sizeof(name)); - cin.getline takes two arguments, the string variable and the size of that variable. We have used sizeof operator to get the size of string variable 'name'.
Pointers and String
Strings can also be declared using pointers. Let's see an example.
#include <iostream> using namespace std; int main(){ char name[ ]= "Sam"; char *p; p = name; /* for string, only this declaration will store its base address */ while( *p != '\0') { cout << *p; p++; } return 0; }
In the above example, since p stores the address of name[0], therefore the equal to '\0'.
Passing Strings to Functions
This is the same as we do with other arrays. The only difference is that this is an array of characters. That's it !
Let's see an example.
#include <iostream> using namespace std; void display( char ch[] ){ cout << ch; } int main(){ char arr[30]; cout << "Enter a word" << endl; cin >> arr; display(arr); return 0; }
cpp
cpp
Predefined string functions
We can perform different kinds of string functions like joining of 2 strings, comparing one string with another or finding the length of the string. Let's have a look at the list of such functions.
These predefined functions are part of the cstring library. Therefore, we need to include this library in our code by writing
#include <cstring>
We will see some examples of strlen, strcpy, strcat and strcmp as these are the most commonly used.
strlen(s1) calculates the length of string s1.
White space is also calculated in the length of the string.
#include <iostream> #include <cstring> using namespace std; int main(){ char name[ ]= "Hello"; int len1, len2; len1 = strlen(name); len2 = strlen("Hello World"); cout << "Length of " << name << " = " << len1 << endl; cout << "Length of " << "Hello World" << " = " << len2 << endl; return 0; }
Length of Hello World = 11
strcpy(s1, s2) copies the second string s2 to the first string s1.
#include <iostream> #include <cstring> using namespace std; int main(){ char s2[ ]= "Hello"; char s1[10]; strcpy(s1, s2); cout << "Source string " << s2 << endl; cout << "Target string " << s1 << endl; return 0; }
Target string Hello
strcat(s1, s2) concatenates(joins) the second string s2 to the first string s1.
#include <iostream> #include <cstring> using namespace std; int main(){ char s2[ ]= "World"; char s1[20]= "Hello"; strcat(s1, s2); cout << "Source string " << s2 << endl; cout << "Target string " << s1 << end pairs of characters.
#include <iostream> #include <cstring> using namespace std; int main(){ char s1[ ]= "Hello"; char s2[ ]= "World"; int i, j; i = strcmp(s1, "Hello"); j = strcmp(s1, s2); cout << i << endl; cout << j << endl; return 0; }
-15
2 D Array of Characters
Same as 2 D array of integers and other data types, we have 2 D array of characters also.
For example, we can write
char names[4][10] = {
"Andrew",
"Kary",
"Brown",
"Lawren"
};
Since string is used extensively, C++ provides a built-in string data type which you will learn in the next chapter.
Without practice, your knowledge is poison.
-Chanakya | https://www.codesdope.com/cpp-string/ | CC-MAIN-2021-25 | refinedweb | 964 | 68.2 |
“CSS3” was a massive success for CSS. A whole bunch of stuff dropped essentially at once that was all very terrific to get our hands on in CSS. Gradients,
animation/
transition,
border-radius,
box-shadow,
transform… woot! And better, the banner name CSS3 (and the spiritual umbrella “HTML5”) took off and the industry was just saturated in learning material about it all. Just look at all the “CSS3”-dubbed material that’s been published around here at CSS-Tricks over the years.
No doubt loads of people boned up on these technologies during that time. I also think there is no doubt there are lots of people that haven’t learned much CSS since then.
So what would we tell them?
Some other folks have speculated similarly. Scott Vandehey in “Modern CSS in a Nutshell” wrote about his friend who hasn’t kept up with CSS since about 2015 and doesn’t really know what to learn. I’ll attempt to paraphrase Scott’s list and what’s changed since the days of CSS3.
Preprocessors are still widely used since the day of CSS3, but the reasons to use them have dwindled, so maybe don’t even bother. This includes more newfangled approaches like polyfilling future features. This also includes Autoprefixer. CSS-in-JS is common, but only on projects where the entire workflow is already in JavaScript. You’ll know when you’re on a relevant project and can learn the syntax then if you need to. You should learn Custom Properties, Flexbox, and Grid for sure.
Sounds about right to me. But allow me to make my own list of post-CSS3 goodies that expands upon that list a smidge.
What’s new since CSS3?What’s new since CSS3?
And by “CSS3” let’s say 2015 or so.
.card { display: grid; grid-template-columns: 150px 1fr; gap: 1rem; } .card .nav { display: flex; gap: 0.5rem; }
LayoutLayout
You really gotta learn Flexbox and Grid if you haven’t — they are really cornerstones of CSS development these days. Even more so than any feature we got in CSS3.
Grid is extra powerful when you factor in subgrid and masonry, neither of which is reliable cross-browser yet but probably will be before too long.
html { --bgColor: #70f1d9; --font-size-base: clamp(1.833rem, 2vw + 1rem, 3rem); --font-size-lrg: clamp(1.375rem, 2vw + 1rem, 2.25rem); } html.dark { --bgColor: #2d283e; }
CSS Custom PropertiesCSS Custom Properties
Custom properties are also a big deal for several reasons. They can be your home for design tokens on your project, making a project easier to maintain and keep consistent. Color theming is a big use case, like dark mode.
You can go so far as designing entire sites using mostly custom properties. And along those lines, you can’t ignore Tailwind these days. The approach of styling an entire site with classes in HTML strikes the right chord with a lot of people (and the wrong chord with a lot of people, so no worries if it doesn’t jive with you).
@media (prefers-reduced-motion: reduce) { * { animation-duration: 0.001s !important; } } @media (prefers-color-scheme: dark) { :root { --bg: #222; } }
Preference QueriesPreference Queries
Preference queries are generally
@media queries like we’ve been using to respond to different browsers sizes for year, but now include ways to detect specific user preferences at the OS level. Two examples are
prefers-reduced-motion and
prefers-color-scheme. These allow us to build interfaces that more closely respect a user’s ideal experience. Read Una’s post.
.block { background: hsl(0 33% 53% / 0.5); background: rgb(255 0 0); background: /* can display colors no other format can */ color(display-p3 0.9176 0.2003 0.1386) background: lab(52.2345% 40.1645 59.9971 / .5);} background: hwb(194 0% 0% / .5); }
Color ChangesColor Changes
The color syntax is moving to functions that accept alpha (transparency) without having the change the function name. For example, if you wanted pure blue in the CSS3 days, you might do
rgb(0, 0, 255). Today, however, you can do it no-comma style (both work):
rgb(0 0 255), and then add alpha without using a different function:
rgb(0 0 255 / 0.5). Same exact situation for
hsl(). Just a small nicety, and how future color functions will only work.
Speaking of future color syntaxes:
- P3 color or “display-p3” is coming via the
color()function.
lab()color is another new color space. It “represents the entire range of color that humans can see.”
lch()is another new color space.
hwb()is another new color space for humans.
body { font-family: 'Recursive', sans-serif; font-weight: 950; font-variation-settings: 'MONO' 1, 'CASL' 1; }
Variable FontsVariable Fonts
Web fonts became a big thing in CSS3. Now there are variable fonts. You might as well know they exist. They both unlock some cool design possibilities and can sometimes be good for performance (like no longer needing to load different font files for bold and italic versions of the same font, for example). There is a such thing as color fonts too, but I’d say they haven’t seen much popularity on the web, despite the support.
.cut-out { clip-path: polygon(25% 0%, 75% 0%, 100% 50%, 75% 100%, 25% 100%, 0% 50%); }
.mask { mask: url(mask.png) right bottom / 100px repeat-y; }
.move-me { offset-path: path('M 5 5 m -4, 0 a 4,4 0 1,0 8,0 a 4,4 0 1,0 -8,0'); animation: move 3s linear infinite; } @keyframes move { 100% { offset-distance: 100%; } }
PathsPaths
SVG has also exploded since CSS3. You can straight up crop any element into shapes via
clip-path, bringing SVG-like qualities to CSS. Not only that, but you can animate elements along paths, float elements along paths, and even update the paths of SVG elements.
These all feel kind of spirtually connected to me:
clip-path— allows us to literally crop elements into shapes.
masks — similar to clipping, but a mask can have other qualities like being based on the alpha channel of the mask.
offset-path— provides a path that an element can be placed on, generally for the purpose of animating it along that path.
shape-outside— provides a path on a floated element that other elements wrap around.
d— an SVG’s
dattribute on a
<path>can be updated via CSS.
.disable { filter: blur(1px) grayscale(1); } .site-header { backdrop-filter: blur(10px); } .styled-quote { mix-blend-mode: exclusion; }
CSS FiltersCSS Filters
There is a lot of image manipulation (not to mention other DOM elements) that is possible these days directly in CSS. There is quite literally
filter, but its got friends and they all have different uses.
These all feel kind of spiritually connected to me:
filter— all sorts of useful Photoshop-like effects like brightness, contrast, grayscale, saturation, etc. Blurring is a really unique power.
background-blend-mode— again, evocative of Photoshop in how you can blend layers. Multiply the layers to darken and combine. Overlay to mix the background and color. Lighten and darken are classic effects that have real utility in web design, and you never know when a more esoteric lighting effect will create a cool look.
backdrop-filter— the same abilities you have with
filter, but they only apply to the background and not the entire element. Blurring just the background is a particularly useful effect.
mix-blend-mode— the same abilities you have with
background-blend-mode, but for the entire element rather than bring limited to backgrounds.
import "";
body { background: paint(extra-confetti); height: 100vh; margin: 0; }
HoudiniHoudini
Houdini is really a collection of technologies that are all essentially based around extending CSS with JavaScript, or at least at the intersection of CSS and JavaScript.
- Paint API — returns an image that is built from
<canvas>APIs and can be controlled through custom properties.
- Properties & Values API / Typed OM — gives types to values (e.g.
10px) that would have otherwise been strings.
- Layout API — create your own
displayproperties.
- Animation API
Combined, these make for some really awesome demos, though browser support is scattered. Part of the magic of Houdini is that it ships as Worklets that are pretty easy to import and use, so it has the potential to modularize powerful functionality while making it trivially easy to use.
my-component { --bg: lightgreen; } :host(.dark) { background: black; } my-component:part(foo) { border-bottom: 2px solid black; }
Shadow DOMShadow DOM
The Shadow DOM comes up a bit if you’ve played with
<svg> and the
<use> element. The “cloned” element that comes through has a shadow DOM that has limitations on how you can select “through” it. Then, when you get into
<web-components>, it’s the same ball of wax.
If you find yourself needing to style web components, know there are essentially four options from the “outside.” And you might be interested in knowing about native CSS modules and constructible stylesheets.
The CSS Working GroupThe CSS Working Group
It’s notable that the CSS working group has its own way of drawing lines in the sand year-to-year, noting where certain specs are at a given point in time:
These are pretty dense though. Sure, they’re great references and document things where we can see what’s changed since CSS3. But no way I’d send a casual front-end developer to these to choose what to learn.
Yeah — but what’s coming?Yeah — but what’s coming?
I’d say probably don’t worry about it. ;)
The point of this is catching up to useful things to know now since the CSS3 era. But if you’re curious about what the future of CSS holds in store…
- Container queries will be a huge deal. You’ll be able to make styling choices based on the size of a container element rather than the browser size alone. And it’s polyfillable today.
- Container units will be useful for sizing things based on the size of a container element.
- Independant transforms, e.g.
scale: 1.2;, will feel more logical to use than always having to use
transform.
- Nesting is a feature that all CSS preprocessor have had forever and that developers clearly like using, particularly for media queries. It’s likely we’ll get it in native CSS soon.
- Scoping will be a way to tell a block of CSS to only apply to a certain area (the same way CSS-in-JS libraries do), and helps with the tricky concept of proximity.
- Cascade layers open up an entirely new concept of what styles “win” on elements. Styles on higher layers will beat styles on lower layers, regardless of specificity.
- Viewport units will greatly improve with the introduction of “relative” viewport lengths. The super useful ones will be
dvhand
dvw, as they factor in the actual usable space in a browser window, preventing terrible issues like the browser UI overlapping a site’s UI.
- The
:has()selector is a like a parent selector plus.
- Scroll timelines will be awesome.
Bramus Van Damme has a pretty good article covering these things and more in his “CSS in 2022” roundup. It looks like 2022 should be a real banner year for CSS. Perhaps more of a banner year than the CSS3 of 2015.
resetand
revertkeywords, allowing real removal of previously set values without breaking stuff
New functions like
clamp()which along with grid and flex help building responsive layout without many media queries
Content-based values for width and height, like
min-contentand
fit-content
aspect-ratioand
object-fit
Presentional content via the
content:property extending to allow content replacement on real elements, as well as new pseudo-elementals like ::mark
upcoming :has()
Good ones.
Great article…The new aspect-ratio property is a new addition to CSS which is very handy… | https://css-tricks.com/whats-new-since-css3/?utm_source=swlinks-tw | CC-MAIN-2022-40 | refinedweb | 1,978 | 65.12 |
Dependency was not found using a npm package
I wanted to use the npm nomnoml package in Quasar and got an error during build.
This dependency was not found: fs in ./node_modules/nomnoml/dist/nomnoml.js To install it, you can run: npm install --save fs
my script part of the page looks like this:
<script> var nomnoml = require('nomnoml'); var src = '[<start>st]->[<state>plunder]' export default { mounted(){ console.log(nomnoml.renderSvg(src)); } } </script>
Did i miss something? I’ve read that is has to do something with webpack and that i have to add something like…
node: {
fs: ‘empty’
}
I hope you can help
- benoitranque last edited by
you should be using ES6 import instead of require
ok thanks. i’ve changed to import * as nomnoml from ‘nomnoml’;
Now i got an WEBPACK_IMPORTED_MODULE_0_nomnoml__.color255 is not a function :). I should say that i’ve also got this error by using require()
It could be that I’m just to stupid to use this library :D
btw. the fs error i solved with
extendWebpack (cfg) {
cfg.node = {
fs: ‘empty’
}
}
Hey, from looking at the package source (I assume it is this one) my guess is that for waht ever reason webpack loads the cli version.
There are two releases, one for the web and one for nedjs and the cli:
In your error it says
This dependency was not found: fs in ./node_modules/nomnoml/dist/nomnoml.js
This means that webpack tries to resolve the
fspackage, which is a default package for nodejs to access the filesystem and which is not usable in the browser, only in nodjs apps.
The strange thing is that the path in the error points to the non cli version.
Also your use of import is wrong.
import * as ...is used when multiple exports are defined on the package. In this case you should use
import nomnoml from 'nomnoml'.
Please try and report if this fixes the error.
P.S.: This forum supports markdown. If you want to post code blocks, please escape them with thre backticks (```) at the beginning and at the end of the code block
Hi,
thanks for your help I changed my code and think that it’s nearly correct.
import nomnom from 'nomnoml' export default { name:"App", methods:{ click(){ var canvas = document.getElementById('target'); var src = '[nomnoml] is -> [awesome]'; nomnom.draw(canvas,src) } } }
There is just one error left
Error in event handler for "click": "TypeError: _.object is not a function"
I’ve tried the same code in a simple vue template from this git account and it works, but in quasar I encounter the error.
_.object is not a functionlooks like it is referring to underscore.js which is a dependency of nomnoml.
Do you happen to include underscore.js as dependency on your project? Maybe it is a version mismatch
Due to this problem I just created a new quasar project for testing, which has no underscore.js as a module dependency. I will take a closer look into the dependencies of the template from the repo and the .quasar deps. Maybe I’ll find some difference.
I’ve just noticed the lodash dependency… I will look into that
Ok i solved the problem by downgrading the lodash package using
npm install [email protected]
I hope this will not interfere with some other quasar functionalities :D
Now it’s working fine and i can use the package.
thanks for your help :)
- rstoenescu Admin last edited by
Quasar doesn’t uses lodash, so you’re safe with whatever version that you need. | http://forum.quasar-framework.org/topic/1977/dependency-was-not-found-using-a-npm-package/2 | CC-MAIN-2018-13 | refinedweb | 598 | 71.85 |
C#; a producer that sends a single message, and a consumer that receives messages and prints them out. We'll gloss over some of the detail in the .NET client .NET client library
RabbitMQ speaks multiple protocols. This tutorial uses AMQP 0-9-1, which is an open, general-purpose protocol for messaging. There are a number of clients for RabbitMQ in many different languages. We'll use the .NET client provided by RabbitMQ.
The client supports .NET Core as well as .NET Framework 4.5.1+. This tutorial will use RabbitMQ .NET client 5.0 and .NET Core so you will ensure you have it installed and in your PATH.
You can also use the .NET Framework to complete this tutorial however the setup steps will be different.
RabbitMQ .NET client 5.0 and later versions are distributed via nuget.
This tutorial assumes you are using powershell on Windows. On MacOS and Linux nearly any shell will work.
First lets verify that you have .NET Core toolchain in PATH:
dotnet --help
should produce a help message.
Now let's generate two projects, one for the publisher and one for the consumer:
dotnet new console --name Send mv Send/Program.cs Send/Send.cs dotnet new console --name Receive mv Receive/Program.cs Receive/Receive.cs
This will create two new directories named Send and Receive.
Then we add the client dependency.
cd Send dotnet add package RabbitMQ.Client dotnet restore cd ../Receive dotnet add package RabbitMQ.Client dotnet restore
Now we have the .NET project set up we can write some code.
We'll call our message publisher (sender) Send.cs and our message consumer (receiver) Receive.cs. The publisher will connect to RabbitMQ, send a single message, then exit.
In Send.cs, we need to use some namespaces:
using System; using RabbitMQ.Client; using System.Text;
Set up the class:
class Send { public static void Main() { ... } }
then we can create a connection to the server:
class Send { public static void Main() { var factory = new ConnectionFactory() { HostName = "localhost" }; using (var connection = factory.CreateConnection()) { using (var channel = connection.CreateModel()) { ... } } } }:
using System; using RabbitMQ.Client; using System.Text; class Send { public static void Main() { var factory = new ConnectionFactory() { HostName = "localhost" }; using(var connection = factory.CreateConnection()) using(var channel = connection.CreateModel()) { channel.QueueDeclare(queue: "hello", durable: false, exclusive: false, autoDelete: false, arguments: null); string message = "Hello World!"; var body = Encoding.UTF8.GetBytes(message); channel.BasicPublish(exchange: "", routingKey: "hello", basicProperties: null, body: body); Console.WriteLine(" [x] Sent {0}", message); } Console.WriteLine(" Press [enter] to exit."); Console.ReadLine(); } }
Declaring a queue is idempotent - it will only be created if it doesn't exist already. The message content is a byte array, so you can encode whatever you like there.
When the code above finishes running, the channel and the connection will be disposed. That's it for our publisher.
Here's the whole Send.cs 50 MB free) and is therefore refusing to accept messages. Check the broker logfile to confirm and reduce the limit if necessary. The configuration file documentation will show you how to set disk_free_limit.
As for the consumer, it is pushed messages from RabbitMQ. So unlike the publisher which publishes a single message, we'll keep the consumer running continuously to listen for messages and print them out.
The code (in Receive.cs) has almost the same using statements as Send:
using RabbitMQ.Client; using RabbitMQ.Client.Events; using System; using System.Text;
Setting up is the same as the publisher; we open a connection and a channel, and declare the queue from which we're going to consume. Note this matches up with the queue that send publishes to.
class Receive { public static void Main() { var factory = new ConnectionFactory() { HostName = "localhost" }; using (var connection = factory.CreateConnection()) { using (var channel = connection.CreateModel()) { channel.QueueDeclare(queue: "hello", durable: false, exclusive: false, autoDelete: false, arguments: null); ... } } } } is what EventingBasicConsumer.Received event handler does.
using RabbitMQ.Client; using RabbitMQ.Client.Events; using System; using System.Text; class Receive { public static void Main() { var factory = new ConnectionFactory() { HostName = "localhost" };); }; channel.BasicConsume(queue: "hello", autoAck: true, consumer: consumer); Console.WriteLine(" Press [enter] to exit."); Console.ReadLine(); } } }
Here's the whole Receive.cs class.
Open two terminals.
Run the consumer:
cd Receive dotnet run
Then run the producer:
cd Send dotnet run
The consumer will print the message it gets from the publisher via RabbitMQ. The consumer will keep running, waiting for messages (Use Ctrl-C to stop it), so try running the publisher from another terminal.
Time to move on to part 2 and build a simple work queue. | https://www.rabbitmq.com/tutorials/tutorial-one-dotnet.html | CC-MAIN-2018-13 | refinedweb | 764 | 53.98 |
ok in this program I'm trying to make the pointer variable totalW a function parameter that will return the number of w's found in the string entered by the user but it doesn't seem to work ....I know it has to do something with the pointers but I'm at a loss as to how to fix it
output:output:Code:#include <cstdlib> #include <iostream> #include <cstring> #include <cctype> using namespace std; const int SIZE = 1001; int numberOfW(int *totalW); int main() { int total = 0; numberOfW(&total); cout << total << " yay did it work?"; system("PAUSE"); return 0; } int numberOfW(int *totalW){ char letter[SIZE]; cout << "Input a string of characters: " << endl; cin.getline(letter, 1000); for (int i = 0; i <= letter[i]; i++){ if(isalpha (letter[i]) && letter[i] == 'w'){ totalW++; } else cout <<"X"; } return *totalW; system("PAUSE"); }
Input a string of characters:
wewe
XX0 yay did it work?Press any key to continue . . .
any suggestions:? | http://cboard.cprogramming.com/cplusplus-programming/115476-little-lost-confused-my-pointers.html | CC-MAIN-2015-11 | refinedweb | 159 | 51.72 |
This notebook was put together by [Jake Vanderplas]() for PyCon 2015. Source and license info is on [GitHub]().
In this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.
from __future__ import print_function, division %matplotlib inline import numpy as np import matplotlib.pyplot as plt # Use seaborn for plotting defaults import seaborn as sns; sns.set()
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
from sklearn.datasets import load_digits digits = load_digits() X = digits.data y = digits.target
Let's fit a K-neighbors classifier
from sklearn.neighbors import KNeighborsClassifier knn = KNeighborsClassifier(n_neighbors=1) knn.fit(X, y)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski', metric_params=None, n_neighbors=1, p=2, weights='uniform')
Now we'll use this classifier to predict labels for the data
y_pred = knn.predict(X)
Finally, we can check how well our prediction did:
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
1797 / 1797 correct
It seems we have a perfect classifier!
Question: what's wrong with this?
Above we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) X_train.shape, X_test.shape
((1347, 64), (450, 64))
Now we train on the training data, and validate on the test data:
knn = KNeighborsClassifier(n_neighbors=1) knn.fit(X_train, y_train) y_pred = knn.predict(X_test) print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
438 / 450 correct
This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:
from sklearn.metrics import accuracy_score accuracy_score(y_test, y_pred)
0.97333333333333338
This can also be computed directly from the
model.score method:
knn.score(X_test, y_test)
0.97333333333333338
Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
for n_neighbors in [1, 5, 10, 20, 30]: knn = KNeighborsClassifier(n_neighbors) knn.fit(X_train, y_train) print(n_neighbors, knn.score(X_test, y_test))
1 0.973333333333 5 0.982222222222 10 0.971111111111 20 0.955555555556 30 0.96
We see that in this case, a small number of neighbors seems to be the best option.
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0) X1.shape, X2.shape
((898, 64), (899, 64))
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1)) print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
0.983296213808 0.982202447164
Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
from sklearn.cross_validation import cross_val_score cv = cross_val_score(KNeighborsClassifier(1), X, y, cv=10) cv.mean()
0.97614938602520218
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the
cv parameter above. Let's do 10-fold cross-validation:
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
array([ 0.93513514, 0.99453552, 0.97237569, 0.98888889, 0.96089385, 0.98882682, 0.99441341, 0.98876404, 0.97175141, 0.96590909])
This gives us an even better idea of how well our model is doing. we'll use this to fit a quadratic curve to the data.
model = PolynomialRegression(2) model.fit(X, y) y_test = model.predict(X_test) plt.scatter(X.ravel(), y) plt.plot(X_test.ravel(), y_test) plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
model = PolynomialRegression(30) model.fit(X, y) y_test = model.predict(X_test) plt.scatter(X.ravel(), y) plt.plot(X_test.ravel(), y_test) plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y))).
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
from IPython.html.widgets import interact def plot_fit(degree=1, Npts=50): X, y = make_data(Npts, error=1) X_test = np.linspace(-0.1, 1.1, 500)[:, None] model = PolynomialRegression(degree=degree) model.fit(X, y) y_test = model.predict(X_test) plt.scatter(X.ravel(), y) plt.plot(X_test.ravel(), y_test) plt.ylim(-4, 14) plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y))) interact(plot_fit, degree=[1, 30], Npts=[2, 100]);.learning_curve import validation_curve def rms_error(model, X, y): y_pred = model.predict(X) return np.sqrt(np.mean((y - y_pred) ** 2)).
We've gone over several useful tools for model validation.
These tools are powerful means of evaluating your model on your data. | https://nbviewer.ipython.org/github/jakevdp/sklearn_pycon2015/blob/master/notebooks/05-Validation.ipynb | CC-MAIN-2022-33 | refinedweb | 937 | 53.78 |
Step 1: Components and Tools
To build this project you will need the following components and tools;
Components:
- 1x 405nm Laser Module
- 2x 180 degree Servos
- 1x 9V Wall Wart
- 1x MMA7361 Accelerometer
- 1x 5V Regulator
- 5x 22uF Capacitors
- 1x Microcontroller (I used an Arduino Mini)
- 1x Breadboard (long or short is fine)
- 1x Protoboard (Breadboard style)
- 1x 2.1mm Female Barrel Jack
- Glow in the dark material (I used this)
- Wire
Tools:
- Soldering station
- Pliers
- Micro USB to standard USB
- Optional: Multimeter to check voltages
Step 2: Design and Simulate
To paint with light, we need a device to read and display user input, a power system, a device to actually move the laser, and an Arduino to manage it all:
- Motion Controller: An accelerometer will measure tilting gestures. This is how the user chooses where the laser points. A neo-pixel ring will display how the controller is being tilted. A pushbutton turns the laser on and off.
- Servos: Two servos will move the laser pointer. One will move left and right (x axis). The other servo will be glued sideways onto the other servo, allowing it to move up and down (y axis). This allows us to point the laser anywhere in front of us.
- Power Systems: A wall wart and regulator will power the project. The wall wart powers the Arduino, which in turn powers the accelerometer (3V3), neo-pixel (5v), and laser (5V). The wall wart also powers the regulator, which in turn powers the servos. Servos, and motors in general, induce a lot of noise. They create spikes of power and cause signal issues. The regulator helps separate the servos from the rest of the system so everything runs smoothly.
- Microntroller: An Arduino will read in the accelerometer values. It will then move the servos accordingly and light up the corresponding neo-pixel LED.
The simulation below shows the entire system working together. The top two power supplies and four resistors are there to trick the circuit into thinking we are tilting the accelerometer. Move the voltage dials to 'fake tilt' the motion controller. The servos will move all the way down or up because the accelerometer stand-in is constantly telling the Arduino to make the servos move up or down. You will also see the neo-pixel ring display where the device is 'tilted'.
Step 3: Building the Controller
The controller contains the accelerometer, neo-pixel ring, and pushbutton. This is what the user interacts with. We want the accelerometer to be in the center of the protoboard controller so everything is balanced. The neo-pixel ring can be placed anywhere on the controller. I chose to have it in the center of the controller, encircling the accelerometer. The button can go to the right or left depending on your preference.
Soldering the accelerometer and neo-pixel ring to the protoboard can be challenging because we want the devices to look nice and make solid connections. There are only a few thru holes on the neo-pixel display and we don't want these connections to short with the connections on the accelerometer.
I first soldered the neo-pixel V+ pin to the + rail on the protoboard. You can use a single male header, or a small piece of wire to make this connection. When soldering, make sure to not burn the neo-pixel! Next solder the neo-pixel GND and IN pins to the protoboard.
I then soldered the accelerometer to the protoboard. Make sure the two ends of the accelerometer are not electrically connected. Use the gap in the middle of the protoboard to make an electrical gap between both sides of the accelerometer.
Next we need to solder the button. Solder the four leads from the button to an empty spot on the protoboard.
Now that the three components are soldered to the protoboard, we need to get the connections wired to the Arduino. The connections we need to focus on are:
- Neo-Pixel
  - V+ pin
  - GND
  - IN
- Accelerometer
  - Vcc
  - GND
  - XOUT
  - YOUT
Step 4: Servos!
Now we need to set up the servos. For the laser to turn left and right, we have the x axis servo. I hot glued the servo to a piece of solid cardboard (so the rotating part of the servo is perpendicular to the ground). For the laser to go up and down, we have the y axis servo. This servo is hot glued sideways onto the rotating part of the x axis servo. Gluing one servo onto another means that both axes of motion can be achieved at the same time. Once the laser is connected it will be able to pan up and down, and left and right. This allows a spherical 180 degree range of motion. In other words, it can point the laser at anything in front of it.
Step 5: Code
Now that we have the hardware setup, lets figure out how to control it all!
We start our Arduino sketch by including the libraries that let us control the servos and the neo-pixel ring. To get the neo-pixel library working, you will also need to download this library and place it in the Libraries folder in your directory. For Windows machines this can be found at: 'C:\Program Files (x86)\Arduino\libraries'. The servo library was preinstalled in the library folder when you first downloaded the IDE. For more info on servos with Arduino, look here.
#include <Adafruit_NeoPixel.h> //install this library to easily control the neo-pixel ring
#include <Servo.h> //include this library to easily control servos
Next we need to initialize the neo-pixel ring and the servos. This means we are creating an object that is a neo-pixel ring and a servo respectively. Whenever we use 'strip' the computer knows we are dealing with a neo-pixel ring and it will use the Adafruit_NeoPixel library. Using the objects xServo or yServo will use the Servo library. The #define lines simply mean that wherever there is the word PIN, it will be replaced with A2, and STRIPSIZE will be replaced with 16. #define is a pre-compiler directive. Learn more about #define here.
#define PIN A2 //neo-pixel control ring
#define STRIPSIZE 16 //tell your code the size of the ring

Adafruit_NeoPixel strip = Adafruit_NeoPixel(STRIPSIZE, PIN, NEO_GRB + NEO_KHZ800); //initialize the ring, call it 'strip'
Servo xServo; //initialize the x servo and call it xServo
Servo yServo; //initialize the y servo and call it yServo
Now we can add the global variables, the things we will use and change throughout the entire program. The first four variables are used in the mapping function later in the code. They can be increased or decreased to calibrate the controller.
//THE ACCELEROMETER IS NOT VERY ACCURATE AND NEEDS CALIBRATION
//ADJUST THESE VALUES TO CALIBRATE YOUR CONTROLLER
//These values tell the mapping function how to translate the values
int xLeft = 280;
int xRight = 365;
int yForward = 295;
int yBack = 377;

int brightness = 50; //set the brightness to 50 (out of 255)

int xRaw; //x axis from accelerometer before being mapped
int yRaw; //y axis from accelerometer before being mapped
int x; //x axis from accelerometer after being mapped
int y; //y axis from accelerometer after being mapped

//start the servos at position 0
int servoX = 0;
int servoY = 0;
Let's write the setup function now. Remember that void setup() runs once when powered on. You can choose to initialize the serial monitor or leave it out; it can be incredibly helpful in debugging though! The next three lines of code start the ring, set its brightness, and turn off all the LEDs. This sets the stage for us to use the ring later in the code. The next couple of lines declare the x and y axis pins (analog 5 and analog 4 respectively) as inputs. The couple of lines after that use '.attach()', which tells the Arduino on which pin to find each servo. '.write()' is what we use to actually move the servos (from 0, to the midpoint 90, to 180).
void setup() {
  Serial.begin(9600);
  strip.begin(); //start the neo-pixel ring
  strip.setBrightness(20); //set the brightness to 20 (out of 255)
  strip.show(); //initialize all pixels to 'off'
  pinMode(A5,INPUT); //tell the Arduino to get x axis measurements from analog pin 5
  pinMode(A4,INPUT); //tell the Arduino to get y axis measurements from analog pin 4
  xServo.attach(8); //tell the Arduino that the x servo is attached to digital pin 8
  yServo.attach(9); //tell the Arduino that the y servo is attached to digital pin 9
  //start both servos at the middle position (0 is left, 90 is middle, 180 is right)
  xServo.write(90);
  yServo.write(90);
}
Next we will look at the void loop() function. This continuously runs while the board is powered. In this program, the loop does nothing but call other functions. Breaking your code up into functions that do specific tasks can be a great way to organize your code. getAccelValues() will read the controller tilt, map the results to easy-to-use variables, and update the associated global variables. displayTilt() takes the newly updated variables and decides which LEDs need to be lit up to display the controller tilt. It then lights up the corresponding LEDs. writeServos() takes the values from getAccelValues() and moves the servos accordingly.
void loop() {
  //the loop does nothing but call the functions that do the important stuff
  getAccelValues(); //start by getting the accelerometer values and mapping them to easy to use numbers
  displayTilt(); //determine how the controller is tilted and light up the corresponding pixel
  writeServos(); //move the servos depending on how the controller is tilted
}
Let's take a deeper look into the getAccelValues() function. It starts by reading in the values from analog pin 5 and analog pin 4. The couple of lines after that take the raw values and map them to easy-to-use values from 6 to -6. Using integers from 6 to -6 makes it easy to interface with the neo-pixel ring since there is a discrete number of LEDs (16 in our case). To learn more about the useful mapping function, read this. The servoX and servoY lines of code do the same thing but are meant for controlling the servos. Changing these mapping values will change how quickly the servos move.
void getAccelValues(){
  xRaw = analogRead(A5); //read in the x tilt from the accelerometer
  yRaw = analogRead(A4); //read in the y tilt from the accelerometer
  x = map(xRaw,xLeft,xRight,6,-6); //map the raw x value to work well with the ring, it goes from 6 to -6 because of the orientation of the accelerometer on the controller
  y = map(yRaw,yForward,yBack,6,-6); //map the raw y value to work well with the ring, it goes from 6 to -6 because of the orientation of the accelerometer on the controller
  servoX = map(xRaw,xLeft,xRight,-2,2); //map the raw x value to work well with the servo, the smaller the number the slower and more accurate the servo will be
  servoY = map(yRaw,yForward,yBack,2,-2); //map the raw y value to work well with the servo, the smaller the number the slower and more accurate the servo will be
}
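Arduino's map() used above is just a linear rescale done with integer arithmetic. A Python equivalent (an illustrative reimplementation based on the formula in the Arduino reference, not the library source) makes its behavior easy to check:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    """Linear rescale, like Arduino's map() (integer arithmetic).

    C's integer division truncates toward zero, so we use int()
    on the quotient rather than Python's floor division.
    """
    num = (x - in_min) * (out_max - out_min)
    den = in_max - in_min
    return int(num / den) + out_min

# Raw x-tilt 280..365 mapped onto 6..-6, as in getAccelValues():
print(arduino_map(280, 280, 365, 6, -6))  # 6  (fully tilted one way)
print(arduino_map(365, 280, 365, 6, -6))  # -6 (fully tilted the other way)
```

Note that the output range is inverted (6 down to -6) simply by passing the bounds in reverse order.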
displayTilt() was the tedious part of the code. How does one get the ring to emulate gravity? How will the red LED always be at the lowest point? While it might not be the most efficient method, I decided to connect each LED to a specific x and y axis behavior. Refer to the second picture on this step. Each LED corresponds to a specific amount of tilt of the controller. Based on this I wrote 16 if statements that control which LED to turn red. An else statement turns all the LEDs green if the controller is held flat. The last two lines push the results to the ring. In other words, they light up the ring. The delay gives the system time to 'do its job'.
void displayTilt(){
  strip.clear(); //start by clearing everything

  /*
  THE FOLLOWING LINES OF CODE CHECK THE X AND Y VALUES,
  IT MAPS EACH PIXEL ON THE RING TO A RANGE OF X AND Y VALUES
  setPixel(); TAKES THE CORRESPONDING PIXEL AND LIGHTS THAT ONE UP AS RED AND THE REST AS BLUE
  */

  //top right
  if(x>0 && y>0 && x==y)setPixel(4);
  else if(x>0 && y>0 && x<y)setPixel(5);
  else if(x>0 && y>0 && x>y)setPixel(3);
  //top left
  else if(x<0 && y>0 && -x==y)setPixel(8);
  else if(x<0 && y>0 && -x<y)setPixel(7);
  else if(x<0 && y>0 && -x>y)setPixel(9);
  //bottom right
  else if(x>0 && y<0 && x==-y)setPixel(0);
  else if(x>0 && y<0 && x<-y)setPixel(15);
  else if(x>0 && y<0 && x>-y)setPixel(1);
  //bottom left
  else if(x<0 && y<0 && -x==-y)setPixel(12);
  else if(x<0 && y<0 && -x<-y)setPixel(13);
  else if(x<0 && y<0 && -x>-y)setPixel(11);
  //top middle
  else if(x==0 && y>0)setPixel(6);
  //left middle
  else if(y==0 && x<0)setPixel(10);
  //right middle
  else if(y==0 && x>0)setPixel(2);
  //bottom middle
  else if(x==0 && y<0)setPixel(14);
  //no tilt, light everything up green
  else{
    int i=0;
    for(i=0;i<16;i++){
      strip.setPixelColor(i, 0, brightness, 0);
    }
    delay(200);
  }

  strip.show(); //light up the neo-pixel!
  delay(30); //wait before displaying the next configuration
}
You may have noticed setPixel(). This is another function I wrote. Its job is to set one of the ring's LEDs to red and the rest blue.
void setPixel(int pixel){
  strip.setPixelColor(pixel, brightness, 0, 0); //set the passed in pixel to be red
  //then fill in the rest of the ring with blue
  int i=0;
  for(i=0;i<16;i++){
    if(i!=pixel)strip.setPixelColor(i, 0, 0, brightness);
  }
}
Lastly we want to take all of the information we have gathered and actually move the servos. We first read in the current position of each servo (somewhere between 0 and 180). We will move each servo depending on the direction and magnitude of the tilt (which we found in getAccelValues()). In other words, if you tilt the controller really far to the left, the x servo will quickly move to the left. Tilt the controller slightly upwards and the servo will slowly pan upwards. We also want to make sure we don't ask the servos to move more than they can. The if statement before each .write() command keeps the system from trying to move less than 0 or more than 180.
void writeServos(){
  int xServoCurrent = xServo.read(); //read in the current position of the x servo
  //as long as the value won't be out of the range of the servo, move the servo according to the tilt of the controller
  if(((xServoCurrent + servoX) > 5) && ((xServoCurrent + servoX) < 175) && servoX)
    xServo.write(xServoCurrent + servoX);

  int yServoCurrent = yServo.read(); //read in the current position of the y servo
  //as long as the value won't be out of the range of the servo, move the servo according to the tilt of the controller
  if(((yServoCurrent + servoY) > 90) && ((yServoCurrent + servoY) < 175) && servoY)
    yServo.write(yServoCurrent + servoY);
}
That's it! The full sketch is included in this step. It's a lot of information to take in, but it's best to look at it one function at a time. If you are looking for a challenge, try writing a more efficient program to get the same result! You could vastly improve the program if you found an algorithm to light up the ring!
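If you want to take up that challenge, note that the tilt direction is just the angle of the (x, y) vector, which atan2 gives directly. Here is one possible algorithm, sketched in Python for clarity; the LED numbering here is an assumption and would need an offset to match the ring's actual orientation and wiring:

```python
import math

def tilt_to_pixel(x, y, num_leds=16):
    """Pick the LED nearest the tilt direction of vector (x, y).

    Returns None for no tilt (light everything green instead).
    LED 0 is assumed at angle 0, LEDs numbered counter-clockwise.
    """
    if x == 0 and y == 0:
        return None
    angle = math.atan2(y, x)          # -pi .. pi
    step = 2 * math.pi / num_leds     # angular width of one LED
    return int(round(angle / step)) % num_leds

print(tilt_to_pixel(6, 0))   # 0
print(tilt_to_pixel(0, 6))   # 4  (a quarter turn = 4 of 16 LEDs)
print(tilt_to_pixel(-6, 0))  # 8
```

This replaces the 16 if statements with a single calculation that also works for any number of LEDs.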
Step 6: Final Integration and Testing
We now have the controller and the servo setup. Let's put it all together. Start by connecting the wall wart to the breadboard's positive and negative rails using the 2.1mm female barrel jack. Connect the Arduino to the breadboard, with the pins on one side going to rows a-e and the other side going to rows f-j. Tape the laser pointer to the y servo. Then make the following connections:
Motion Controller:
- Power from motion controller -> 5V on Arduino
- GND from motion controller -> GND rail
- IN from motion controller -> A2 on Arduino
- Button leads from motion controller -> one to the 5V rail, the other to the laser's + (red) wire.
- Vcc from accelerometer -> 3V3 on Arduino
- GND from accelerometer -> GND rail
- XOUT from accelerometer -> A4 on Arduino
- YOUT from accelerometer -> A3 on Arduino
Arduino:
- Positive rail -> Vin on Arduino
- GND rail -> GND on Arduino
Regulator:
- IN pin on regulator -> Positive Rail
- GND pin on regulator -> GND rail
Servos:
- Signal wire on x servo -> D9 on Arduino
- Signal wire on y servo -> D8 on Arduino
- GND wire on x and y servo -> GND rail
- Positive (red) wire on x and y servo -> OUT pin on regulator
Almost done! Lastly, add decoupling capacitors (around 4x 22uF) to the voltage rails on the breadboard.
Step 7: Laser Painting and Future Improvements
You did it, you now have a motion controlled laser painter! You could paint a glow in the dark wall, cover your bike in glow in the dark vinyl and paint it with laser light, or drive your pets and family crazy with the motion-controlled laser pointer! Besides being an endless amount of fun to play with, it's a good exploration of accelerometers, neo-pixel rings, servos, and control systems. Try changing various parts of the code and see how it changes the project, but most importantly, remember to enjoy making a cool project!
5 Discussions
1 year ago
This is a great project that I would like to pursue. What would the next steps be to make it possible to trace the lines of an SVG file? I want to trace the outline of something like this with the laser from a certain position and then mark the trace on the walls to paint (fill in) after I am done. The laser don't need to move fast (just as fast as I can follow) but it would need to be quite smooth for curved lines. How would one do that?
2 years ago
Those patterns are awesome!
2 years ago
This is pretty cool.
2 years ago
Nice man!
Reply 2 years ago
Thanks, it was a fun project! | https://www.instructables.com/id/Laser-Painting-With-Motion-Control-and-Arduino/ | CC-MAIN-2019-18 | refinedweb | 3,180 | 69.72 |
Hello friends! Recently a tiny bug in a Linux USB device driver crashed my kernel. It's very common for bugs in device drivers to bring down the system by crashing the kernel. The reason is that, unlike the usual programs we write, drivers run in kernel space. When programs running in user space commit common programming snags like segmentation faults or invalid memory references, the kernel handles them to maintain system stability; but in the case of drivers, which run in kernel space, there is no one to monitor them, since the kernel forms the core of the OS hierarchy.
Because of the above reasons it's not safe to code in kernel space, and better not to unless there is no alternative way to achieve the purpose. In driver writing, a simple unnoticed bug in a line of code could compromise the whole system.
Recently I came across an open source project by the name of LIBUSB. At first glance I assumed LIBUSB to be a not-so-handy library to use and program with, but when I realized that with LIBUSB, Linux USB drivers can be written completely from user space, I was eager to start using it to write USB device drivers under Linux.
What is LIBUSB ???
The official site quotes that “Libusb is a C library that gives applications easy access to USB devices on many different operating systems. libusb is an open source project, the code is licensed under the GNU LESSER GENERAL PUBLIC LICENSE or later.”
Well, after realizing the importance of LIBUSB I was very curious to start off with my first USB driver using the LIBUSB libraries. So I installed the LIBUSB libraries by issuing the following command on the terminal of my Kubuntu machine:
sudo apt-get install libusb-dev
Well, the first step of installing the LIBUSB library is over; now I wrote my Hello World libusb driver.
#include <stdio.h>
#include <stdlib.h>
#include <usb.h>

int main(){
    usb_init();
    printf("Hello world!\n");
    return 0;
}
Writing the hello world LIBUSB program was a piece of cake, a trivial task 😀 But the smile didn't last long, since I couldn't find a way to compile the code 😦
When compiling the code using gcc/g++, the LIBUSB library has to be linked in using the -L option. But I couldn't find the libusb libraries under /usr/lib, where all the libraries usually reside. All the online resources I referred to stated that the libraries reside in /usr/lib.
Later I found out that the paths where the LIBUSB libraries (.a) and headers get installed, and even their names, differ between versions of LIBUSB. So the best way out was to search for where the current LIBUSB package installed its headers and library.
Since Debian-based distributions (Ubuntu, Kubuntu, Mint) use .deb packages, which get installed via the `dpkg` mechanism, I tried the following command to list the directories used by the libusb-dev package:
dpkg -L libusb-dev
Now focus on the last few lines of the output and find the paths where the library libusb.a and the header file usb.h/libusb.h are installed.
Now make sure that you add the header name in the code, and during compilation just pass the parent directory of libusb.so with the -L option of gcc/g++. Also don't forget to add the -lusb option during compilation.
gcc basiclusb.c -o basic -L/usr/lib/x86_64-linux-gnu/ -lusb
This successfully compiles the Hello World LIBUSB code to give out the executable 😀 I hope this post comes in pretty handy for those who want to start off writing USB device drivers in Linux using the LIBUSB libraries. Catch you folks with more insights on writing USB drivers under Linux using LIBUSB in the posts to come 🙂
Advanced Argument Parser for WikiMacros
Description
This plug-in provides an advanced version of the parse_args function for WikiMacros.
This function is used in WikiMacros to parse the macro arguments. This enhanced version is meant as a replacement for trac.wiki.macros.parse_args and supports several advanced options (see section #Parameters). The most important feature is support for quoting the delimiter, e.g. 'key1=val1,key2="some,text",key3=val3' will correctly return 'some,text' as the value of key2. The original parse_args function would return '"some' and handle 'text"' as a separate argument.
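The core trick, splitting on the delimiter only when outside quotes, can be sketched in a few lines of Python (a simplified illustration, not the plugin's actual implementation, which also handles the escape character and key/value parsing):

```python
def split_quoted(args, delim=',', quotechar='"'):
    """Split args on delim, but not inside quoted sections.

    Simplified sketch: no escape-character or nested-quote handling.
    """
    parts, buf, in_quotes = [], [], False
    for ch in args:
        if ch == quotechar:
            in_quotes = not in_quotes
            buf.append(ch)
        elif ch == delim and not in_quotes:
            parts.append(''.join(buf))
            buf = []
        else:
            buf.append(ch)
    parts.append(''.join(buf))
    return parts

print(split_quoted('key1=val1,key2="some,text",key3=val3'))
# ['key1=val1', 'key2="some,text"', 'key3=val3']
```

The quoted comma is kept inside its token instead of starting a new argument, which is exactly the behavior described above.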
Documentation
Definition
def parse_args (args, strict = True, multi = False, listonly = False, minlen = 0, quotechar = '"', escchar = '\\', delim = ',', delquotes = False)
Usage Example
# Instead of:
from trac.wiki.macros import parse_args
# Use:
from tracadvparseargs import parse_args

class SomeMacro(WikiMacroBase):
    def expand_macro(self, formatter, name, args):
        largs, kwargs = parse_args( args, <options> )
Parameters
- args
- The argument string; 'content' in `expand_macro`.
- strict
- Enables strict checking of keys.
- multi
- Enables folding of multiple given keys into a list.
If set to True, values of multiple given keys will be returned as list, but single given keys will return a scalar.
If set to a list, only the values of the listed keys will be returned as list, but always as list even when there is only one value.
If this list contains '*', all values are always returned as list.
- listonly
- If true, only a list is returned, no dictionary.
- minlen
- Extend returned list to given minimum length. Only used when listonly=True.
Parser parameters
- quotechar
- The quote character to be used.
- escchar
- The escape character to be used.
- delim
- The delimiter character to be used.
- delquotes
- Selects if quotes should be removed.
Contributors: | http://trac-hacks.org/wiki/AdvParseArgsPlugin?version=6 | CC-MAIN-2015-11 | refinedweb | 282 | 58.69 |
Potentiometer
You have learned a lot about digital signals and tried many projects with digital devices. Now let's learn something about analog signals. The potentiometer is a typical component that gives you analog readings. You'll connect a potentiometer to your circuit and read analog values.
Learning goals
- Learn about analog signals and distinguish them from digital signals.
- Understand the working principle of potentiometers.
- Know how to use the serial monitor.
🔸Background
What is an analog signal?
As mentioned before, the signals can be divided into two types: digital signal and analog signal. Unlike the digital signal changing among several values, the analog signal is continuous. Its voltages change smoothly with time, which means there are infinite possible values from minimum to maximum voltage (0 - 3.3V).
But the microcontrollers are digital components, how do they read the analog voltage at a specific time?
Indeed, the microcontrollers cannot directly read analog values. It needs an analog to digital converter (ADC). The ADC converts analog voltages on a pin to corresponding digital values. In this way, the microcontrollers can finally read analog voltages.
The ability to measure analog values depends on the ADC resolution. The resolution tells the count of possible values. For example, if the resolution is 2-bit, there can be 4 possible values: 0, 1, 2, 3. These values are called raw values. And if it's is 3-bit, there are 8 values (0-7).
You will not look into the details on how ADC works. That's too complex. What you need is the raw values it gives you after reading from an analog signal. These values may be not easy to understand. Then they can be mapped to voltage values between the minimum and maximum voltage.
The SwiftIO Feather board has a built-in 12-bit ADC. So there are 4096 (2^12) analog levels, from 0 to 4095. For example, if the raw value equals 0, the voltage would be 0V; if the raw value equals 4095, the voltage would be 3.3V; and 1365 corresponds to (3.3 / 4095) * 1365 = 1.1V.
Here is the equation for the conversion:
voltage = 3.3 / 4095 * raw value
As you can see, higher resolution means more possible values, so the voltage obtained will be more accurate. Fo instance, as shown in the image above, if you get a raw value of 6 with a 3-bit ADC, the voltage should be around 3.3 / 7 * 6 ≈ 2.8V. But the same voltage might be considered as 3.3V with a 2-bit ADC.
🔸New component
Potentiometer
Potentiometer (pot for short, also called knob) is a kind of variable resistor. You can rotate it to change its resistance and thus change the voltage in a circuit. It is commonly used to control the volume on audio devices, as a menu selector knob on home appliances, etc.
The potentiometer has a movable wiper inside it. As you rotate it clockwise or anticlockwise, the wiper moves with it to change the available resistance in the circuit.
Symbols: (international), (US)
As shown in the image above, it has three terminals. The resistance between two ends (a and c) represents the maximum resistance. The middle one is connected to a wiper. As you rotate the knob, the position of the wiper changes and the actual resistance in the circuit changes proportionally.
For example, if you connect the outer pin a to the ground, the other outer pin c to power, and the middle pin b to the input pin, let’s see how it works. When you turn it clockwise, the wiper moves gradually far away from a, the actual resistance between a and b in the circuit increases. And the voltage measured on pin b would increase with it, gradually to 3.3V. If you turn it anticlockwise, the result is the opposite.
🔸New concept
Voltage divider
A voltage divider circuit is commonly used to get some smaller voltage levels that are a fraction of the input voltage. Usually, there are resistors connected in series. As the current flows through the resistors, the voltage drops.
The voltage divider calculation is based on the ohm’s law. Here’s the equation: VR1 = Vin * R1/(R1+R2)
- Vin: input voltage
- VR1: voltage drop on R1
Since the two resistors are in series, the current flows through them are the same. The sum of VR1 and VR2 should equal Vin. So the ratio of R1’s resistance to the total resistance equals the ratio of voltage drop to the incoming voltage supply.
The potentiometer is a typical example of it. The whole resistor is cut into two parts by the wiper. As the wiper goes, the ratio of the resistor changes, thus the voltage drop changes accordingly.
🔸Circuit - potentiometer module
There are two potentiometers on your playgrounds. They are connected respectively to A0 and A11.
note
The circuits above are simplified for your reference.
🔸Preparation
Class
AnalogIn - this class allows you to read the analog input.
Global function
print(_:) - print the value out. You can view it on any serial monitor.
🔸Projects
1. Read input value
In this project, you will rotate the potentiometer to change the input value. You will view the voltage value on the serial monitor.
After you connect the board to the computer through the serial port, values are printed on the serial monitor window, one per second. When you turn the potentiometer, the value changes: it gradually goes to the maximum or minimum value according to the direction you twist.
Example code
// Import SwiftIO library to control input and output, and MadBoard to use the pin name.
import SwiftIO
import MadBoard
// Initialize the analog pin A0 for the potentiometer.
let knob = AnalogIn(Id.A0)
// Read the voltage value and print it out every second.
while true {
let value = knob.readVoltage()
print(value)
sleep(ms: 1000)
}
Code analysis
let value = knob.readVoltage()
Declare a constant to store the returned voltage value.
There are three different methods in
AnalogIn class to read analog input. The only difference is the forms of the return value: raw value, voltage value, or a percentage. The method
readVoltage() returns the voltage reading.
print(value)
It allows you to see the value on a serial monitor.
It is a useful tool to debug your code if there is something unexpected in your program. You could add it after each statement that changes a value. Then you could infer which step goes wrong according to the printed results.
Serial monitor
A serial monitor is like a link between your board and the computer. You could view the values transmitted between them by using the function print(_:).
When you download the code, you connect the download port on the board to the computer. Now you need to connect the serial port on the shield.
When you install the MadMachine extension for VS Code, a serial tool, Serial Port Helper, will be installed automatically. Of course, you could use any other tool that you're familiar with.
Click the icon as shown below to open the tool.
It will show all available ports. In this case, the second port is the one needed. The port name may be different for your board. If you are not sure which is the right one, you can disconnect your board and then connect again to see which port disappears and reappears.
Click the port to connect it. As you see, the red dot next to the port name changes to green. The port is successfully connected.
Check the serial setting. Click the right caret to show the settings. Set the Baud Rate to 115200.
BTW, if View Mode is in Hex Mode, you can click on it again to change to String Mode, since hex values are not easy to understand.
The serial communication is now set up. If there is data transmission, you should see the messages on OUTPUT window.
2. Control buzzer sound output
You will use a potentiometer to change the sound of the buzzer. The pitch becomes higher when you turn it clockwise and lower when you turn it anticlockwise.
Example code
// Import the SwiftIO to control input and output and the MadBoard to use the pin name.
import SwiftIO
import MadBoard
// Initialize the analog pin for the potentiometer and PWM pin for the LED.
let knob = AnalogIn(Id.A0)
let buzzer = PWMOut(Id.PWM5A)
// Read the input value in percentage.
// Then calculate the value into the frequency.
// Set the PWM with the frequency and a duty cycle.
while true {
let value = knob.readPercent()
let f = 50 + Int(1000 * value)
buzzer.set(frequency: f, dutycycle: 0.5)
sleep(ms: 20)
}
Code analysis
let value = knob.readPercent()
Here you use another method
readPercent() to get the input value represented as a percentage.
let f = 50 + Int(1000 * value)
The reading values are too small to serve as frequencies. You then map it to bigger values to get frequencies from 50 to 1050.
buzzer.set(frequency: f, dutycycle: 0.5)
The PWM pins will output signals with the specified frequencies to drive the buzzer. So when you turn the potentiometer clockwise, the buzzer pitch will increase gradually.
3. Change LED brightness
The potentiometer can also be used to change the brightness of LEDs. In this project, you'll turn the potentiometer to brighten and dim an LED.
Example code
// Import SwiftIO library to control input and output, and MadBoard to use the pin name.
import SwiftIO
import MadBoard
// Initialize the analog pin for the potentiometer and PWM pin for the LED.
let knob = AnalogIn(Id.A0)
let led = PWMOut(Id.PWM4A)
// Set the PWM to control the LED.
led.set(frequency: 1000, dutycycle: 0)
// Read the input value.
// The value is represented in percentage, while the duty cycle is also between 0 and 1,
// so you can directly use the reading value to set the PWM.
while true {
let dutycycle = knob.readPercent()
led.setDutycycle(dutycycle)
sleep(ms: 20)
} | https://docs.madmachine.io/tutorials/swiftio-circuit-playgrounds/modules/potentiometer | CC-MAIN-2022-21 | refinedweb | 1,664 | 67.86 |
Introduction: ESP8266-01 Temp/RH Sensor Readings Over JSON/MQTT
Overview
As part of my Home Automation I wanted to monitor various Rooms mostly temperature, however for a couple of rooms I wanted to monitor relative humidity as well so I purchased a couple of DHT22 sensors which will provide both in a single package. This room would go on to form part of my energy saving project with respect to the average house temperature, for this to work I needed to send the readings to my Domoticz Home Automation system so they can be logged but also sent to other interest clients to use the data, this meant I had to send the data in JSON format in accordance to Domoticz' API over MQTT IoT Protocol via the WiFi connection.
What is JSON?
JSON or Java Script Object Notation is a way structuring text to be exchanged as data. So rather just sending a temperature value to a device you could send an id and a time stamp all in the same message as object, this is useful for home automation systems when there are many temperature sensors. For Example in this project I use the message : {"idx":11, "nvalue":0, "svalue":"19.70;44.00;0"} Object "idx", is the unique id of this sensor in my system. Object "nvalue" isn't used and needs a default 0. Object "svalue" is the temp, hum, and humidity stat values in the prescribed order. The home automation system will receive this and parse the object into the database.
Data presented in the Domoticz Dashboard
What is MQTT?
Taken from MQTT.org : 'MQTT is a machine-to-machine (M2M)/"Internet of Things" connectivity protocol. It was designed as an extremely lightweight publish/subscribe messaging transport.' I use it for the protocol that is end messages over my wifi network. The Raspberry Pi Zero running my Home Automation System is the MQTT broker for the network and runs 'mosquitto' for this service.
Step 1: Building the Sensor
As always I build the prototype on a breadboard and soak test before the final soldering into an install-able solution. Apologies I have no pictures of these steps!! The ESP8266-01 requires a 3.3v power supply so I use a step down converter which has a committed 5vdc Power Supply unit connected, my installation location has local socket outlets so I can plug this sensor in without considerations of powering it too much (for example do I use batteries? how long will they last?). The CH_PD pin of the ESP needs to be pulled high for the ESP to start running so I pulled this via a 10 Kohm resistor. The DHT22 requires a power supply of 3.3-5vdc I chose to power from my 5vdc side additionally a pull-up resistor is advised across the VCC and data pin so I placed a 10 Kohm resistor in this place.
Step 2: Coding
The software is written in Arduino IDE and designed to connect to my local MQTT broker on startup then utilizes Chrono.h library to read and send the temperature and humidity readings once every 60 seconds.
Below is a summary of some aspects of the attached code:
#include "ESPHelper.h"
#include "Chrono.h"
#include "ArduinoJson.h"
#include "DHT.h"
Add Appropriate Libraries
Chrono myChrono(Chrono::SECONDS);
This sets up the Chrono library to work in seconds, I don't need any scaling below this. used in the initial setup
myChrono.restart();
This restarts the timer at 0 seconds used in the void setup()
if (myChrono.hasPassed(60, true)) {
Used in the void loop() after 60 seconds run the code in the if statement, the addition of the true reset the timer to 0 when 60 seconds has elapsed.
#define DHTPIN 0
DHT22 Pin Definition
#define DHTTYPE DHT22
The library can support other sensors eg DHT11 so a simple definition to tell the library what sensor I am using
DHT dht(DHTPIN, DHTTYPE);
supply the above info to the library
dht.begin();
used in void setup(); to start the DHT software
float h = dht.readHumidity();
take a Humidity reading and place in the float variable 'h' float
hum = h - HCal;
I noticed the readings needed calibrating so I have an offset that I can change, the result being the value I send for use
float t = dht.readTemperature();
take a Temperature reading and place in the float variable 't'.
float tmp = t - TCal;
again I noticed the readings needed calibrating so I have an offset that I can change, the result being the value I send for use 'ArduinoJson.h'
String mystring1;
mystring1 = String(hum);
String mystring2;
mystring2 = String(tmp);
As mentioned above the JSON message needs to be text so I convert the float values into strings that can be added the message.
if (mystring1 != mystring2){
LastKnownT = mystring2;
LastKnownH = mystring1; }
else if (mystring1 == mystring2){
Serial.println("Sensor Error");
mystring2 = LastKnownT;
mystring1 = LastKnownH; }
I put it this piece in case of sensor errors so I don't send 'NaN' and other unreasonable values to the log. When the sensor is missing the values returned are NaN so if both temp and humidity equals NaN then I send the last good value else if it does not equal 'Nan' then the last good reading is sent.
StaticJsonBuffer<300> JSONbuffer;
firstly create a buffer to store the message
JsonObject& JSONencoder = JSONbuffer.createObject();
JSONencoder["idx"] = 11;
Start adding the objects with text associations, the idx number is the devices individual ID on my home automation system.
JSONencoder["nvalue"] = 0;
JSONencoder["svalue"] = LastKnownT+";"+LastKnownH+";"+HumStat;
char JSONmessageBuffer[100];
JSONencoder.printTo(JSONmessageBuffer, sizeof(JSONmessageBuffer));
The above builds the JSON message and adds the last known readings and creates a JSON message like this: {"idx":11, "nvalue":0, "svalue":"19.70;44.00;0"}
Finally, I send it to the MQTT broker in the topic "domoticz/in" :
myESP.publish(svrtopic, JSONmessageBuffer, false);
Attachments
Participated in the
Microcontroller Contest
Be the First to Share
Recommendations
2 Comments
2 years ago
This would be great for tracking what the temperature in your house actually does. Also you could use as many sensors as you want so that you could compare different rooms.
Reply 2 years ago
that’s right, I have three upto now, this one was the most interesting to show on here. I need some more and hope to create a heat map for the house at some point. Also plan to make a battery one as getting power to these can be challenging. | https://www.instructables.com/ESP8266-01-TempRH-Sensor-Readings-Over-JSONMQTT/ | CC-MAIN-2021-10 | refinedweb | 1,088 | 60.35 |
Automating AWS Lambda Deployments Using Bitbucket Pipelines and Bitbucket Pipes
Automating AWS Lambda Deployments Using Bitbucket Pipelines and Bitbucket Pipes
Check out how you can integrate your favorite vendor-supplied pipeline using Bitbucket Pipes.
Join the DZone community and get the full member experience.Join For Free
Today we’ll talk about Bitbucket Pipes. It is a new feature in Bitbucket which can automate Lambda deployments on AWS.
So before we get our hands dirty, here’s a basic overview.
Lambda is the AWS-managed service running Functions-as-a-Service. Lambdas work like other managed services on AWS. We define a Python/Node/Java function and an API endpoint, and upload it to the Lambda service. Our function then handles the request-response cycle. AWS manages the underlying infrastructure resources for our function. This frees up time to focus on building our applications and not managing our infrastructure.
Bitbucket Pipelines is the Continuous Integration/Continuous Delivery pipeline integrated into Bitbucket. It works by running a sequence of steps after we merge or review code. Bitbucket executes these steps in an isolated Docker container of our choice. Here is my past tutorial on Pipelines deployments.
Bitbucket Pipes is the new feature we’ll test drive today. It is a marketplace of third-party integrations. A Pipe is a parameterized Docker which contains ready-to-use code. It will look something like this:
- pipe: <vendor>/<some-pipe> variables: variable_1: value_1 variable_2: value_2 variable_3: value_3
Pipes by AWS, Google Cloud, SonarCube, Slack, and others are available already. They are a way to abstract away repeated steps. This makes code reviews easier and deployments more reliable. And it lets us focus on what is being done rather than how it is being done. If a third-party pipe doesn’t work for you, you can even write your own custom pipe.
These are some of the providers that provide Pipes today:
Goal: Deploy a Lambda Using Pipes
So our goal today is as follows: we want to deploy a test Lambda function using the new Pipes feature.
To do this, we’ll need to:
- Create a test function.
- Configure AWS credentials for Lambda deployments.
- Configure credentials in Bitbucket.
- Write our pipelines file which will use our credentials and a Pipe to deploy to AWS.
Step 1: Create a Test Function
Let’s start with a basic test function. Create a new repo, and add a new file called
lambda_function.py with the following contents:
def lambda_handler(a, b): return "It works :)"
Step 2: Configure AWS Credentials
We’ll need an IAM user with the
AWSLambdaFullAccess managed policy.
Add this user’s access and secret keys to the
Repository variables of the repo. Make sure to mask and encrypt these values.
Add the keys either at the Account level, the Deployment level, or the Repository level. You can find more information about these here.
Step 3: Create Our Pipelines file
Now create a
bitbucket-pipelines.yml file and add the following:
pipelines: default: - step: name: Build and package script: - apt-get update && apt-get install -y zip - zip code.zip lambda_function.py artifacts: - code.zip - step: name: Update Lambda code script: - pipe: atlassian/aws-lambda-deploy:0.2.1 variables: AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} AWS_DEFAULT_REGION: 'us-east-1' FUNCTION_NAME: 'my-lambda-function' COMMAND: 'update' ZIP_FILE: 'code.zip'
The first
step: in the pipeline will package our Python function in a zip file and pass it as an artifact to the next step.
The second
step: is where the magic happens.
atlassian/aws-lambda-deploy:0.2.1 is a Dockerized Pipe for deploying Lambdas. Its source code can be found here. We call this Pipe with six parameters: our AWS credentials, the region where we want to deploy, the name of our Lambda function, the command we want to execute, and the name of our packaged artifact.
Step 4: Executing Our Deployment
Committing the above changes in our repo will trigger a pipeline for this deployment. If all goes well, we should see the following:
Wrapping It Up
With the above pipeline ready, we can use other Bitbucket features to improve it. Features like merge checks, branch permissions, and deployment targets can make deployments smoother. We can also tighten the IAM permissions to ensure it has access to only the resources it needs.
Using Pipes in this way has the following advantages:
- They simplify pipeline creation and abstract away repeating details. Just paste in a vendor-supplied pipeline, pass in your parameters, and that’s it!
- Code reviews become easier. Ready-to-use Pipes can abstract away complex workflows.
- Pipes use semantic versioning, so we can lock the Pipe version to major or minor versions as we choose. Changing a Pipe version can go through a PR process, making updates safer.
- Pipes can even send Slack and PagerDuty alerts after deployments.
And that’s all. I hope you’ve enjoyed this demo. You can find more resources below.
Happy coding!
Resources
Published at DZone with permission of Ayush Sharma , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/automating-aws-lambda-deployments-using-bitbucket?fromrel=true | CC-MAIN-2020-40 | refinedweb | 862 | 58.38 |
What
awt
Java AWT Applet example how to display data using JDBC in awt/applet
java - Swing AWT
What is Java Swing AWT What is Java Swing AWT
Java AWT
Java AWT What is the relationship between the Canvas class and the Graphics class
awt list item* - Swing AWT
information.
Thanks...awt list item* how do i make an item inside my listitem... frame=new Frame("Add list");
Label label=new Label("What is your Choice package tutorial
Java AWT Package
In Java, Abstract Window Toolkit(AWT) is a platform independent
widget toolkit for windowing, graphics, and user-interface.
As the Java... is used by each AWT component to display itself on the
screen. For example
what is the example of wifi in java
what is the example of wifi in java i want wi fi programs codings
What is AWT in java
What is AWT in java
.../api/java/awt/package-summary.html... available with JDK. AWT stands for Abstract
Windowing Toolkit. It contains all classes
Java Swings problem - Swing AWT
Java Swings problem Sir, I am facing a problem in JSplitPane. I want... be short. As a whole, what I want is a divider displaying only partial in split pane. For example, if the split pane is of dimension (0,0,100, 400), then divider - Program - Swing AWT
Java Program A java Program that send message from one computer to another and what are the requirements
Java - Swing AWT
Java Hi friend,read for more information,
query - Swing AWT
java swing awt thread query Hi, I am just looking for a simple example of Java Swing
AWT code for popUpmenu - Swing AWT
for more information. code for popUpmenu Respected Sir/Madam,
I am writing a program in JAVA/AWT.My requirement is, a Form consists of a "TextBox" and a "Button - Swing AWT
:
Thanks...java swing how to add image in JPanel in Swing? Hi Friend,
Try the following code:
import java.awt.*;
import java.awt.image.
What is a vector in Java? Explain with example.
What is a vector in Java? Explain with example. What is a vector in Java? Explain with example.
Hi,
The Vector is a collect of Object... related to Vector in Java program - Swing AWT
java what will be the code for handling button event in swing? Hi Friend,
Try the following code:
import java.awt.*;
import javax.swing.*;
import java.awt.event.*;
class ButtonEvent extends JFrame
swings - Swing AWT
swings What is Java Swing Technologies? Hi friend,import javax.swing.*;import java.awt.*;import javax.swing.JFrame;public class...://
java awt calender
java awt calender java awt code for calender to include beside a textfield
JAVA AWT BASE PROJECT
JAVA AWT BASE PROJECT suggest meaningful java AWT-base project
JList - Swing AWT
JList May i know how to add single items to JList. What is the method for that? You kindly explain with an example. Expecting solution as early... example");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE
Create a Container in Java awt
Create a Container in Java awt
Introduction
This program illustrates you how to create...; }
}
Download this example
Java AWT
Java AWT How can the Checkbox class be used to create a radio button
SWINGS - Swing AWT
more information,Examples and Tutorials on Swing,AWT visit to :
what is meant by persistence in java???? give a programming example
what is meant by persistence in java???? give a programming example what is meant by persistence in java???? give a programming example
awt jdbc
awt jdbc programm in java to accept the details of doctor (dno,dname,salary)user & insert it into the database(use prerparedstatement class&awt
Java Swings-awt - Swing AWT
Java Swings-awt Hi,
Thanks for posting the Answer... I need to design a tool Bar which looks like a Formating toolbar in MS-Office Winword(Standard & Formating) tool Bar.
Please help me... Thanks in Advance
JFrame Components Printing - Swing AWT
...
but i go through the link that you have specified
and downloaded the codes and compiled it got...)
at java.awt.EventDispatchThread.run(Unknown Source)
so what can i do to avoid
JFrame components printing - Swing AWT
...
but i go through the link that you have specified
and downloaded the codes and compiled it got...)
at java.awt.EventDispatchThread.run(Unknown Source)
so what can i do to avoid and function it properly
JTable Cell Validation? - Swing AWT
://
Thanks it's not exactly...JTable Cell Validation? hi there
please i want a simple example...(table);
JLabel label=new JLabel("JTable validation Example",JLabel.CENTER:
What is composition? - Java Beginners
What is composition? Hi,
What is composition? Give example of composition.
Thanks
Program for Calculator - Swing AWT
Program for Calculator write a program for calculator? Hi Friend,
Please visit the following link:
Hope that it will be helpful
What is Java Platform
As a beginner you must first understand the about Java Platform and to get
the answer of the frequently asked question "What is Java Platform?". The Java Platform
is actually responsible for running the Java program. This article
unicode - Swing AWT
........when i do so and print the fetched values in java control (JTextArea) or IN doc or RTF or WEB page
it get the UNICODE as ?????
what can i do so
Language JAVA
platform XP
database SQL Server and MS ACcess
What is Abstraction - Java Beginners
What is Abstraction What is abstraction in java? How is it used in java..please give an example also Hi Friend,
Abstraction is nothing... of a particular object or a concept and hide the other information.For example, a car
What is Locale - Java Beginners
What is Locale Hi,
What is Locale? Write example to show the use... the following links:
scrolling a drawing..... - Swing AWT
information.
What is singleton? - Java Beginners
What is singleton? Hi,
I want to know more about the singleton in Java. Please explain me what is singleton in Java? Give me example of Singleton class.
Thanks
hello,
Singleton is a concept in which you
java
java what is procedure to run client and server in java(awt
what is a indentifier - Java Beginners
what is a indentifier What is an identifier and what is the definition? An identifier is an unlimited-length sequence of Java letters and Java digits, the first of which must be a Java letter. An identifier cannot
java - Swing AWT
java How can my java program run a non-java application. Will it be possible to invoke a gui application using java
DrawingCircle - Swing AWT
:
Thanks
Java Program - Swing AWT
Java Program A Java Program that display image on ImageIcon after selecting an image from the JFileChooser
Java Program - Swing AWT
Java Program A Java Program to send message from one computer to another....... | http://www.roseindia.net/tutorialhelp/comment/47064 | CC-MAIN-2014-15 | refinedweb | 1,120 | 56.66 |
Re: Joining Domain Problem
From: Colin Nash [MVP] (cnash-REMOVETHIS-_at_mvps.org)
Date: 02/11/04
- ]
Date: Tue, 10 Feb 2004 23:17:46 -0500
Of course, in order for that to work you would need to do some kind of
routing on the Pro box because of your weird topology. Internet Connection
Sharing would maybe work I guess but you could save yourself a headache by
just hooking them both into the router's switch. It's all firewalled behind
the router anyway right?
"Colin Nash [MVP]" <[email protected]> wrote in message
news:[email protected]...
> If the Pro machine is getting an IP from the router's DHCP server, it is
> probably being passed the DNS settings that come from your ISP. This
won't
> be able to resolve the address of the domain controller.
>
> Either configure the router to pass the IP address of your Server as the
DNS
> server, or disable DHCP on the router and install it on the server.
>
> Follow through the steps on that site and completely take your external
DNS
> out of the picture until the very very end. You will then want your
> server to forward any requests outside of your namespace to your ISP's DNS
> server.
>
> Of course I've basically just restated what that article says but you
didn't
> mention having tried any of those steps... ;)
>
> --
> Colin Nash
> Microsoft MVP
> Windows Printing/Imaging/Hardware
>
>
> "Ice Sickle" <[email protected]> wrote in message
> news:[email protected]...
> > My setup is Windows 2000 Server connected to a machine(crossover cable)
> > running Windows 2000 Pro which has 2 NICs one of which is connected to
the
> > router and then the internet. Pro has dynamic IP and Server has static
> > 192.168.0.2 address. I can see my Pro machine from Server and can even
> > access it from AD administation snapin. . I can see what disks I have
and
> so
> > on. I can see the domain and ping the server from pro machine as well.
> > However when I try to join the domain from Pro machine I get error
> described
> > here:
> >
> >
> > I've uninstalled firewall so that's not an issue.
> >
> > since I have this somewhat weird setup maybe that's why my pro machine
> can't
> > connect. I'm thinking that maybe it's trying to lookup the domain
through
> > the wrong NIC card--the one that is connected to the router instead of
the
> > one that is connected to the Server machine.
> >
> >
>
>
- ] | http://www.tech-archive.net/Archive/Win2000/microsoft.public.win2000.active_directory/2004-02/0773.html | crawl-002 | refinedweb | 424 | 70.94 |
A function in Python can call itself. That’s what recursion is. And it can be pretty useful in many scenarios.
The common way to explain recursion is by using the factorial calculation.
The factorial of a number is the number
n mutiplied by
n-1, multiplied by
n-2… and so on, until reaching the number
1:
3! = 3 * 2 * 1 = 6 4! = 4 * 3 * 2 * 1 = 24 5! = 5 * 4 * 3 * 2 * 1 = 120
Using recursion we can write a function that calculates the factorial of any number:
def factorial(n): if n == 1: return 1 return n * factorial(n-1) print(factorial(3)) # 6 print(factorial(4)) # 24 print(factorial(5)) # 120
If inside the
factorial() function you call
factorial(n) instead of
factorial(n-1), you are going to cause an infinite recursion. Python by default will halt recursions at 1000 calls, and when this limit is reached, you will get a
RecursionError error.
Recursion is helpful in many places, and it helps us simplify our code when there’s no other optimal way to do it, so it’s good to know this technique.
Download my free Python Handbook
More python tutorials:
- Debugging Python
- Introduction to Python
- Python Loops
- Django in VS Code, fix the error `Unable to import django.db`
- Python Objects
- Python Operator Overloading
- Python Lambda Functions
- Python Nested Functions
- How to check if a variable is a number in Python | https://flaviocopes.com/python-recursion/ | CC-MAIN-2021-17 | refinedweb | 238 | 52.9 |
Announcing Microsoft ASP.NET WebHooks V1 RTM
We are very happy to announce ASP.NET WebHooks V1 RTM making it easy to both send and receive WebHooks with ASP.NET.
WebHooks provide a simple pub/sub model for wiring together Web APIs and services with your code. A WebHook can be used to get notified when a file has changed in Dropbox, a code change has been committed to GitHub, a payment has been initiated in PayPal, a card has been created in Trello, and much more — the possibilities are endless! When subscribing, you provide a callback URI where you want to be notified. When an event occurs, an HTTP POST request is sent to your callback URI with information about what happened so that your Web app can act accordingly. WebHooks happen without polling and with no need to hold open a network connection while waiting for notifications.
Because of their simplicity, WebHooks are already exposed by most popular services and Web APIs. To help managing WebHooks,,..
In addition to hosting your own WebHook server, ASP.NET WebHooks are part of Azure Functions where you can process WebHooks without hosting or managing your own server! You can even go further and host an Azure Bot Service using Microsoft Bot Framework for writing cool bots talking to your customers!
The WebHook code targets ASP.NET Web API 2 and ASP.NET MVC 5, is available as Open Source on GitHub, and as Nuget packages.
A port to the ASP.NET Core is being planned so please stay tuned!
Receiving WebHooks
Dealing with WebHooks depends on who the sender is. Sometimes there are additional steps registering a WebHook verifying that the subscriber is really listening. Often the security model varies quite a bit. Some WebHooks provide a push-to-pull model where the HTTP POST request only contains a reference to the event information which is then to be retrieved independently.
The purpose of Microsoft ASP.NET WebHooks is to make it both simpler and more consistent to wire up your API without spending a lot of time figuring out how to handle any WebHook variant:
A WebHook handler is where you process the incoming WebHook. Here is a sample handler illustrating the basic model. No registration is necessary – it will automatically get picked up and called:
public class MyHandler : WebHookHandler
{
// The ExecuteAsync method is where to process the WebHook data regardless of receiver
public override Task ExecuteAsync(string receiver, WebHookHandlerContext context)
{
// Get the event type
string action = context.Actions.First();
// Extract the WebHook data as JSON or any other type as you wish
JObject data = context.GetDataOrDefault<JObject>();
return Task.FromResult(true);
}
}
Finally, we want to ensure that we only receive HTTP requests from the intended party. Most WebHook providers use a shared secret which is created as part of subscribing for events. The receiver uses this shared secret to validate that the request comes from the intended party. It can be provided by setting an application setting in the Web.config file, or better yet, configured through the Azure portal or even retrieved from Azure Key Vault.
For more information about receiving WebHooks and lots of samples, please see these resources:
-.
Sending WebHooks
Sending WebHooks is slightly more involved in that there are more things to keep track of. To support other APIs registering for WebHooks from your ASP.NET application, we need to provide support for:
- Exposing which events subscribers can subscribe to, for example Item Created and Item Deleted;
- Managing subscribers and their registered WebHooks which includes persisting them so that they don’t disappear;
- Handling per-user events in the system and determine which WebHooks should get fired so that WebHooks go to the correct receivers. For example, if user A caused an Item Created event to fire then determine which WebHooks registered by user A should be sent. We don’t want events for user A to be sent to user B.
- Sending WebHooks to receivers with matching WebHook registrations. typically triggered as a result of incoming HTTP requests.
It’s also possible to generate WebHooks from inside a WebJob. This enables you to send WebHooks not just as a result of incoming HTTP requests but also as a result of messages being sent on a queue, a blob being created, or anything else that can trigger a WebJob:
The following resources provide details about building support for sending WebHooks as well as samples:
-.
Thanks to all the feedback and comments throughout the development process, it is very much appreciated!
Have fun!
Henrik
In version 1.2.1 there is an issue with Microsoft.AspNet.WebHooks.AzureWebHookDequeueManager.SendWebHookWorkItemsAsync.
Following line HttpResponseMessage[] array = await Task.WhenAll( list );
Is not wrapped inside try – catch and therefore failure in any request is handeled elsewhere and reported as failure in reading the queue. Moreover failure on the call will result in endless retry and the message will not be removed from the message queue. | https://devblogs.microsoft.com/aspnet/introducing-microsoft-asp-net-webhooks-preview-2/ | CC-MAIN-2020-29 | refinedweb | 825 | 55.03 |
Setting up Hotkey to Spawn Ped Model
I would like to set up a hotkey to spawn a specific ped model without having to go through a trainer. Could someone point me in the right direction on how to go about this via tutorial or personal knowledge? Thanks in advance to anyone that can answer.
Someone might have a different perspective on how to solve this but here is mine
Use ScriptHookV or RagePluginHook
create a script and assign a hotkey to a ped model so every time you press that key... that ped spawns
Here is an example using RagePluginHook to spawn a random ped. Your code would look like this:
CODE PROVIDED BY TitanSloth || It's a ped generator
internal static class EntryPoint
{
    public static void Main()
    {
        Vector3 spawnPoint;
        Ped Player = Game.LocalPlayer.Character;
        Ped myPed;
        Blip myBlip;

        while (true)
        {
            bool player_Has_Weapon = false;

            // Sets the spawnPoint
            spawnPoint = Player.GetOffsetPosition(Vector3.RelativeFront * 10);

            if (Game.IsKeyDown(Keys.F))
            {
                myPed = new Ped(spawnPoint);
                myBlip = myPed.AttachBlip();

                if (!myPed.Exists()) break;
                if (!myBlip.Exists()) break;

                // While the player DOES NOT have the weapon...
                while (!player_Has_Weapon)
                {
                    // Give the Ped a weapon
                    myPed.Inventory.GiveNewWeapon("WEAPON_ADVANCEDRIFLE", 150, true);

                    // Set this to true
                    player_Has_Weapon = true;

                    if (player_Has_Weapon)
                    {
                        break;
                    }
                }

                myPed.RelationshipGroup = new RelationshipGroup("ATTACKER");
                Game.SetRelationshipBetweenRelationshipGroups("ATTACKER", "PLAYER", Relationship.Hate);
                myPed.Tasks.FightAgainstClosestHatedTarget(20f);
            }

            GameFiber.Yield();
        }
    }
}
Thanks for responding. Would this script spawn the model as a playable character, or as a bodyguard/attacker? Trying to replace the player.
@NotOfTheWorld That code spawns an attacker.
This is how you would change the model of the player using Script Hook V .NET:
using GTA;
using System;
using System.Windows.Forms;

public class ChangePlrModel : Script
{
    Keys changeModelKey = Keys.E;
    Model modelToChangeTo = "Abigail";

    public ChangePlrModel()
    {
        KeyDown += ChangePlrModel_KeyDown;
    }

    private void ChangePlrModel_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.KeyCode == changeModelKey)
        {
            Game.Player.ChangeModel(modelToChangeTo);
        }
    }
}
To use this, install Script Hook V .NET, create a file named "ChangePlrModel.cs" in the scripts folder in your GTA V directory, then copy the code I provided into that file and save. Edit the model/key to your liking.
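A side note on the model string: the game looks models up by a Jenkins one-at-a-time ("joaat") hash of the lowercased name, which is why "Abigail" and "abigail" both work. Here is a standalone sketch of that hash (plain C#, no GTA references, so you can run it outside the game):

```csharp
using System;

public static class GtaHash
{
    // Jenkins one-at-a-time hash, the same scheme GTA V's GET_HASH_KEY uses
    // for model names. The input is lowercased first, so lookups are
    // case-insensitive.
    public static uint Joaat(string name)
    {
        uint hash = 0;
        foreach (char c in name.ToLowerInvariant())
        {
            hash += (byte)c;
            hash += hash << 10;
            hash ^= hash >> 6;
        }
        hash += hash << 3;
        hash ^= hash >> 11;
        hash += hash << 15;
        return hash;
    }
}
```

For example, Joaat("Adder") and Joaat("adder") both come out to 0xB779A091, the well-known hash of the Adder. If a spawn silently does nothing, one thing worth checking is whether the name you typed actually hashes to a real model.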
- zeroxzerotwo
@zeroxzerotwo It spawns an attacker, not a bodyguard.
@Jitnaught said in Setting up Hotkey to Spawn Ped Model:
@zeroxzerotwo It spawns an attacker, not a bodyguard.
That code you provided, is that Script Hook V or RagePluginHook?
@zeroxzerotwo Script Hook V .NET.
@NotOfTheWorld Yeah so use his code and compile it with Script Hook V in Visual Studio and you don't need to use a mod menu
@zeroxzerotwo I already provided instructions on how to use the code. You don't need to compile it to use it
?
@zeroxzerotwo SHVDN automatically compiles the code on startup; that way you don't need to install Visual Studio to use the mod. The lag is no different than if you compiled it with Visual Studio.
@Jitnaught this is exactly what I was looking for, Thank you!
And If I was to use zeroxzerotwo's code to spawn a ped, what should I name It, and should I save it as a .cs file as well?
@NotOfTheWorld That code is for RagePluginHook. I don't know how their system works so I have no idea if you can just save it to a CS file or if you have to compile it with Visual Studio
@Jitnaught this thread gave me the idea that I'd like to be able to make a fish appear in front of the player with a hot key. Would be great if it was as easy as putting that cs file together, holy cow!
@NotOfTheWorld This should do what you want (SHVDN script, save it as CS file in scripts folder):
using GTA; using GTA.Math; using System; using System.Windows.Forms; public class CreatePed : Script { Keys createPedKey = Keys.E; Model pedModel = "Fish"; public CreatePed() { KeyDown += CreatePed_KeyDown; } private void CreatePed_KeyDown(object sender, KeyEventArgs e) { if (e.KeyCode == createPedKey) { Vector3 position = Game.Player.Character.GetOffsetInWorldCoords(new Vector3(0f, 2f, 0.5f)); World.CreatePed(pedModel, position); } } }
Title it CreatePed.cs?
@NotOfTheWorld Sure, although you can name it anything you like.
@Jitnaught Well, Can't get it to work, LOL, but you solved the original question I had though, so thank you.
@NotOfTheWorld What does your ScriptHookVDotNet.log file say? I just wrote that code in notepad so I may have messed something up.
[23:19:54] [DEBUG] Created script domain 'ScriptDomain_8AE9C2E6' with v2.10.3.
[23:19:54] [DEBUG] Loading scripts from 'C:\Program Files\Rockstar Games\Grand Theft Auto V\scripts' into script domain 'ScriptDomain_8AE9C2E6' ...
[23:19:55] [DEBUG] Successfully compiled 'ChangePlrModel.cs'.
[23:19:55] [DEBUG] Found 1 script(s) in 'ChangePlrModel.cs'.
[23:19:55] [DEBUG] Successfully compiled 'CreatePed.cs'.
[23:19:55] [DEBUG] Found 1 script(s) in 'CreatePed.cs'.
@NotOfTheWorld That's the log after pressing E to test right?
- NotOfTheWorld
@Jitnaught Changed key to X but yeah, does nothing. Aiming to do what happens at the end of this video(3:30secs) that I made but with a hotkey instead of a spooner:
@Jitnaught Holy Moley!! Got it to work! The Ped Model name for the fish was A_C_Fish. It works perfect now, Thank you!
@NotOfTheWorld said in Setting up Hotkey to Spawn Ped Model:
@Jitnaught Holy Moley!! Got it to work! The Ped Model name for the fish was A_C_Fish. It works perfect now, Thank you!
glad everything worked out for you | https://forums.gta5-mods.com/topic/17239/setting-up-hotkey-to-spawn-ped-model | CC-MAIN-2018-30 | refinedweb | 889 | 68.57 |
Just use GIT and make regular backups to external media of your choice like everybody should do it.It's so easy these days. 16GB USB sticks (40MB/s) cost almost nothing, external 2.5'' disks are almost free considering the storage space they offer. Unless we're talking about backing up a video collection.If the AVR chips had 'quasi unlimited' storage, I wouldn't oppose as much though. But I would never rely on the source code being stored on the chip as well. Murphy's Law would get you anyway, trust me on that. It's much better to keep your valuable code in good condition and safe somewhere else.
Don't get me wrong, I like the concept of storing sketches in the device, it is an inspiring problem and got me thinking. It has several advantages (especially finding the right code as you pointed out) no doubt about that, but it is not a final solution. This problem is called deployment management. [very recognizable]
As a developer I need a solution that I can trust, it should work every time I want to. Storing sketches in the Arduino is not allways possible (due to size) and therefor I cannot rely on it.
So if one wants to spend energy in solving deployment management for Arduino, we should think of a way that is:- transparant for the programmer - (don't do things that can be automated --> KISS)- configurable (switch on/off etc)- works for all deployments- robust, reliable- and so on.Storing a sketch in an Arduino does not work allways, as it fails on a crucial point imho SIZE. That doesn't mean it has no value, on contrary it can be very usefull as you pointed out, I am just stating it isn't reliable enough. Storing a reference to the source (etc) takes 16 bytes (UUID) which is 0.05% of the available memory and independant of sketch size. And yes there will be applications that don't even have these 16 bytes free. A real final solution should even work for this case. That means that the reference to the code should be stored in the Arduino but at the same time can't be stored in the Arduino. This is a typical TRIZ contradiction.Solving that contradiction => the binary code itself is the reference (mmm 32K keys, no good...)making 32K keys more practical: after uploading a sketch, AVRdude reads the complete memory back and makes a SHA1 hash to be used as reference to the sourcecode. So if one arduino comes back through the mail one can read its memory back, do the SHA1 hash and one has the reference to the sources. That said, this reference will not be the only way to access the sources, full text indexing of all your sketches is very well possible these days, so such things need to be in the final solution too. The complexity to realize SketchWithin, SHA1 and UUID is comparable. The differences between the SHA1 and UUID versions are- SHA1 will generate a new code for every source iteration, where with the UUID this is optional- UUID uses (at least) 16 bytes, SHA1 uses 0 bytes of Arduino memory- SHA1 will detect image tampering, UUID will not (key lost??)- UUID will probably be faster than SHA1.My final choice would be using UUID, and the SHA1 at release moments. 
The cases I need the last 16 bytes I should really consider an new larger platform. EPILOGUE:In short storing a sketch in the Arduino is usefull in many cases as you pointed out. However it won't solve the "what version of code have I deployed problem" in all cases. The SHA1 and UUID solutions will perform better especially for large sketches. My choice would be using UUID all the time, and the SHA1 at release moments. The cases I need the last 16 bytes I should consider an new platform Again thanks for this inspiration,
volatile char UUID[] = "<UUID=da51a9f0-9a49-11e0-aa80-0800200c9a66>";char version[] = "UUID_TEST 0.04";void setup(){ Serial.begin(115200); Serial.println(version); UUID[0] = UUID[0];}void loop(){}
volatile uint8_t UUID[] = { 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65 }; // character e 16 times char version[] = "UUID_TEST 0.06";void setup(){ Serial.begin(115200); Serial.println(version); UUID[0] = UUID[0];}void loop(){}
#include <avr/pgmspace.h>//volatile char UUID[] = "<UUID=da51a9f0-9a49-11e0-aa80-0800200c9a66>";volatile uint8_t UUID[] PROGMEM = { 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65, 0x65 }; char version[] = "UUID_TEST 0.08";void setup(){ Serial.begin(115200); Serial.println(version); uint8_t x = UUID[0];}void loop(){}
char version[] = "UUID_TEST 0.10";void setup(){ asm volatile( ".cseg" // use code segment "uuid: .byte 101,102,101,102,101,102,101,102,101,102,101,102,101,102,101,102" ); Serial.begin(115200); Serial.println(version);}void loop(){}
avr-objcopy -v -I binary -O elf32-avr -B avr mysketch.pde sourcecode.o
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=64193.msg469048 | CC-MAIN-2016-40 | refinedweb | 877 | 72.76 |
If.)
Does this do anything that normal version control doesn’t? For instance, I put all my project files in <a href="">Subversion</a>, and commit whenever I’ve made a set of changes. If I make a mistake, I can roll back to any previous version.
Sorry for the munged-up link. That should have been:
Windows XP also has a Volume Shadow Copy service, but I can’t find any UI to configure it. Does anyone know how to configure it, or if it supports the file recovery functionality Raymond is talking about?
Shadow Copies are transparent to the end user. They are more like mounting a Subversion repository via WebDAV.
With a version control system, you have check out and commit files.
What’s the motivation for supporting only network drives?
I always found *NIX solutions to versioning to be extremely over complex and lack good GUI intergration… It is almost like trying to use FTP with an extension to do your versioning.
Shadow copies (in this form) seems a lot more natural and human friendly, nobody needs to worry about hitting commit, thier files are just backed up for them.
Jonas Grumby: actually it works wonderfully on local drives too, except the GUI doesn’t support it
But if you 1) know the exact device name for the snapshot (each snapshot acts like its own read-only volume; device names are in the form "DeviceHarddiskVolumeShadowCopyX", X being a progressive number), 2) have a tool to create arbitrary device names (shadow copies aren’t registered with the mount manager, so you can’t use mountvol) and (optionally, only required to create globally-visible drive letters) 3) have write access to the GLOBAL?? directory in the object namespace, you can assign each of them a drive letter
Play with it with the vssadmin command line tool, have fun. Works great for backups too, since it lets you back up files opened in exclusive read mode (e.g. registry hives), and indeed ntbackup does support it (altough it’s been kind of broken in SP1)
It’s not the same as source control. Suppose you are working on something that will take two days to complete. After day 1, do you commit all your changes? No, because it doesn’t work yet. If you committed it, then everybody else on your project would have your half-finished job and the project would be broken.
And I suspect you don’t put every single file on your computer under version control. Imagine having to create a new project for every grocery list you write and every podcast you download…
I really liked how VMS handled file versioning. No UI was required since it was built into the file system.
bob.txt;1
bob.txt;2
bob.txt;3
etc.
With version control, I commit everything after 1 day, yes. That’s what branches are for.
This way, I can go back without negatively affecting other users. It’s simple.
Regarding the quota issue: I hope the system makes sure that cannot be used to perform a DoS attack by using more memory than my quota allows?
Carlos: I think the VSC on XP is just the "client" half of the Server 2003 feature, usually refered to as Previous Versions.
Jonus: I think the motivation behind only having a GUI for network shares is that the target audience were people storing profiles/document folders on a WS2003 server.
strik: You can’t DoS a server with VSC, because the server admin designates what percentage of drive space can be used for shadow copies. I believe Windows will also happily trash previous file versions if space becomes a premium.
Timewarp sounds a lot like the "Snapshot" functionality provided by NetApp filers ().
At Tellme we put home directories and CVS repositories on a NetApp filer and from any directory you can access a .snapshot "directory" (it’s not a real directory, so it won’t show up if you do something like "ls" or "readdir()" but you can get to it regardless). In there are hourly and nightly copies, up to a week back, of the files in the current directory.
As Raymond says I don’t think typically people use this functionality a lot, but each time you do, it saves you a LOT of time. It’s fantastic.
I use source control fairly religiously (I just use a personal repository in my home directory when I want only versioning, and not sharing), but I still find Snapshots indispensible for recovering from screwups.
Does anyone know if we will see this in Vista? I would love to be able to shadow a folder (instead of a drive), it would also be great if you could specify what location the shadow gets placed in.. That way you could have it shadowing your Documents folder constantly, so if you make any stupid changes or it corrupts on you it will be easy to go back to a previous version.
Note: Yes I am a lazy guy who would use it to backup documents ^Shrug^
strik: Well you have much better discipline than most people. Try convincing the Marketing department that they need to "branch" a file before they can edit it.
Carlos:
The version of Volume Shadow Copy Service in XP doesn’t support "persistent" shadow copies which survive across reboots. As a result, it doesn’t support the "view previous versions" or "timewarp" feature that Raymond is describing.
What it is used for (and Server 2003 does this as well) is for backups. When you start a backup using a backup app which supports VSS, a temporary shadow copy is created, and the backup is read from that instead of the live volume.
This serves two purposes: Allowing backup of files which would otherwise be difficult to back up because they were exclusively open or being written to during the backup process; and synchronizing the files so that all the files in a given backup are consistent with each other, because they all reflect their state at a precise point in time.
You can see VSS in action on XP by starting a backup using Ntbackup. The first thing you will see before the backup starts is a message telling you that it’s creating a shadow copy of the volume.
Incidentally, the offical abbreviation for Volume Shadow Copy Service is VSS (not VSC or VSCS). This may seem odd, but it has to do with a name change made during the development cycle.
strik, chris mear:
Version control and Volume Shadow Copy serve completely different usage scenarios – which explains why SourceSafe 2005 is still coming out.
Volume Shadow Copy is simplified source/version control "without thinking about or knowing about it" but at the same time, you lose the functionality (branching, merging, etc.) provided by subversion/cvs/vss.
The fact that it requires no client software, is automated, etc. makes it more useful for document shares on the network.
Wow. This removes yet another reason for using Novell Netware around here, knowing that Windows 2003 has the "Salvage" feature also… just by a different name.
A couple of comments:
The lazy copying is actually done at a block level. This distinction is transparent to a user, but it means that if one block of a file changes no other blocks need to be copied.
The 64 shadow copy limit is per volume. Each volume in your systems can have a maxiumum of 64 shadow copies.
Storage for shadow copies is on the same volume by default, but the system can be configured to store the shadow copies on a different volume. This doesn’t help you with reliability (if you lose either volume, you’ve lost your shadow copies), but it does improve performance.
In response to KJK::Hyperion
The internal device name of the shadow copy of the form DeviceHarddiskShadowCopyN is subject to change every time you reboot, making your mechanism very fragile. The correct way to expose a shadow copy is through the ExposeSnapshot API, which will expose the shadow copy in a more reliable way. Unfortunately, this API is currently blocked for this type of shadow copy.
With Volumen Shadow Copy you also lose the ability to commit files more frequently than what the administrator setup, to see the diffs between file versions or even to see who’s copy it is you’re looking at (if I see a file version 14 days old I have no idea if it’s different from the version 13 days old or even who saved that file). As Andy said, they serve completely different purposes.
On an off-topic note, why do people branch files just to work on them in source control, only to merge them back into the main tree? It looks to me like most developers (at least most I know) are still not used to work in teams, where changes happen at the same time, they all seem to want to play in their little sandbox, so they "don’t break anybody else’s changes". As far as I’ve seen trying to merge all these branches (I’ve seen teams of 10 or developers where everyone had their own branch) just creates many more problems than simply working in the main branch, which works quite well for thousands of open source projects and probably three or four closed source ones (but who can really tell).
Of course I could be an arse and point out Novell has had the equivilant of Shadow Copy since before 1995 (give or take).
It’s nice to see it, but jesh, it took almost 10 yrs.
Shadow Copy is great and I hope it’s available and enabled by default on Vista Home edition.
Or better yet, the home user is asked to optimize Vista for "speed" or "reliablitiy" and it will enable many settings, such as Shadowing.
This sounds much like the snapshots offered by Veritas, Linux’s LVM2, and other storage management systems. Nice to see it natively available in Windows, as it’s something I’ve often missed when working on Windows servers.
Copy-on-write snapshots are just wonderful when you’re backing up databases, mail servers, etc. Ask the service to finish its disk tasks and work from RAM, snapshot, tell the service it can continue writing. Nobody even notices the interruption.
So you have to specify the files to be shadow copied in advance?
Surely that’s one level of activity alone will mean most users will never use it until its too late (and I’m talking about Marketing et al here).
When I used to be a Netware admin back in the mid-90s it did this kind of thing, but automatically with all the files on the server (until it ran out of disk space, then it just discarded the oldest files first).
It was a life-saver for when we were on a nightly backup routine and users messed up files in the middle of the afternoon. It was command line originally, but easy enough that the biggest trouble makers picked up how to do it themselves pretty quickly, and once we started using the Win 95 clients it was only a right-click away.
You have to turn it on manually because on servers, everything defaults off for security reasons. (System Restore does something similar and everybody is furious that it’s on by default. Clearly this needs to be off by default too!)
I’ve used Executive Softwares Undelete for the same type of thing. It doesn’t do scheduled copies but instead catches each file modification and saves a copy to a different hidden folder. I set mine to keep the last 2Gb of changes, which is enough to undo most slipups.
As someone earlier pointed out, it sounds like VSC is closest to VMS’s file backups. Comparisons to cvs/sub are somewhat humorous. The quota question is interesting. Maybe VMS changed later, but when I was in school circa 1990, file versions definitely counted against my quota on the student VAX cluster. If my FORTRAN compile ran out of space I had to delete some backups (or some uuencoded usenet pr0n) to make room. I think our student accounts had 5000 block quotas. What are the pros and cons of putting VSC backup management into a grey zone? It seems that if they are accounted against your quota, you would have better control–the space is preallocated, so to speak, so any scavanging to reclaim freespace has to be self-directed. From a user’s point of view you know that backup copies won’t disappear just because some other user does something unrelated that causes the system to randomly reap backups. Is this how Novell’s "scavange" worked? Could VMS’s backup strategy be tuned to behave differently?
The server administrator specifies which files to build shadow copies for. I discussed deletion in the original article: "In the case of a deleted file". If a file moves it looks like a deletion and a creation. (Imagine the confusion if moving a folder moved its history.)
(Imagine the confusion if moving a folder moved its history.)
Imagine the confusion when it doesn’t. Microsoft is not the only company that’s shipped revision control software that doesn’t get this stuff right, but it’s no surprise that in 2005 they’re the ones still getting it wrong. A move is a move, you should be able to undo the /move/, but it’s still the same file, not "a deletion and a creation". How’d this come to be wrong? Microsoft doesn’t understand the difference between a /file/ and a filename, see also the inability to rename files that are "in use", and the consequent need to reboot Windows PCs all the time so that the OS can move a file out of the way…
As to the earlier comments, firstly yes, for most purposes this is equivalent to the existing snapshot functionality in your volume management system which everyone (again, except Microsoft) now seems to ship as the default way of handling fixed disks.
Secondly no, this isn’t version control, because it doesn’t know about versions. VMS and any revision control system keep every version, not just whatever you had 12, 24 and 36 hours ago (per the administrator’s arbitrary choices). If you do a lot of minor edits and once in a while you screw up big-time, you probably want (a decent version control system) and transparent (e.g. WebDAV) access, not this.
Okay, let’s walk through it.
File "dir1foo" exists on Jan1.
On Jan2 it is moved to "dir2foo".
On Jan3, you go to "dir1" and say "View previous versions of this directory" and select Jan1.
You expect to see foo there because foo existed in dir1 on Jan1.
Therefore, the Jan1 history for dir1foo needs to stay in dir1.
On Jan3, you go to "dir2" and say "View previous versions of this directory’ and select Jan1.
You do not expect to see foo there because foo did not exist in dir2 on Jan1.
Therefore the Jan1 history for dir2foo should not exist in dir2.
Volume shadowing is not version control and is not intended to replace version control. Volume shadowing is just a collection of recent volume snapshots.
(Enlighten me since I don’t know: Does "everyone" else really already implement automatic volume snapshots by default? Linux has "lvcreate" and Solaris "fssnap", but these are manual processes, not an automatic one, and these snapshots do not persist across reboots. I don’t see anything for the Mac or FreeBSD but then again I didn’t look very hard.)
The Plan9 file server had exactly the same feature in 1995.
See for instance
and search "The File Server"
Not all folders in the user profile are stored in the user profile.
I’m a big fan of being productive by not losing work. I don’t care how optimized your system and development | https://blogs.msdn.microsoft.com/oldnewthing/20050906-11/?p=34313 | CC-MAIN-2017-09 | refinedweb | 2,688 | 68.7 |
- .16 Recursion
The apps we’ve discussed thus far are generally structured as methods that call one another in a disciplined, hierarchical manner. For some problems, however, it’s useful to have a method call itself. A recursive method is a method that calls itself, either directly or indirectly through another method. We consider recursion conceptually first. Then we examine an app containing a recursive method.
7.16.1 Base Cases and Recursive Calls
Recursive problem-solving approaches have a number of elements in common. When a recursive method is called to solve a problem, it actually is capable of solving only the simplest case(s), or base case(s). If the method is called with a base case, it returns a result. If the method is called with a more complex problem, it divides the problem into two conceptual pieces (often called divide and conquer): a piece that the method knows how to do and a piece that it does not know how to do. To make recursion feasible, the latter piece must resemble the original problem, but be a slightly simpler or slightly smaller version of it. Because this new problem looks like the original problem, the method calls a fresh copy (or several fresh copies) of itself to work on the smaller problem; this is referred to as a recursive call and is also called the recursion step. The recursion step normally includes a return statement, because its result will be combined with the portion of the problem the method knew how to solve to form a result that will be passed back to the original caller.
The recursion step executes while the original call to the method is still active (i.e., while it has not finished executing). The recursion step can result in many more recursive calls, as the method divides each new subproblem into two conceptual pieces. For the recursion to terminate eventually, each time the method calls itself with a slightly simpler version of the original problem, the sequence of smaller and smaller problems must converge on the base case(s). At that point, the method recognizes the base case and returns a result to the previous copy of the method. A sequence of returns ensues until the original method call returns the result to the caller. This process sounds complex compared with the conventional problem solving we’ve performed to this point.
7.16.2 Recursive Factorial Calculations
Let’s write a recursive app to perform a popular mathematical calculation. The factorial of a nonnegative integer n, written n! (and pronounced “n factorial”), is the product
n · (n – 1) · (n – 2) · ... · 1
1! is equal to 1 and 0! is defined to be 1. For example, 5! is the product 5 · 4 · 3 · 2 · 1, which is equal to 120.
The factorial of an integer, number, greater than or equal to 0 can be calculated iteratively (nonrecursively) using the for statement as follows:
long factorial = 1; for (long counter = number; counter >= 1; --counter) { factorial *= counter; }
A recursive declaration of the factorial method is arrived at by observing the following relationship:
n! = n · (n – 1)!
For example, 5! is clearly equal to 5 · 4!, as is shown by the following equations:
5! = 5 · 4 · 3 · 2 · 1 5! = 5 · (4 · 3 · 2 · 1) 5! = 5 · (4!)
The evaluation of 5! would proceed as shown in Fig. 7.15. Figure 7.15(a) shows how the succession of recursive calls proceeds until 1! is evaluated to be 1, which terminates the recursion. Figure 7.15(b) shows the values returned from each recursive call to its caller until the value is calculated and returned.
7.16.3 Implementing Factorial Recursively
Figure 7.16 uses recursion to calculate and display the factorials of the integers from 0 to 10. The recursive method Factorial (lines 17–28) first tests to determine whether a terminating condition (line 20) is true. If number is less than or equal to 1 (the base case), Factorial returns 1 and no further recursion is necessary. If number is greater than 1, line 26 expresses the problem as the product of number and a recursive call to Factorial evaluating the factorial of number - 1, which is a slightly simpler problem than the original calculation, Factorial(number).
1 // Fig. 7.16: FactorialTest.cs 2 // Recursive Factorial method. 3 using System; 4 5 class FactorialTest 6 { 7 static void Main() 8 { 9 // calculate the factorials of 0 through 10 10 for (long counter = 0; counter <= 10; ++counter) 11 { 12 Console.WriteLine($"{counter}! = {Factorial(counter)}"); 13 } 14 } 15 16 // recursive declaration of method Factorial 17 static long Factorial(long number) 18 { 19 // base case 20 if (number <= 1) 21 { 22 return 1; 23 } 24 else // recursion step 25 { 26 return number * Factorial(number - 1); 27 } 28 } 29 }
0! = 1 1! = 1 2! = 2 3! = 6 4! = 24 5! = 120 6! = 720 7! = 5040 8! = 40320 9! = 362880 10! = 3628800
Fig. 7.16 | Recursive Factorial method.
Method Factorial (lines 17–28) receives a parameter of type long and returns a result of type long. As you can see in Fig. 7.16, factorial values become large quickly. We chose type long (which can represent relatively large integers) so that the app could calculate factorials greater than 20!. Unfortunately, the Factorial method produces large values so quickly that factorial values soon exceed even the maximum value that can be stored in a long variable. Due to the restrictions on the integral types, variables of type float, double or decimal might ultimately be needed to calculate factorials of larger numbers. This situation points to a weakness in some programming languages—the languages are not easily extended to handle the unique requirements of various apps. As you know, C# allows you to create new types. For example, you could create a type HugeInteger for arbitrarily large integers. This class would enable an app to calculate the factorials of larger numbers. In fact, the .NET Framework’s BigInteger type (from namespace System.Numerics) supports arbitrarily large integers. | http://www.informit.com/articles/article.aspx?p=2731935&seqNum=16 | CC-MAIN-2018-13 | refinedweb | 1,007 | 55.03 |
Opened 12 years ago
Closed 12 years ago
#1502 closed enhancement (wontfix)
Make (?P) url parameters available to filter a generic view's queryset.
Description
Example Problem:
1) - lists all the users.
2)(?P<mystr>[a-z]+)/ - For given string, list all the users with a username beginning with that string.
(1) is a simple generic view. However, one currently must write a custom view to come up with a QuerySet for (2). What we could do is just take the generic view used for (1) and filter that QuerySet down more.
The code listed here is an example solution using a custom wrapper to the object_list generic view. A real solution would patch the generic views. 'myproject.object_list' (shown here) differs from the generic object_list by two items in its method signature: extra_lookup_kwargs={} and kwargs; some of its code can be used to filter the queryset.
myproject/urls.py
info_dict = { 'queryset': MyModel.objects.all(), 'extra_lookup_kwargs': { 'mystr': 'name__istartswith'} } urlpatterns = patterns('', (r'^users/$', 'myproject.views.object_list', info_dict), (r'^users/(?P<mystr>.+)/$', 'myproject.views.object_list', info_dict), )
myproject/views.py:
from django.template import loader from django.views.generic import list_detail def object_list(request, queryset, paginate_by=None, allow_empty=False, template_name=None, template_loader=loader, extra_context={}, context_processors=None, template_object_name='object', extra_lookup_kwargs={}, **kwargs): extra_lookup_dict = {} for param in kwargs: if param in extra_lookup_kwargs: extra_lookup_dict[extra_lookup_kwargs[param]] = kwargs[param] if len(extra_lookup_dict) > 0: newqueryset = queryset.filter(**extra_lookup_dict) else: newqueryset = queryset return list_detail.object_list(request=request, queryset=newqueryset, paginate_by=paginate_by, allow_empty=allow_empty, template_name=template_name, template_loader=template_loader, extra_context=extra_context, context_processors=context_processors, template_object_name=template_object_name)
Change History (3)
comment:1 Changed 12 years ago by
comment:2 Changed 12 years ago by
The custom view code for your specific case looks like this:
def my_object_list(request, mystr): mymodels = MyModel.objects.filter(name__istartswith=mystr) return list_detail.object_list(request, mymodels, extra_context={'foo':'bar'}, paginate_by=10, allow_empty='True', template_name='mymodels/index')
Is that so hard? Remember, you don't have to make the function have the same interface as the generic view -- you simply take the parameters that were in your urls.py and put them in your view function.
Trying to stuff everything into generic views makes them less useful, since the interface becomes more and more complex. There are lots of tickets similar to this one, trying to get generic views to do one more thing (i.e. a custom tweak). If you added them all it would be ridiculous. Generic views are already very easy to wrap, with very few lines of code. Your my_queryset_modifier() function is almost twice as long as the code above.
So I would be -1 on this.
comment:3 Changed 12 years ago by
Marking this as a wontfix for the reasons Luke mentioned in his previous comment.
Thinking about the above... Instead of hardcoding logic for the filter, it may be better to allow the end user to use their own queryset_modifier method.
#The developer will have to coordinate the ?P<names> and the extra keys in info_dict so they don't overlap, since they'll both be passed in as kwargs to the object_list. This will be done when the queryset_modifier function is defined, since the conflicts will be settled in that function's parameter listing. | https://code.djangoproject.com/ticket/1502 | CC-MAIN-2018-30 | refinedweb | 531 | 50.02 |
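That queryset_modifier idea can be sketched without changing the generic-view interface at all. The wrapper below is a hypothetical illustration; the names with_queryset_modifier and queryset_modifier are our own, not part of Django:

```python
# Hypothetical decorator: lets a caller pass a queryset_modifier callable
# that receives the base queryset plus the URLconf kwargs and returns a
# (possibly filtered) queryset before the wrapped generic view runs.
def with_queryset_modifier(generic_view):
    def wrapped(request, queryset, queryset_modifier=None, **kwargs):
        if queryset_modifier is not None:
            queryset = queryset_modifier(queryset, **kwargs)
        return generic_view(request, queryset, **kwargs)
    return wrapped
```

Applied to list_detail.object_list, a urls.py entry could then supply its own filter callable instead of a new keyword argument for every lookup.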
Advanced Namespace Tools blog 22 December 2016
Working on 9front's new TLS boot option
Plan 9 has always had an option for "tcp boot" - which means attaching to a root fileserver over the network at startup. 9front has recently added a "tls boot" option, which is similar, but sets up an encrypted connection using TLS. I'd like to get this working and supported in ANTS.
I have made a first attempt at adding support for it in the plan9rc bootup script, but haven't succeeded in making it work yet. However, I haven't succeeded with the standard bootrc script either, so I think I may have an auth configuration issue in my systems, separate from anything the plan9rc script is doing. On the client system I see an error like:
mount: mount /root: tls error
And on the server side I see this error:
/bin/aux/trampoline: dial net!$fs!9fs: cs: can't translate address: dns: resource does not exist; negrcode
The server error looks familiar, from other authentication issues. Perhaps some information is missing from /lib/ndb/local. Looking at the /rc/bin/service/tcp17020 file, it seems like I might need to be running standard 9fs on port 564, and the port 17020 service will just wrap it in a tls tunnel.
So, I have now started standard fs listener on port 564 from fossil, and added fs= and auth= to the server ndb. After doing this, I was able to successfully use srvtls as a test from my client node, so I am optimistic that the boot might work. About to retest...and...success!
Cruft in plan9rc Boot Options
Adding support for the TLS boot option meant adding another case to the function called "dogetrootfs" in plan9rc. This is a crucial function, and it has become too long and overloaded. It is built as a large set of cases to switch($getrootfs) and it should probably be factored so that the logic for each case becomes its own fn. 9front has also dropped kfs from the distribution and with hjfs filling the role of a smaller and simpler fs, I don't see much need to keep support for it in 9front-ANTS. I'm also not confident that the logic for using cfs (a local cache file server for a remote connection) even works any more in 9front, I should probably test it and fix it if it is broken.
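The refactor described above, one fn per case instead of one long switch, is the classic dispatch-table shape. Purely as an illustration of the structure (in Python rather than rc, and with made-up handler bodies, not actual plan9rc code):

```python
# Illustration of the "one function per boot method" refactor: each case
# of the old switch becomes its own handler, looked up by name.
def root_local(args): return "local root: %s" % args
def root_tcp(args): return "tcp root: %s" % args
def root_tls(args): return "tls root: %s" % args

HANDLERS = {"local": root_local, "tcp": root_tcp, "tls": root_tls}

def dogetrootfs(method, args):
    try:
        return HANDLERS[method](args)
    except KeyError:
        raise ValueError("unknown getrootfs method: %r" % method)
```

Adding a new boot method (like tlsboot) then means adding one handler and one table entry, instead of growing the central function.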
The plan9rc script was originally written for the Bell Labs version of Plan 9, and it was modified to work with 9front, without a comprehensive rewrite. In comparison to the standard 9front bootrc script, it is rather gracelessly written. (ANTS also has an ANTS-specific variant of the bootrc script, you can choose which script you want to use via the "bootcmd" variable in plan9.ini) There is support for interactive, non-interactive, and traditional bootup, which is implemented with quite a bit of mostly-duplicated code.
I have mixed feelings about trying to do comprehensive code improvement here: on one hand, the current script works well and I understand it, and there is a lot of wisdom in "if it ain't broke, don't fix it". On the other hand, cleanup work would make it more maintainable. It would be nice if it shared as much code as possible with the standard 9front bootrc, partly because users who aren't me are likely to find the current script "a bit much." Looking at the 9front base:
cpu% wc bootrc local.rc net.rc
    237    629   4010 bootrc
     75    178   1102 local.rc
     67    200   1324 net.rc
    379   1007   6436 total
In comparison, the ANTS startup scripts:
cpu% wc plan9rc initskel
    701   2040  15303 plan9rc
    170    524   3590 initskel
    871   2564  18893 total
Obviously the ANTS startup does more (it creates an independent namespace separate from standard userspace, supports fossil+venti boot and other variants) but I'm sure it doesn't need to use twice as many lines and three times as many total characters to do its work. | http://doc.9gridchan.org/blog/161222.tlsboot | CC-MAIN-2017-22 | refinedweb | 680 | 64.95 |
Hello:
I'm creating XAML pages from code with an XmlWriter. No problem adding buttons, textboxes, etc. Then I add to each XAML file its XAML.CS with a constructor, etc. For example:
Customers.XAML
|_ Customers.XAML.CS (With CsharpCodeGenerator)
My problem is when I try to bind a control from this XAML to an event, for example button1_Click(). I create an empty InitializeComponent() method in the XAML.CS file. When I double-click the control in my XAML to bind the event, the handler is created, but when I compile, it returns an error:
"Error 2 'CUSTOMERS1' does not contain a definition for 'button1_Click' and no extension method 'button1_Click' accepting a first argument of type 'CUSTOMERS1' could be found (are you missing a using directive or an assembly reference?)"
How could I create a correct InitializeComponent() by coding?
Thanks and regards
You may try to correct the references in the .xaml files; provide the correct namespace there.
Jakarta EE 9 – signs point to a big bang
.
Considering the last two to three years of the Enterprise Edition of Java, Oracle’s love for Java EE could well be brought into question. But anyone who thinks that Oracle does not show any sense of responsibility with regard to the Java version will – once again – be proven wrong. None other than Oracle Architect Bill Shannon has presented a proposal for the first “real” release of Jakarta EE (Jakarta EE 9).
Jakarta EE 9 will be the first “feature release” under the umbrella of the Eclipse Foundation, but actually the second release. Version 8 of Jakarta EE was released in a version identical to Java EE 8 and can be seen as the final milestone in the move from Oracle to the Eclipse Foundation.
Jakarta EE 9 – Oracle’s proposal
Bill Shannon’s proposed plan for Jakarta EE 9 should not be taken as an official Oracle plan. It’s more a coincidence that a dedicated community member is part of Oracle’s Java EE community and – of course – also talked to his colleagues about the possible next step for the Enterprise Java project. These considerations and thoughts led to a statement from Oracle, which was finally revised after some comments and accepted in its current form as an official proposal for and by the community as a basis for discussion.
Cleanup
The first part of the Jakarta EE 9 plan concerns the removal of some specifications from Jakarta EE 9:
- Jakarta XML Registries
- Jakarta XML RPC
- Jakarta Deployment
- Jakarta Management
- Jakarta Enterprise Bean entity beans
- Jakarta Enterprise Bean interoperability
- Jakarta Enterprise Bean 2.x and 1.x client view
- Jakarta Enterprise Web Services
These will still be available at the Eclipse Foundation, but will not be updated. Of course, it should still be possible for the implementation projects to run service updates; after all the specs still have some users, but they should not be defined by the official platform specification. Meanwhile, Payara’s Steve Millidge suggests that Jakarta Management be made available as an option and expanded later.
Additionally, there are some APIs that are closely related to Java SE 8 that should not be implemented:
- Jakarta XML Web Services
- Jakarta SOAP Attachments
- Jakarta Web Service Metadata
- CORBA and RMI-IIOP
Other than these, Bill Shannon proposes to implement Jakarta XML Binding and Jakarta Activation in the upcoming version of the Enterprise Edition.
The Big Bang
Since Oracle made it clear that the namespace javax.* for Jakarta EE was only available under the status quo of the specifications, it was clear that the packages would have to be renamed. However, it is unclear to date whether to move only the specs that will really be changed to the new namespace, pursuing a gradual approach, or whether it's better to move all specifications immediately and at the same time.
The latter has entered the discussion as a so-called big bang approach and seems to have conquered the hearts of most developers and members of the community. Bill Shannon's proposal is to move all specs (remaining after the cleanup) directly to the jakarta.* namespace. The advantage: after a one-time effort and a one-time break in backwards compatibility, Jakarta EE's further development can begin without having to re-examine the namespace question with every release.
The question about backward compatibility of existing projects should not be part of the Jakarta EE specification. Instead, the products based on Java EE or Jakarta EE should take care of this themselves. It is also possible that a separate open source project will be set up for this purpose.
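To make the mechanical side of such a move concrete: the bulk of a big-bang migration is a package-prefix rewrite in application sources. The toy rewriter below is our own illustration, not an official Jakarta tool; it only touches import statements, while real migrations also have to cover deployment descriptors, reflection strings, and compiled bytecode:

```python
import re

# Toy javax -> jakarta rewriter for Java source text. Only import lines
# are rewritten; a real tool must also handle XML descriptors, string
# constants used with reflection, and bytecode.
def rewrite_imports(java_source):
    return re.sub(r'^(\s*import\s+(?:static\s+)?)javax\.',
                  r'\1jakarta.', java_source, flags=re.MULTILINE)
```

Note that a real mapping is not a blanket rename either: some javax packages belong to Java SE and stay where they are, which is part of why the big bang is a one-time but non-trivial break.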
Java SE, TCKs, microservices & containers
Logically, Jakarta EE 9 should also support Java SE. The minimum version is the latest long term release, Java 11. Furthermore, the Java modules should be considered, and rules and guidelines for specs that define modules should be developed. The Jakarta-EE-9 platform itself should not be modularized, but microservices should be better supported – this also applies to use without containers.
One wish that will entail a lot of work is the division of the TCK project. The aim of this idea is to enable all projects of the Jakarta EE platform to manage their own TCK. However, this is not being considered for Jakarta EE 9. On the one hand because of the workload mentioned above and on the other hand because it has to be ensured that such a change does not make testing Jakarta EE products more difficult.
MicroProfile + Jakarta EE = tomorrow’s Enterprise Java?
It’s been a while now since we published our Understanding Jakarta EE series. At that time, most people answered “no” to the question of whether MicroProfile should be transferred to the Jakarta EE project. According to the majority of participants, MicroProfile should remain an independent project in which innovative ideas and concepts can be tested. A small incubator, so to speak, for the greater good of Jakarta EE.
Bill Shannon’s proposal now strikes a different note: MicroProfile APIs are to be implemented in the Jakarta EE platform, the work of the project will be placed under the umbrella of the Jakarta EE Specification Process. However, this will not be implemented for Jakarta EE 9. This will not be an easy path if it is followed, so more time will be allowed for it.
Outlook on Jakarta EE 9 and beyond
As can be seen in Bill Shannon’s proposal as well as in Red Hat’s and Payara’s responses, the existing APIs and Specs will not be revised. The reason for this is the namespace problem: if you are already moving, then you should not rework the assets you want to move – there will be time for that later.
So the signs point to a “Big Bang” and the release date has also been announced already. The goal was to finish Jakarta EE 9 at most 12 months after Jakarta EE 8. This would mean that a version of Jakarta EE could be expected in autumn 2020, which could serve as a basis for the further development of the platform and the individual specifications – that sounds positive at first.
On the other hand, 12 months is also a long time and we will see how the landscape develops in the coming weeks. The work to make Jakarta EE the definitive place for cloud-native Java will certainly not come to an end.
Bill Shannon’s current proposal, as well as the entire discussion about it, can be found on the mailing list jakartaee-platform-dev, which is freely accessible to anyone interested. If you want to participate, you can simply register there.
Keeping the markets closed with bloated specifications. Just shut down this whole enterprise concept. There are smaller, better and easier to work with libraries out there. That doesn’t couple you to big application servers and allow to compose your application from modules.
Could you please name these smaller and better libraries?
he doesn’t know what he’s talking about | https://jaxenter.com/jakarta-ee-9-big-bang-163143.html | CC-MAIN-2021-10 | refinedweb | 1,186 | 58.62 |
It is helpful to understand how to connect a database to Python scripts for serving dynamically generated web pages and collaborative reports. Python is almost always included in Linux distributions and used for multiple applications already. You don’t need PHP for this.
Below we’ll cover how to create a Python database connection (MySQL/MariaDB) in the Linux terminal.
How to Connect a Database to Python 2.7
- Log into SSH.
- From your website root directory, create a Python script file in the “cgi-bin” directory:
touch cgi-bin/test-db.py
- Change the file’s permissions to 755:
chmod 755 cgi-bin/test-db.py
- If you wish to execute Python scripts in web browsers, edit your Apache .htaccess file:
nano .htaccess
- Add the following at the top of the file and save changes:
AddHandler cgi-script .py
- To complete the Python database connection you’ll need to know the database host (“localhost” if on the same system), name, username, and user password.
- Run Python:
python
- Ensure you have the MySQL Python module installed:
import MySQLdb
If you receive no notification, that means it is installed. You'll need to install the module if you receive the error "ImportError: No module named mysqldb."
- Exit Python:
exit ()
- If you need to install it, we recommend using your OS repositories. You can also use PIP.
Alma / Enterprise Linux:
sudo yum install MySQL-python
Ubuntu:
sudo apt-get install python-pip python-dev libmysqlclient-dev
PIP:
pip install MySQL-python
- Edit your Python script:
nano cgi-bin/test-db.py
- Insert the code below to connect to the database and run “SELECT VERSION(),” which shows our current version of MySQL. Replace the database user, password, and database.
#!/usr/bin/env python
import MySQLdb
# connect to the database
db = MySQLdb.connect("localhost", "user", "password", "database")
# setup a cursor object using cursor() method
cursor = db.cursor()
# run an sql question
cursor.execute("SELECT VERSION()")
# grab one result
data = cursor.fetchone()
# begin printing data to the screen
print "Content-Type: text/html"
print
print """
<!DOCTYPE html>
<html>
<head>
<title>Python - Hello World</title>
</head>
<body>
"""
print "Database version : %s " % data
print"""
</body>
</html>
"""
# close the mysql database connection
db.close()
- Save changes.
- Run the Python script:
python test-db.py
The results should show basic HTML markup and your current database version.
You can also visit the Python script URL in the web browser if you updated your web server configuration file. You’ll see the database version line.
Congrats on learning how to connect a database to Python 2.7+. Learn more about programming with Python.
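As a side note on step 6 above, one way to keep those connection details out of the script itself is to read them from environment variables. This is a minimal sketch; the DB_* variable names are our own convention, not something MySQLdb requires:

```python
import os

# Collect connection settings from the environment, with safe defaults.
# The returned keys match MySQLdb.connect()'s keyword arguments, so the
# script body can simply call MySQLdb.connect(**db_settings()).
def db_settings(environ=os.environ):
    return {
        "host": environ.get("DB_HOST", "localhost"),
        "user": environ.get("DB_USER", ""),
        "passwd": environ.get("DB_PASS", ""),
        "db": environ.get("DB_NAME", ""),
    }
```

This keeps credentials out of version control and lets the same CGI script run unchanged across environments.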
If you don’t need cPanel, don't pay for it. Only pay for what you need with our Cloud VPS solutions.
CentOS, Debian, or Ubuntu No bloatware SSH Key management made easy | https://www.inmotionhosting.com/support/website/how-to-use-python-to-connect-to-a-database/ | CC-MAIN-2022-21 | refinedweb | 464 | 59.4 |
Cisco Nexus 9000 Series NX-OS Release Notes, Release 9.2(2)
This document describes the features, caveats, and limitations of Cisco NX-OS Release 9.2(2) software for use on the following switches:
■ Cisco Nexus 9000 Series
■ Cisco Nexus 3264C-E
■ Cisco Nexus 34180YC-S, 3500, and 3600 platform switches, and Cisco Nexus 9300 and 9500 platform switches run on the same binary image, also called the "unified" image. A release numbered X.Y(Z) would mean:
X – Unified release major
Y – Major / Minor release
Z – Maintenance release (MR)
Where Z = 1 is always the first FCS release of a Major/Minor release.
■ Beginning with Cisco NX-OS Release 9.2(1), dual-homed FEX support is added to Cisco Nexus 93180YC-FX, and 93108TC-FX switches in addition to straight-through FEX support.
■ Beginning with Cisco NX-OS Release 9.2(1), straight-through FEX support is added to Cisco Nexus 93240YC-FX2 switches.
■ New Software Features in Cisco NX-OS Release 9.2(2)
Cisco NX-OS Release 9.2(2) supports the following new hardware:
■ Cisco Nexus 34180YC (N3K-34180YC)—1-RU Top-of-Rack switch with 48 10-/25-Gigabit SFP28 ports and 6 40-/100-Gigabit QSFP28 ports.
Cisco NX-OS Release 9.2(2) supports the following new software features:
■ POAP over IPv6—Support added for POAP over IPv6.
For more information, see the Cisco Nexus 9000 Series NX-OS Fundamentals Configuration Guide, Release 9.2(x)
Intelligent Traffic Director (ITD) Features
■ ITD Service—Added the failaction node-per-bucket command to specify how traffic is assigned after a node failure.
■ Pre-fetch Optimization—Enhanced to pre-fetch the status of the service nodes before reassigning the failed node’s buckets to the next available active nodes.
For more information, see the Cisco Nexus 9000 Series NX-OS Intelligent Traffic Director Guide, Release 9.2(x).
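Conceptually, ITD maps traffic buckets to service nodes, and failaction node-per-bucket spreads a failed node's buckets across the remaining active nodes instead of piling them all onto a single backup. The toy model below is our own illustration of that redistribution, not NX-OS code:

```python
# Toy model of per-bucket failover: bucket_map maps bucket id -> node
# name. The failed node's buckets are handed out round-robin over the
# surviving nodes; every other bucket stays where it was.
def reassign_buckets(bucket_map, failed_node):
    active = sorted({n for n in bucket_map.values() if n != failed_node})
    if not active:
        raise ValueError("no active nodes left")
    out = dict(bucket_map)
    orphans = sorted(b for b, n in bucket_map.items() if n == failed_node)
    for i, bucket in enumerate(orphans):
        out[bucket] = active[i % len(active)]
    return out
```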
■ Autonegotiation (40 G/100 G) is supported on the following ports:
o Cisco Nexus 9336C-FX2 switch: ports 1-6 and 33-36
o Cisco Nexus 9364C switch: ports 49-64
o Cisco Nexus 93240YC-FX2 switch: ports 51-54
o Cisco Nexus 9788TC line card: ports 49-52
■ 10 Gb with QSA is supported on the following ports:
o Cisco Nexus 9336C-FX2 switch: ports 1-36
o Cisco Nexus 9364C switch: ports 49-64
o Cisco Nexus 9788TC line card: ports 49-52
■ 1 Gb with QSA is supported on the following ports:
o Cisco Nexus 9336C-FX2 switch: ports 7-32
o Cisco Nexus 9364C switch: ports 65 and 66 only
■ Breakout support—Added 4x25 Gb breakout support for N9K-C9636C-R and N9K-C9636C-RX switches
■ CWDM4 Optics on 100 G Interfaces—CWDM4 is supported on the 36-port 100-Gigabit Ethernet QSFP28 line cards (N9K-X9636C-R), the 36-port 100-Gigabit QSFP28 line cards (N9K-X9636C-RX) and the 4-port 100-Gigabit QSFP28 line cards (N9K-X96136YC-R).
For more information, see the Cisco Nexus 9000 Series NX-OS Interfaces Configuration Guide, Release 9.2(x).
Label Switching Features
■ Segment Routing
o Layer3 VPN and Layer3 EVPN Stitching for Segment Routing is supported on Cisco Nexus 9364C (N9K-C9364C) switches.
o The OSPF segment routing command and segment-routing traffic engineering with on-demand nexthop is supported on Cisco Nexus 9364C (N9K-C9364C) switches.
o Segment Routing is supported on Cisco Nexus 9300-FX2 platform switches
o L3VPN over segment routing is added for Cisco Nexus 9200, 9300, 9300-EX, 9300-FX, 9300-FX2 and 9500 switches with 9400, 9500, 9600, 9700-EX, and 9700-FX line cards.
■ Labeled and Unlabeled Unicast Paths—Added support for IPv4 and IPv6 unlabeled unicast route on a single BGP session. This behavior is the same irrespective of whether one or both SAFI-1 and SAFI-4 are enabled on the same session or not. This is supported on all Cisco Nexus 9000 Series switches.
For more information, see the Cisco Nexus 9000 Series NX-OS Label Switching Configuration Guide, Release 9.2(x).
■ NX-API REST Data Paths—See the New and Changed Information section of the Cisco Nexus 3000 and 9000 Series NX-API REST User Guide and API Reference for a detailed list of the updates.
For more information, see the Cisco Nexus 9000 Series NX-API CLI Reference.
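As a hedged illustration of the request side of NX-API (we assume the documented JSON-RPC "cli" method; the switch URL and credentials would be your own), a client typically builds one JSON-RPC entry per CLI command:

```python
import json

# Build a JSON-RPC batch for NX-API's "cli" method: one entry per CLI
# command, with sequential ids. Actually POSTing this to the switch's
# /ins endpoint (with credentials) is left out of this sketch.
def nxapi_payload(commands):
    return json.dumps([
        {"jsonrpc": "2.0", "method": "cli",
         "params": {"cmd": cmd, "version": 1}, "id": i + 1}
        for i, cmd in enumerate(commands)
    ])
```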
Programmability Features
■ Perl Modules—Support added for the Cisco Nexus 9504 and 9508 switches with -R line cards.
■ Synchronization—Certain files and directories on the active supervisor module or active bootflash (/bootflash) can be automatically synchronized to the standby supervisor module, or standby bootflash (/bootflash_sup-remote, if the standby supervisor module is up and available.
■ NX-API Developer Sandbox— Various enhancements have been added to the NX-API Developer Sandbox.
■ Netdevice Property— Starting with the NX-OS 9.2(2) release, netdevices representing the front channel port interfaces will always be in the ADMIN UP state. The final, effective state will be determined by the link carrier state.
For more information, see the Cisco Nexus 9000 Series NX-OS Programmability Guide, Release 9.2(x).
Security Features
■ 802.1X - Added 802.1X support for VXLAN EVPN on the Cisco Nexus 9000 Series switches.
■ CoPP - Added support for custom protocol ACL filtering at CoPP on the Cisco Nexus 9300-EX, Cisco Nexus 9300-FX Series switches and the Cisco Nexus 9500 platform switches. Using this the customer can define mis-behaving traffic in their network, using custom ACL, and use that in dynamic policy-map in order to block traffic. These ACLs will be programmed on top of existing COPP ACLs with no traffic disruption upon pushing this policy. Note that N9500-R series line cards do not support this feature.
■ MACsec EAPOL—Added the ability to configure the EAPOL destination address and Ethernet type for MACsec on the Cisco Nexus N9K-C93240YC-FX2, N9K-C9336C-FX2, N9K-C93108TC-FX, N9K-X9736C-FX, N9K-C93180YC-FX, and N9K-X9732C-EXM platform switches.
■ MACsec – Added support for Cisco Nexus N9K-93240YC-FX2 and N9K-9336C-FX2 platform switches.
For more information, see the Cisco Nexus 9000 Series Security Configuration Guide, Release 9.2(x).
System Management Features
■ Configuration Replace—Updates the configuration replace command from maintenance mode to include a user confirmation and a warning.
■ NetFlow—Support added for Cisco Nexus 9500 platform switches with N9K-X9700-EX line cards.
■ NetFlow CE—Support added for Cisco Nexus 9300-EX platform switches.
■ SNMP—Support added for OIDs cefcFRUActualInputCurrent and cefcFRUActualOutputCurrent
For more information, see the Cisco Nexus 9000 Series NX-OS System Management Configuration Guide, Release 9.2(x).
■ BGP—Added the following features:
o BGP best-path algorithm— Added the option to ignore the Interior Gateway Protocol (IGP) metric for next hop during best-path selection.
o IPv4 BGP path selection in route maps for advertising BGP additional paths to peers – Added the ability to specify backup paths, the second best path, or multipaths as advertised paths.
o BGP prefix independent convergence (PIC) edge - Introduced this feature for Cisco Nexus 9200, 9300-EX, 9300-FX, 9300-FX2, and 9300-FXP platform switches and Cisco Nexus 9500 platform switches with -EX, -FX, and -R line cards. This feature ensures fast convergence to a BGP backup path when an external (eBGP) edge link or an external neighbor node fails. BGP PIC edge is supported for the IPv4 address family.
o RFC 5549 IPv6 – Added support for Cisco Nexus 9500 platform switches with -R line cards.
■ Policy-based routing—Added the ability to drop packets when the configured next hop becomes unreachable, when setting the IPv4 or IPv6 next-hop address. This option applies to Cisco Nexus 9200, 9300-EX, 9300-FX, 9300-FX2, and 9364C platform switches and Cisco Nexus 9500 platform switches with -EX and -FX line cards.
■ VRRPv3 – Added support for object tracking.
For more information, see the Cisco Nexus 9000 Series NX-OS Unicast Routing Configuration Guide, Release 9.2(x).
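To illustrate the first item in the list above: ignoring the IGP metric to the next hop simply drops one tie-breaker from the best-path comparison. The comparator below is a deliberately simplified model of our own, not the real BGP decision process:

```python
# Simplified best-path model: shorter AS path wins; the IGP metric to
# the next hop is used as a tie-breaker only when it is not ignored.
# Real BGP compares many more attributes (weight, local-pref, origin...).
def best_path(paths, ignore_igp_metric=False):
    def key(path):
        k = (path["as_path_len"],)
        if not ignore_igp_metric:
            k += (path["igp_metric"],)
        return k
    return min(paths, key=key)
```

With the metric ignored, otherwise-equal paths tie, so the first (already-installed) path stays best, which is exactly the stability the option is for.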
VXLAN Features
■ VXLAN: IPv4 DHCP relay—Support added for Cisco Nexus 9504 and 9508 switches with -R line cards.
For more information, see the Cisco Nexus 9000 Series NX-OS VXLAN Configuration Guide, Release 9.2(x)
This section includes the following topics:
■ Resolved Caveats—Cisco NX-OS Release 9.2(2)
■ Open Caveats—Cisco NX-OS Release 9.2(2)
■ Known Behaviors—Cisco NX-OS Release 9.2(2)
The following table lists the Resolved Caveats in Cisco NX-OS Release 9.2(2). Click the bug ID to access the Bug Search tool and see additional information about the bug.
Table 13 Resolved Caveats in Cisco NX-OS Release 9.2(2)
The following table lists the open caveats in the Cisco NX-OS Release 9.2(2). Click the bug ID to access the Bug Search tool and see additional information about the bug.
Table 14 Open Caveats in Cisco NX-OS Release 9.2(2)
The following known behaviors are in this release:
Table 4 Known Behaviors in Cisco NX-OS Release 9.2(2)
■ The output format for the exec command CLI show vpc orphan-ports has changed from the 7.0(3)F3(4) release to the 9.2(2) release.
■ Release 9.2(2) brings in a new kernel.
■ Stronger ciphers are used in this release.
■ A new command, no service password-recovery is supported.
■ Only one version out of the v4 and v6 versions of the uRPF command can be configured on an interface. If one version is configured, all mode changes must be done by the same version. The other version is blocked on that interface. Cisco Nexus 9300-EX, 9300-FX, and 9300-FX2 platform switches do not have this limitation, and you can configure the v4 and v6 versions of the uRPF command individually.
■ In the NX-API sandbox, whenever XML or JSON output is generated for the show run command or the show startup command, the output contains additional characters.
</nf:source> <============nf: is extra
<namespace> : extra characters are seen with XML and JSON from NX-API.
=============================
To perform a software upgrade, follow the installation instructions in the Cisco Nexus 9000 Series NX-OS Software Upgrade and Downgrade Guide, Release 9.2(x).
You can perform a standard In-Service Software Upgrade (ISSU) from the following release to Cisco NX-OS Release 9.2(2):
o 7.0(3)I7(4) or 7.0(3)I7(5)
Note: Enhanced ISSU to Cisco NX-OS Release 9.2(2) results in a disruptive upgrade. If syncing images to standby SUP failed during the disruptive upgrade from Cisco NX-OS Releases 7.0(3)I4(8), 7.0(3)I5(3,) or 7.0(3)I6(1) to 9.2(2), you should manually copy the image to the standby SUP and perform the disruptive upgrade.
■ When upgrading to Cisco NX-OS Release to 9.2(2) from any release prior to 7.0(3)I2(3) an intermediate upgrade to 7.0(3)I4(x), 7.0(3)I5(x), 7.0(3)I6(x), or 7.0(3)I7(x) is required. We recommend using Cisco NX-OS Release 7.0(3)I4(8) or 7.0(3)I7(4) as the interim release to aid in a smooth migration. For further details, please refer to CSCvk66763.
■ When upgrading from Cisco NX-OS Release 7.0(3)I6(1) or 7.0(3)I7(1) to Cisco NX-OS Release 9.2(2), if the Cisco Nexus 9000 Series switches are running vPC and they are connected to an IOS-based switch via Layer 2 vPC, there is a likelihood that the Layer 2 port channel on the IOS side will become error disabled. The workaround is to disable the spanning-tree etherchannel guard misconfig command on the IOS switch before starting the upgrade process. Once both the Cisco Nexus 9000 Series switches are upgraded, you can re-enable the command. For more information, see defect CSCvg05807.
■ If you are upgrading from Cisco NX-OS Release 7.0(3)I5(2) to Cisco NX-OS Release 9.2(2) using the install all command, BIOS will not be upgraded due to CSCve24965. When the upgrade to Cisco NX-OS Release 9.2(2) is complete, use the install all command again to complete the BIOS upgrade, if applicable.
■ An upgrade performed via the install all command for Cisco NX-OS Release 7.0(3)I2(2b) to Release 9.2(2) might result in the VLANs being unable to be added to the existing FEX HIF trunk ports. To recover from this, the following steps should be performed after all FEXs have come online and the HIFs are operationally up:
1. Enter the copy run bootflash:fex_config_restore.cfg command at the prompt.
2. Enter the copy bootflash:fex_config_restore.cfg running-config echo-commands command at the prompt.
■ In Cisco NX-OS Release 7.0(3)I6(1) and earlier, performing an ASCII replay or running the copy file run command on a FEX HIF configuration requires manually reapplying the FEX configuration after the FEX comes back up.
■ When upgrading to Cisco NX-OS Release 9.2(2) from 7.0(3)I2(x) or before and running EVPN VXLAN configuration, an intermediate upgrade to 7.0(3)I4(x) or 7.0(3)I5(x) or 7.0(3)I6(x) is required. For further details, please refer to CSCvh02777.
■ Before enabling the FHS on the interface, we recommend that you carve the ifacl TCAM region on Cisco Nexus 9300 and 9500 platform switches. If you carved the ifacl TCAM region in a previous release, you must reload the system after upgrading to Cisco NX-OS Release 9.2(2). Uploading the system will create the required match qualifiers for the FHS TCAM region, ifacl.
■ Before enabling the FHS, we recommend that you carve the ing-redirect TCAM region on Cisco Nexus 9200 and 9300-EX platform switches. If you carved the ing-redirect TCAM region in a previous release, you must reload the system after upgrading to Cisco NX-OS Release 9.2(2). Uploading the system will create the required match qualifiers for the FHS TCAM region, ing-redirect.
■ An error occurs when you try to perform an ISSU if you changed the reserved VLAN without entering the copy running-config save-config and reload commands.
■ During an ISSU, there is a drop for all traffic to and from 100 Mb ports 65-66 on the Cisco Nexus 92304QC switch.
■ The install all command is the recommended method for software upgrades and downgrades because it performs configuration compatibility checks and BIOS upgrades automatically. In contrast, changing the boot variables and reloading the device bypasses these checks and the BIOS upgrade and therefore it is not recommended.
■ Upgrading from Cisco NX-OS Release 7.0(3)I1(2), Release 7.0(3)I1(3), or Release 7.0(3)I1(3a) requires installing a patch for Cisco Nexus 9500 platform switches only. For more information on the upgrade patch, see Patch Upgrade Instructions.
■ When upgrading to Cisco NX-OS Release 9.2(2), Guest Shell automatically upgrades from 1.0 to 2.0. In the process, the contents of the guest shell 1.0 root filesystem are lost. To keep from losing important content, copy any needed files to /bootflash or an off-box location before upgrading to Cisco NX-OS Release 9.2(2).
■ An ISSU can be performed only from a Cisco NX-OS Release 7.0(3)I4(1) to a later image.
■ While performing an ISSU, VRRP and VRRPv3 display the following messages:
■ Guest Shell is disabled during an ISSU and reactivated after the upgrade. Any application running in the Guest Shell will be affected.
■ If you have ITD probes configured, you need to disable the ITD service (using the shutdown command) before upgrading to Cisco NX-OS Release 9.2(2).
For additional information, see the Cisco NX-OS ISSU Support application.
The following are the upgrade paths from previous 7.0(3)F3(x) releases:
■ Release 7.0(3)F3(3) -> Release 7.0(3)F3(4) -> Release 9.2(2)
■ Release 7.0(3)F3(3c) -> Release 9.2(2)
■ Release 7.0(3)F3(4) -> Release 9.2(2)
■ Upgrading from Cisco NX-OS Release 7.0(3)I1(2), 7.0(3)I1(3), or 7.0(3)I1(3a) requires installing a patch and then upgrading using the install all command. Failing to follow this requirement requires console access to recover.
■ Upgrading from Cisco NX-OS Release 7.0(3)1(2), 7.0(3)I1(3), or 7.0(3)I1(3a) to 9.2(2) requires a patch for modular switches. A patch is available for each respective release. Please see the respective links below.
■ When upgrading from Cisco NX-OS Release 7.0(3)I1(1) or earlier, including all variants of 6.1(2)-based releases, a patch is not required. You can upgrade directly using the install all command.
4. Upgrade using the install all command.
The following table is an example of a patch upgrade:
The only supported method of downgrading a Cisco Nexus 9000 Series switch is to utilize the install all command. Changing the boot variables, saving the configuration, and reloading the switch is not a supported method to downgrade the switch.
Disable the Guest Shell if you need to downgrade from Cisco NX-OS Release 9.2(2) to an earlier release.
■ Performing an ISSU downgrade from Cisco NX-OS Release 9.2(2), Release 9.2(x).
Note: If you perform a software maintenance upgrade (SMU) and later upgrade your device to a new Cisco NX-OS software release, the new image will overwrite both the previous Cisco NX-OS release and the SMU package file.
If you are going to apply the patch for the issue described in CSCvh04723, you must make sure that the ACL is deleted before applying the patch. Otherwise, the issue will be seen again. This issue applies only to the ACL which has the redirect keyword in it.
This section lists limitations related to Cisco NX-OS Release 9.2(2).
■ When you upgrade a Cisco Nexus 9000 device to Cisco NX-OS Release 9.2.
■ Due to the design of airflow, back-to-front fans requires fan speed to be run at full speed all the time. You might also see fan speeds increase from 40% to 70% post-upgrade. This applies to the following PIDs: N9K-C9272Q, N9K-C9236C, N9K-C93180YC-FX, N9K-C93180 We recommend using multicast heavy template for optimal bandwidth utilization when using multicast traffic flows.
■ IPv6 multicast is not supported on Cisco Nexus 9500 platform switches.(2), 9.2(2).
o N9K-X9732C-FX line card.2 is supported on the Cisco Nexus 9300-EX and 9300-FX platform switches. It is not supported on the Cisco Nexus 9200 platform switch. The only policer action supported is drop. Remark action is not supported on egress policer.
■ FEX (supported for Cisco Nexus 9300-EX platform switches but not for Cisco Nexus 9200 platform switches.)
■ GRE v4 payload over v6 tunnels
■ IP length-based matches
■ IP-in-IP on Cisco Nexus 92160 switch
■ ISSU enhanced is not supported on the Cisco Nexus 9300-FX platform platform switches. on the Cisco Nexus 9200 platform switches. the N9K-X96136YC-R line card:
■ Breakout is not supported.
■ PTP and gPTP are not supported. Cisco Nexus 3000 and 9000 Series NX-API REST SDK User Guide and API Reference is available at the following URL:
The Cisco NX-OS Supported MIBs URL:
The Cisco Nexus 9000 Series FPGA/EPLD Upgrade Release Notes, Release 9.2(2) is available at the following URL:
The Cisco Nexus 9000 Series NX-OS Verified Scalability Guide, Release 9.2(2) 9.2(2) | https://www.cisco.com/c/en/us/td/docs/switches/datacenter/nexus9000/sw/92x/release/notes/922_9000_nxos_rn.html | CC-MAIN-2021-10 | refinedweb | 3,315 | 57.98 |
Warning for framework include violation from Headers to PrivateHeaders
Framework vendors usually layout their framework headers in the
following way:
Foo.framework/Headers -> "public" headers
Foo.framework/PrivateHeader -> "private" headers
Since both headers in both directories can be found with #import
<Foo/some-header.h>, it's easy to make mistakes and include headers in
Foo.framework/PrivateHeader from headers in Foo.framework/Headers, which
usually configures a layering violation on Darwin ecosystems. One of the
problem this causes is dep cycles when modules are used, since it's very
common for "private" modules to include from the "public" ones; adding
an edge the other way around will trigger cycles.
Add a warning to catch those cases such that:
./A.framework/Headers/A.h:1:10: warning: public framework header includes private framework header 'A/APriv.h'
#include <A/APriv.h>
^
rdar://problem/38712182 | https://reviews.llvm.org/rC335542 | CC-MAIN-2019-30 | refinedweb | 144 | 50.94 |
Hello everyone,
I need help. In my assignment i have an input file that contains string data: ----> i love music and chicken. today is the first day of the rest of your life. my name is john.
I must change it to the following in the output file: ----> I love music and chicken. Today is the first day of the rest of your life. My name is john.
I need to use this function to help achieve this:
void initialCap(string&);
No classes are required.
All the white space is erased initially that's why i use the insert function. However, the last word "john" is erased when i use this function. How do i increase the size of the string in order to still have john included in the output. I realize string is an array and arrays are immutable but i know there has to be a way around this. Also, I can't capitalize the words i need to capitalize. I need to use the replace function but, i'm not sure how the parameters work.
Any help is greatly appreciated
#include <iostream> #include <fstream> #include <string> using namespace std; void initialCap(string&); int main () { ifstream inputFile; inputFile.open("C:/Notepad++/input.txt"); ofstream outputFile; outputFile.open("c:/Notepad++/output.txt"); string baseSentence = ""; string str2 = " dave"; if (inputFile) { while (!inputFile.eof()) { string str1 = baseSentence; str1.insert(0," "); inputFile>>baseSentence; cout<<str1; outputFile<<baseSentence; } } else { cout<<"this file does not exist dave!"; } inputFile.close(); outputFile.close(); cout << "\n\n"; system("pause"); return 0; } | https://www.daniweb.com/programming/software-development/threads/265900/modifying-strings-from-input-file-to-output-file | CC-MAIN-2017-04 | refinedweb | 255 | 68.26 |
Want to calculate the variance of a column in your Pandas DataFrame?
In case you’ve attended your last statistics course a few years ago, let’s quickly recap the definition of variance: it’s the average squared deviation of the list elements from the average value.
You can calculate the variance of a Pandas DataFrame by using the
pd.var() function that calculates the variance along all columns. You can then get the column you’re interested in after the computation.
import pandas as pd # Create your Pandas DataFrame d = {'username': ['Alice', 'Bob', 'Carl'], 'age': [18, 22, 43], 'income': [100000, 98000, 111000]} df = pd.DataFrame(d) print(df)
Your DataFrame looks like this:
Here’s how you can calculate the] | https://blog.finxter.com/how-to-calculate-the-column-variance-of-a-dataframe-in-python-pandas/ | CC-MAIN-2020-50 | refinedweb | 121 | 63.7 |
About Me: I've been a professional web developer for just over 10 years now. I'm currently the lead web development instructor at Better Coding Academy, and as part of what I do, I post videos on our YouTube channel at
(Subscribe for awesome web development content!)
The following content was sourced from the Better Coding Academy style guide.
When deciding between hooks, render props and higher-order components, always go with hooks wherever possible.
// #1 - best const MyComponent = () => { const mousePosition = useMouse(); // mousePosition.x, mousePosition.y } // #2 - not as good const MyComponent = () => { return ( <Mouse> {({ x, y }) => { // ... }} </Mouse> ) } // #3 - bad const MyComponent = ({ x, y }) => { // ... } export default withMouse(MyComponent);
Why? Well, let's start with higher-order components (HOCs).
Why are higher-order components bad?
Higher order components are bad for two main reasons:
They take up a fixed prop name, possibly removing other props. For example, imagine for above example #3 we want to include an
xand
yprop on the component:
<MyComponent x="some value" y="some other value" />
Both of these values will be overwritten by the values coming from the higher order component. This issue can also arise when you wish to use multiple higher order components:
export default withMouse(withPage(MyComponent)); // if withMouse and withPage set the same props, there will be clashing issues
They do not clearly identify the source of your data.
withMouse(MyComponent)does not tell you which props are being included onto the component (if any), hence increasing the amount of time spent debugging and fixing up the code.
Okay then, now let's look at render props. Because render props give you data within a function parameter, you can freely rename it however you like. For example:
<Mouse> {({ x, y }) => ( <Page> {({ x: pageX, y: pageY }) => { // ^ big brain }} </Page> )} </Mouse>
Okay, well what about render props?
However, render props still have their own issues:
- They don't allow you to use their data outside of the
returnstatement. With the example above, you can't use the
xand
yvalues in any state variables,
useEffecthooks, or any other functions within your component, because it's only accessible within the
returnstatement.
They get nested... really quickly. Imagine we have three render prop components within a given component:
const MyComponent = () => { return ( <Mouse> {({ x, y }) => ( <Page> {({ x: pageX, y: pageY }) => ( <Connection> {({ api }) => { // yikes }} </Connection> )} </Page> )} </Mouse> ) };
So now, onto the final (and best) solution!
How hooks solve all of these issues!
Hooks address all of the above issues.
Hooks don't have any fixed prop names - you can rename however you like:
const { x, y } = useMouse(); const { x: pageX, y: pageY } = usePage();
Hooks clearly identify the source of the data - in the example above, it's clear that
xand
ycome from
useMouse, and
pageXand
pageYcome from
usePage.
Hooks allow you to access data outside of the
returnstatement. For example, you can do stuff like:
const { x: pageX, y: pageY } = usePage(); useEffect(() => { // this runs whenever pageX or pageY changes }, [pageX, pageY]);
Hooks don't get nested at all. With the above render prop monstrosity rewritten using hooks, the code would look something like:
const { x, y } = useMouse(); const { x: pageX, y: pageY } = usePage(); const { api } = useConnection();
Three lines of beautiful code.
Hope you guys enjoyed this comparison between three architectural patterns within React! Be sure to follow me on YouTube for tons of free, full-length React, JavaScript, Node.js and general web development tutorials.
Happy coding!
Discussion (15)
Hi, 😄 May I know which software you used to record the videos. is it freeware. And did you used normal mobile headphones as mic ?
I use OBS to record the videos with an AT2020 mic :)
Actually I thought you are using some paid ones like
Camtasia, But Good to hear that by using OBS your videos are clear. Have you configured some special settings in OBS ?
Nope - OBS is really good. Since it's screen recording there really isn't too much variation in quality; it's more important to have a good microphone setup (something I definitely can improve upon 😁)
Other problems with HoC are,
It does not clearly identify the source of your data - you have no idea where
xand
ycome from :)
They don't need to know, just like with Dependency Injection in Angular.
Drawing a similarity between Angular and React here isn't particularly beneficial.
In React, tracking down where a particular prop comes from within a nest of multiple HOCs is a huge pain and a massive code smell that Hooks has since addressed beautifully.
Aren't we talking about the dependencies of a component?
I don't know why it is necessary to know the source of a dependency inside a component.
this is very useful, thank you
You're welcome Nico!
youtube.com/watch?v=xiKMbmDv-Vw&fe...
However I choose Hooks, if I'm able to, I don't think your arguments on higher-order components are strong, as they had to be, anyway thanks.
Even though the codebase I'm working on right now is mostly with hooks, there are a couple of HOCs and render props thrown in. I think each of them have their appropriate use. | https://dev.to/bettercodingacademy/react-hooks-vs-render-props-vs-higher-order-components-1al0 | CC-MAIN-2022-21 | refinedweb | 857 | 61.56 |
cmp - compare two files
cmp [-l] [-s] file1 file2 [skip1 [skip2]]
cmp compares two files, byte-by-byte. The result of the comparison is always given by the exit status, and may be summarized on the standard output according to the options given on the command line.
If the two compared files are identical, cmp will exit with a zero exit status. If the compared files are not identical, cmp will exit with a status of 1.
If no options are given, cmp will return (on the standard output), the byte number and line number where the first difference is encountered.
If one file is an initial subsequence of the other, a message will also be returned on the standard error indicating that EOF was reached in the shorter of the two file.
skip1 and skip2 are optional byte offsets into file1 and file2, respectively, that determine where the file comparison will begin. Offsets may be given in decimal, octal, or hexadecimal form. Indicate octal notation with a leading '0', and hexadecimal notation with a leading '0x'.
silent execution; indicate results only by exit status, suppressing all output and warnings.
list differences; return the byte number and the differing byte values for each difference between the two files. The byte number is given in decimal, and the byte values are given in octal.
No environment variables affect the execution of cmp.
No known bugs.
D Roland Walker <[email protected]>.
This program is copyright (c) D Roland Walker 1999.
This program is free and open software. You may use, modify, distribute, and sell this program (and any modified variants) in any way you wish, provided you do not restrict others from doing the same. | http://search.cpan.org/~bdfoy/PerlPowerTools-1.012/bin/cmp | CC-MAIN-2017-43 | refinedweb | 284 | 64.1 |
Storing objects in a list
On 20/05/2013 at 12:32, xxxxxxxx wrote:
User Information:
Cinema 4D Version: R14 Studio
Platform: Windows ;
Language(s) : C++ ;
---------
I want to store objects in a list, and while this is as basic as it can be when programming, I do not know what the best way is for C4D plugins, because I eventually might want to compile it for Mac too.
I guess the Basecontainer is a suitable class for this(?).
BaseContainer* myContainer = new BaseContainer();
myContainer->SetData(1, "Foo");
myContainer->SetData(2, "Bar");

MySpecialClass* mySpecialClass = new MySpecialClass();
myContainer->SetData(3, mySpecialClass);

//Getting the strings is no problem
String s1 = myContainer->GetString(1);

//but getting out mySpecialClass from the list again - I am banging my head against the wall..
All I want is to be able to store a number of instances of mySpecialClass in a list. Maybe I could just store the pointers? In Pascal I did this all the time, but here..
On 20/05/2013 at 13:36, xxxxxxxx wrote:
You can use any kind of container that is available cross-platform, and I haven't heard of any
kind of container that is preferred on windows or mac. You can use the standart-library or the
Cinema 4D classes. The only thing with the std lib is, that is uses Exception, in contrast to the
C4D API.
Check out std::list, std::vector or from the C4D API GeDynamicArray. All of these are templates (or
generics, as you might know from Java or C#).
Best,
-Nik
On 20/05/2013 at 14:03, xxxxxxxx wrote:
Hi Niklas, yes I am absolutely familiar with Generics, I use it all the time in C#.
But man - it is different from C# though
So thank you, the std::list is precisely what I need!! Great!
While we are at it, cannot object addresses be stored as integers?
Like this:
int myobjectAddress = &someObject;
And then just store myobjectAddress anywhere where I can store integers, in arrays, lists etc. I am used to this from Delphi's Pascal.
On 20/05/2013 at 14:13, xxxxxxxx wrote:
Hi Ingvar,
yes, technically an object-address is just an integer. You can convert an address to an integer, but
you'd be better off just using pointers instead (which is just an integer treated as a memory address
to an object by the compiler).
#include <c4d.h>
#include <list>

// ...
std::list<BaseObject*> objects;

// Store the top-level objects in a list.
BaseObject* op = doc->GetFirstObject();
while (op) {
    objects.push_back(op);
    op = op->GetNext();
}

// Iterate over them again.
std::list<BaseObject*>::iterator it = objects.begin();
for (; it != objects.end(); it++) {
    // You can implicitly access all attributes of BaseObject via ``it``.
    String name = it->GetName();
    // ...
}
Best,
-Niklas
On 20/05/2013 at 15:01, xxxxxxxx wrote:
Thanks for the code!
I got one example from MSDN working, where the Generic object is not a pointer.
In your example, when using a pointer, like BaseObject*, I get an error:
Error 3 error C2039: **'GetName' : is not a member of 'std::_List_iterator <_Mylist>'**
Can you reproduce this?
On 20/05/2013 at 16:47, xxxxxxxx wrote:
Fair warning.
The GeArray types are the only officially supported lists(arrays). And they are cross platform too AFAIK.
Maxon does not like it when we use things like the Standard Library. If you use it, you are on your own. And they will not help you with any code problems if you are using it.
With that disclaimer out of the way.
Try it like this and see if it works better for you:
//This example uses the Standard Library instead of the C4D API to store objects into a list array
//WARNING!!: Maxon does not like it when you do this. And will probably ignore requests for help if you use it!
#include <c4d.h>
#include <list>
using namespace std;

BaseDocument *doc = GetActiveDocument();
list<BaseObject*> objects;               //Create an empty list array
BaseObject *obj = doc->GetFirstObject(); //Start the iteration from the first object in the OM
if(!obj) return FALSE;
while (obj)
{
    objects.push_back(obj);              //Store the objects in the list array
    obj = obj->GetNext();
}

list<BaseObject*>::iterator it;
for (it=objects.begin(); it != objects.end(); it++)
{
    BaseObject *listObj = *it;
    String name = listObj->GetName();
    GePrint(name);
    //etc...
}
-ScottA
On 20/05/2013 at 17:19, xxxxxxxx wrote:
Thanks! In the meantime I got it working. I never wanted to store a BaseObject*, I want to store my own class instances.
But initially I asked for a way to use Maxon approved lists. Because I suspected that cross platform code has to be catered for. And I believe the BaseContainer is such. So if you read my original post in this thread, I come to a point where I can store objects, but not read them.
If I could use the BaseContainer or any other "container" I would be interested.
On 20/05/2013 at 17:42, xxxxxxxx wrote:
I've never tried to store a custom class in a list before. So I can't be of much help with that.
The GeData class is another common way to store things in C4D. And the docs mention that they can be used for storing custom class types.
But I must confess I've never even attempted it. So I don't know if that class would be any help.
-ScottA
On 20/05/2013 at 18:13, xxxxxxxx wrote:
Ok, I found a way that works.
To what extent it is proper C++, and furthermore C4D-safe, I have no idea.
But as soon as I understood how you dereference pointers to objects in C++ (I have used it a lot in Pascal), I got something that so far seems to work ok:
myContainer->SetLong(4, (LONG)&mySpecialClass);

// Then to get it out again:
MySpecialClass* mySpecialClass2 = (MySpecialClass*)myContainer->GetLong(4);
String test = mySpecialClass2->DoSomethingMeaningful();
Hope this will be ok with Mr. Maxon ;)
On 20/05/2013 at 23:16, xxxxxxxx wrote:
Originally posted by xxxxxxxx
The GeArray types are the only officially supported lists(arrays). And they are cross platform too AFAIK.
They are not the only supported types. Actually I rather recommend against using GeArray types, as they are rather slow by today's standards. Use c4d_misc::BaseArray<> instead, if you need arrays. They are blazing fast, support sorting and iterators, and have almost no overhead.
Or, if you want it the old-fashioned way, how about a simple AtomArray?
Originally posted by xxxxxxxx
So thank you, the std::list is precisely what I need!! Great!
Objection, your honor.
Originally posted by xxxxxxxx
Maxon does not like it when we use things like the Standard Library. If you use it, you are on your own.
Exactly :-)
On 20/05/2013 at 23:42, xxxxxxxx wrote:
My fault, sorry. You first have to dereference the iterator.
(*it)->GetName()
Ingvar, using a container for this is really not a good idea, I think: 1. the BaseContainer is intended
as a mapping type, 2. you need to do a cast every time you want access. Not that this will have
any impact on the performance of the program, but it results in clunky and large code.
You can use the GeDynamicArray class from the Cinema SDK as well (as I have already mentioned
in my first answer).
#include <ge_dynamicarray.h>

GeDynamicArray<BaseObject*> objects;

// Store the top-level objects in a list.
BaseObject* op = doc->GetFirstObject();
while (op) {
    objects.Push(op);
    op = op->GetNext();
}

// Iterate over them again.
LONG count = objects.GetCount();
for (LONG i=0; i < count; i++) {
    BaseObject* obj = objects[i];
    // ...
}
PS: Code is untested, intended to give you a small overview over the usage only.
Best,
-Nik
On 21/05/2013 at 00:16, xxxxxxxx wrote:
...OR you can use a shiny BaseArray instead of the dusty GeDynamicArray :-P
On 21/05/2013 at 00:21, xxxxxxxx wrote:
Is it much faster than the GeDynamicArray? I must admit that I have not yet taken a look
into the c4d_misc namespace. From the name, I always thought it would be a fixed size array.
On 21/05/2013 at 01:41, xxxxxxxx wrote:
Never assume, always look :-) The fact that BaseArray has methods like Push(), Insert() and Resize() tells you it's dynamic.
And about the speed: The BaseArray is not just faster, it's ridiculously fast. Really.
Here's some code I just wrote to benchmark it (as I didn't have any concrete numbers):

void MyBench(LONG cnt)
{
    GeDynamicArray<Real> dynamicArray;
    c4d_misc::BaseArray<Real> baseArray;
    LONG i;
    LONG timer = 0;
    Real x = 3.14165;

    GePrint("Array Benchmark (" + LongToString(cnt) + ")");

    // Push()
    GePrint("GeDynamicArray::Push()...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        dynamicArray.Push(x);
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    GePrint("BaseArray::Push()...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        baseArray.Append(x);
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    // Reading
    GePrint("GeDynamicArray[]...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        x = dynamicArray[i];
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    GePrint("BaseArray[]...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        x = baseArray[i];
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    // Pop()
    GePrint("GeDynamicArray::Pop()...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        x = dynamicArray.Pop();
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    GePrint("BaseArray::Pop()...");
    timer = GeGetTimer();
    for (i = 0; i < cnt; i++) {
        x = baseArray.Pop();
    }
    GePrint("..." + LongToString(GeGetTimer() - timer) + " msec.");

    GePrint("Array Benchmark finished.");
}
I built it using the latest Intel Compiler (version 13) as a 64-bit Release build and ran it with different cnt values:
MyBench(10000);
MyBench(100000);
MyBench(1000000);
And here are the results (on a 27" iMac with 3.4 GHz i7 and 8 GB RAM):

10000 elements      Push          []        Pop
GeDynamicArray      1 msec        0 msec    5 msec
BaseArray           0 msec        0 msec    0 msec

100000 elements     Push          []        Pop
GeDynamicArray      602 msec      0 msec    602 msec
BaseArray           0 msec        0 msec    0 msec

1000000 elements    Push          []        Pop
GeDynamicArray      272085 msec   0 msec    271149 msec
BaseArray           9 msec        0 msec    0 msec
By the way, GeAutoDynamicArray and GeSafeDynamicArray are even slower.
On 21/05/2013 at 01:58, xxxxxxxx wrote:
Thanks Jack, this is a very useful resource! Those differences in speed are tremendous! You convinced
me to use the BaseArray instead.. ;-)
Best,
-Nik
On 21/05/2013 at 02:58, xxxxxxxx wrote:
Uhm, how do I copy a BaseArray to another BaseArray? Copy&Assign is disallowed for the BaseArray
class. I get compiler errors when doing
array1 = array2
"" error C2248: 'c4d_misc::BaseArray<T>::operator =' : cannot access private member declared in class 'c4d_misc::BaseArray<T>' ""
Thanks,
-Niklas
On 21/05/2013 at 03:02, xxxxxxxx wrote:
Nevermind, just found the "CopyFrom" method.
On 21/05/2013 at 03:30, xxxxxxxx wrote:
Wow - what an interesting thread!
I thank you all for all new knowledge. I have a few comments though. My experience in general, is that while you can make speed tests, they are not always reliable. You have something called a compiler which lives its own superior life and is the ultimate decision maker. Certain ways of doing things might be fast in one situation, slow in another.
Anyhow - for the plugins I write, my speed concern is purely to speed up me. To get things done. My current plugins execute more than fast enough, regardless of list implementation.
But I like what I see about the BaseArray, so I will go for that one.
On 21/05/2013 at 05:07, xxxxxxxx wrote:
Of course, the compiler is responsible for the final performance. Anyway, if one array type takes 272085 msec to accomplish a certain task, and another type takes 9 msec, it's pretty obvious that the first type will always be the slower one.
On 21/05/2013 at 07:34, xxxxxxxx wrote:
You can't blame me too much for not recommending the BaseArray, Frank.
Because it only rolled out in R14. And like most people, I'm still using older versions.
I've been wondering: what's the benefit of putting a class inside of a container?
The class is always there. And you can create an instance of it whenever you want. So I don't understand what benefit comes from stuffing it into a B.C.?
Where (in what case) would you need to use such a thing?
-ScottA | https://plugincafe.maxon.net/topic/7185/8213_storing-objects-in-a-list | CC-MAIN-2020-16 | refinedweb | 2,043 | 74.29 |
I have a hw prob here that im done coding w/.. i just get the wrong output about halfway and i know kind of what it is but i cant fix it. my thought is im making my function too complex.
thats the site that explains the hw prob and heres my code..
Code:
#include <vector>
#include <iostream>
#include <fstream>
#include <string>
using namespace std;
struct Process //Structure will hold all process information from the procces file
{
//CHECK THESE TYPES!!!
int procId; // process ID ... 1000.. 1001 etc
string name; // holds the name of the process
int cpuTime; // Total CPU time needed for process
int timeBurst; // Burst for each process
};
//will gather contents from stream and enter them into a vector of structs
void fillVector(ifstream&, vector<Process>&);
//Processes the schedule and outputs as it creates the schedules.
void procSched(vector<Process>&);
//----------------------------------------------
int main()
{
vector<Process> proc; //Vector of process structs
ifstream ifs; // filestream for process text
ifs.open("process.txt"); // open file and check for error while opening
if (!ifs)
{
cerr << "Error opening 'process.txt' \n ";
exit(1); //exit if error is found
}
fillVector(ifs, proc); //fill the vector with information from the stream
procSched(proc); //output the scheduling as it is calculated
return 0;
}
//-----------------------------------------------
void fillVector(ifstream& inputFile, vector<Process>& procVec)
{
Process tempProc; //temporary variable to fill process vector
//Loop through the file checking for each struct member
while (inputFile >> tempProc.procId >> tempProc.name
>> tempProc.cpuTime >> tempProc.timeBurst)
{
procVec.push_back(tempProc); // fill the vector with each temp member
}
}
void procSched(vector<Process>& procVector)
{
// totProc is the total number of Proccess that need to run
//size_t totProc = procVector.size();
int time(0); //will print the time each process began
while (!procVector.empty()) // Continue to process until vector is empty
{
for (size_t j = 0; j < procVector.size(); j++) // Loop through the vector
{
//The time is decreased by each processes burst
procVector[j].cpuTime -= procVector[j].timeBurst;
if (procVector[j].cpuTime <= 0) // If time is below 0, the process is exhausted
{
//output the process information with 0 being the final time left
cout << time << ' ' << procVector[j].procId << ' '
<< procVector[j].name << ' '
<< "0" << endl;
//The time the next process begins is incremented by the burst
//of the process just completed
time += procVector[j].timeBurst;
//delete the element of the vector
//that no longer runs as a process
procVector.erase(procVector.begin() + j);
}
else // if its not 0 the process is ongoing
{
//output the process information
cout << time << ' ' << procVector[j].procId << ' '
<< procVector[j].name << ' '
<< procVector[j].cpuTime << endl;
//The time the next process begins is incremented by the burst
//of the process just completed
time += procVector[j].timeBurst;
}
}
}
}
I know the prob is in my last function.. i think im either deleting something 2 quick or displaying 2 late. something to that effect... if u think i can do this more effectively plz tell me. | https://cboard.cprogramming.com/cplusplus-programming/69568-lil-help-progs-done-but-wrong-output-printable-thread.html | CC-MAIN-2017-13 | refinedweb | 470 | 62.88 |
Fire-and-forget in Service Fabric actors
At the recent Webscale Architecture meetup we discussed two implementations of the Actor model in the .NET ecosystem: Akka.NET and Azure Service Fabric Actors. One important discussion was around the Ask vs. Tell call model. With the Tell model, the Sender just sends the message to the Recipient without waiting for a result to come back.
The Ask model means the Sender will at some point get a response back from the Receiver, potentially blocking its own execution.
Tell: Fire-forget
This is the preferred way of sending messages. No blocking waiting for a message. This gives the best concurrency and scalability characteristics.
On the contrary, the default model for Service Fabric Actors is RPC-like Ask model. Let's have a close look at this model, and then see how we can implement Tell (or Fire-and-Forget) model.
Actor definition starts with an interface:
public interface IHardWorkingActor : IActor
{
    Task DoWork(string payload);
}
As you can see, the method does not return any useful data, which means the client code isn't really interested in waiting for the operation to complete. Here's how we implement this interface in the Actor class:
public class HardWorkingActor : Actor, IHardWorkingActor
{
    public async Task DoWork(string payload)
    {
        ActorEventSource.Current.ActorMessage(this, "Doing Work");
        await Task.Delay(500);
    }
}
This test implementation simulates the hard work by means of an artificial 500 ms delay.
Now, let's look at the client code. Let's say, the client receives the payloads from a queue or a web front-end and needs to go as fast as possible. It gets a payload, creates an actor proxy to dispatch the payload to, then it just wants to continue with the next payload. Here is the "Ask" implementation based on the Service Fabric samples:
int i = 0;
var timer = new Stopwatch();
timer.Start();
while (true)
{
    var proxy = ActorProxy.Create<IHardWorkingActor>(ActorId.NewId(), "fabric:/Application1");
    await proxy.DoWork($"Work ${i++}");
    Console.WriteLine($"Sent work to Actor {proxy.GetActorId()}, rate is {i / timer.Elapsed.TotalSeconds}/sec");
}
Note the `await` operator on every call. That means that the client will block until the actor work is complete. When we run the client, it's no surprise that we get a rate of about 2 messages per second:
Sent work to Actor 1647857287613311317, rate is 1,98643230380293/sec
That's not very exciting. What we want instead is to tell the actor to do the work and immediately proceed to the next one. Here's how the client call should look:
proxy.DoWork($"Work ${i++}").FireAndForget();
Instead of `await`-ing, we make a `Task`, pass it to some (not yet existing) extension method, and proceed immediately. It appears that the implementation of such an extension method is trivial:
public static class TaskHelper
{
    public static void FireAndForget(this Task task)
    {
        Task.Run(async () => await task).ConfigureAwait(false);
    }
}
The result looks quite different from what we had before:
Sent work to Actor -8450334792912439527, rate is 408,484162592517/sec
400 messages per second, which is some 200x difference...
The conclusions are simple:
- Service Fabric is a powerful platform and programming paradigm which doesn't limit your choice of communication patterns
- Design the communication models carefully based on your use case; don't take the defaults for granted
Maven instance-wide endpoint returns 403 for anonymous access
Summary
Maven instance-wide endpoint returns 403 for anonymous access
Steps to reproduce
- Create a package in a public project
- Create a maven project that uses that package as a dependency
- Do not add any token to your `settings.xml`
- Run mvn package
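For context, a minimal consumer-side repository declaration for this scenario might look like the following `pom.xml` fragment. The project ID `123` and the gitlab.com host are placeholders I have filled in for illustration; they are not values taken from this report:

```xml
<repositories>
  <repository>
    <!-- GitLab's project-level Maven endpoint; 123 stands in for the real project ID -->
    <id>gitlab-maven</id>
    <url>https://gitlab.com/api/v4/projects/123/packages/maven</url>
  </repository>
</repositories>
```

With no `<server>` credentials in `settings.xml`, fetching the dependency from this endpoint on a public project exercises exactly the anonymous-access path that is returning 403 here.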
What is the current bug behavior?
mvn package fails saying that the endpoint returned 403
What is the expected correct behavior?
mvn package should be able to download the dependency successfully
Output of checks
Results of GitLab environment info
System information
System:          Ubuntu 18.04
Proxy:           no
Current User:    git
Using RVM:       no
Ruby Version:    2.6.3p62
Gem Version:     2.7.9
Bundler Version: 1.17.3
Rake Version:    12.3.2
Redis Version:   3.2.12
Git Version:     2.22.0
Sidekiq Version: 5.2.7
Go Version:      unknown
GitLab information
Version:         12.2.0-ee
Revision:        30032e00da9
Directory:       /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:      PostgreSQL
DB Version:      10.9
URL:
HTTP Clone URL:
SSH Clone URL:   [email protected]:some-group/some-project.git
Elasticsearch:   no
Geo:             no
Using LDAP:      no
Using Omniauth:  yes
Omniauth Providers:
GitLab Shell Version: 9.3 >= 9.3.0 ? ... OK (9.3 ... OK
Checking Gitaly ... Finished
Checking Sidekiq ...
Sidekiq: ... Running? ... yes
Number of Sidekiq processes ...? ... skipped (no tmp uploads folder yet)
Init script exists? ... skipped (omnibus-gitlab has no init script)
Init script up-to-date? ... skipped (omnibus-gitlab has no init script)
Projects have namespace: ... 1/1 ... yes 4/2 ... yes 1/3 ... yes
Redis version >= 2.8.0? ... yes
Ruby version >= 2.5.3 ? ... yes (2.6.3)
Git version >= 2.22.0 ? ... yes (2.22.0)
Git user has default SSH configuration? ... yes
Active users: ... 2
Elasticsearch version 5.6 - 6.x? ... skipped (elasticsearch is disabled)
Checking GitLab App ... Finished
Checking GitLab subtasks ... Finished
Possible fixesPossible fixes
@sahbabou and I debugged this; for some reason the call to authorize! is the one returning forbidden:
Reading the policy code that should be authorised:
ZD (internal) | https://gitlab.com/gitlab-org/gitlab/issues/32102 | CC-MAIN-2020-16 | refinedweb | 366 | 53.58 |
June 13, 2007
This article was contributed by Sébastien Cevey
The number of music players on Linux has been steadily increasing
lately, but while these projects have been getting more and more
polished, we have yet to see revolutionary improvements in terms of
user experience. Indeed, the trend has been to borrow as many
features as possible from other projects, rather than questioning the
reasons behind their design.
This article describes XMMS2's attempt to address long-standing
limitations of music players, through its new support for
Collections.
I have been concerned with the state of music players for a long
time. Two years ago, I wrote a Manifesto
for a Better Music Player. Although my ideas have evolved since
then, the general conclusions of that article still hold.
One important argument I made is that the design of a music player
should focus on the users' needs, rather than on a list of well-known
features. All the traditional features (playlist, media library,
cover browsing, etc) and hacks (play queue, random mode, etc) stem
from the needs users have for:
Non-linear playback was first introduced in a crude
form as the "random mode", directly inspired from legacy CD
players. iTunes later
popularized its "Party
Shuffle" mode, which solved the unpredictability of playback by
maintaining a queue of randomly selected songs. What we are still
waiting for, though, is a smarter mode that would also take into
account beat, artist similarity, or other semantic information.
Music players that are based on a media library typically provide a
search feature. Unfortunately, the power of the search
function is often
hindered by annoyingly complex forms used to choose the fields to
query. Few developers seem to have noticed the success of Google's
search interface: minimalistic, but enriched by rating heuristics and
a rich syntax for advanced users.
The other axis required by our ever-growing music libraries is
browsing. Media library browsing is always present in
some form, although mostly simplistic and uninspired. When they are
not cloning iTunes genre/artist/album filters or the browsing of cover art,
most music players simply present the users with the list of all their
media in a plain multi-column layout. Easy to implement, but hard on
the eyes for the users. Interestingly, Foobar2000 (freeware) is the
only popular player to allow a rich
customization of the layout, which greatly improves readability.
The lack of features that help users organize their
media library contributes to the difficulty of addressing the two
previous issues. In the physical world, users can arrange their CDs
spatially in their own personal way (by artist, date of release, mood,
etc), set a couple of albums aside for playing at a party, or
highlight their latest acquisitions on a shelf. This lets them build a
cognitive map of the location of items. On computer-based music
players, however, they are barely provided with the possibility to create
playlists, possibly dynamic, but seldom integrated well enough to be
used powerfully. Even bare files have richer organizational
possibilities, using directories!
The reason behind these limitations is not that they are inherently
unsolvable. The truth is that a lot of effort is required to implement
new approaches in any of these fields. Experimentation, either
conceptual or in terms of interface, is expensive.
The goal of Collections is to address this problem by creating a
common abstraction layer. Search, browsing and organization all share
one property: they act on subsets of the media library. Computers are
especially good at handling sets, but music players haven't really
exploited that fact yet.
A collection is defined as a subset of the media library.
This set of media (songs) can be dynamic, for instance "All media by
Kraftwerk released prior to 1980" or "All media added to the media
library last week, except those by Justin Timberlake". A static set,
for instance hand-picked media selected for parties, is just a special
case of dynamic sets.
Note that a collection is not merely what some players call a "Smart
Playlist" (or "Dynamic Playlist"). A "Smart Playlist" is only used to
play an arbitrary list of media, while a collection is a generic
representation of a set of media. For instance, this includes the
results of a search, a filtered view of the media library, the list of
tracks from a given album, etc.
Because a collection is an abstract representation, it can be used
ubiquitously throughout all the features of the music player:
browsing, searching in the media library or the playlist, enqueuing,
jumping, etc. A collection can also be saved on the server, thus
allowing the users to organize their music and reuse their selection in
homogeneous and flexible ways.
The XMMS2 project turned out to be the perfect ground to implement
collections. Unlike its popular predecessor XMMS, XMMS2 hasn't gathered much
attention yet. However, it features all that you would expect from a
recent music player: a media library, support for many audio formats
and multiple platforms (Linux, *BSD, OS X, Windows, etc), bindings for
many languages (C, C++, Ruby, Python, Perl, Java), and a friendly
community open to innovation.
In addition, the player was designed according to a client-server
architecture, so that the server is responsible for all the boring
work (audio decoding, media library management, tag extraction, etc),
while any flavor of user interface can be implemented as a client
connected to the server, possibly across the network.
Collections have been implemented in XMMS2 as a student
project during the Google Summer of Code 2006, and finally merged
into the stable tree on May 20, 2007 as part of the DrJekyll
release.
Support for collections was implemented on the server as a layer
above the media library, and playlists are exposed to the clients
through a collections API.
This API allows clients to save collections on the
server, query the media library, enqueue the content of a collection,
etc. Thus, although the user interface depends on the client, the
server and the clients all share the same abstract representation.
Clients are also freed from the need to generate complex SQL queries
themselves; instead, they can easily build a (DBMS-agnostic)
collection and the tedious query is performed by the server. In
addition, a parser is provided to generate a collection from a string
with an enriched search syntax.
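While the real XMMS2 parser and its grammar live in the client library, the flavor of the idea can be sketched with a toy in Python: parse a tiny pattern subset of space-separated `field:glob` terms (implicit AND, with `*` as a wildcard) and evaluate it against an in-memory list of media dictionaries. Everything here — the function names, the pattern subset, the sample library — is illustrative and is not the actual XMMS2 API:

```python
import fnmatch

def parse(pattern):
    """Turn 'artist:Kraft* year:1978' into a list of (field, glob) filters."""
    filters = []
    for term in pattern.split():
        field, _, glob = term.partition(":")
        filters.append((field, glob))
    return filters

def evaluate(filters, library):
    """Return the media entries matching every filter (implicit AND)."""
    return [m for m in library
            if all(fnmatch.fnmatch(str(m.get(f, "")), g) for f, g in filters)]

library = [
    {"artist": "Kraftwerk", "title": "The Model", "year": "1978"},
    {"artist": "Kraftwerk", "title": "Tour de France", "year": "1983"},
    {"artist": "Autobahn FC", "title": "Kickoff", "year": "1978"},
]

# Only "The Model" satisfies both terms of the pattern.
print(evaluate(parse("artist:Kraft* year:1978"), library))
```

A real implementation additionally has to handle quoting, OR/NOT, parentheses and field-specific operators, but the core shape — string in, collection structure out — is the same.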
Collections make it essentially trivial to browse and search the media
library. Moreover, advanced features are either natively available or
very easy to implement: iTunes-like Party Shuffle, recursive filtering
(e.g. search inside the playlist), display Top 10 or never played
songs, changing the equalizer settings if the playing song is in a
particular collection (e.g. "Jazz Vinyl rips"), etc.
Strictly speaking, collections are implemented as a
directed acyclic graph (DAG), each node of which is a collection
operator. In fact, because the structure is recursive, each node of
the graph corresponds to a collection. This model was chosen to
emphasize the aggregated nature of users' music collections.
Collection operators come in four different flavors:
The set operators take an arbitrary number of
operands and returns the collection obtained by applying the
corresponding set operation to them. For instance, "any music by The
Beatles or any music by The Rolling Stones". Available set
operators: union, intersection, complement.
The filter operators enforce conditions on properties
of the media; the resulting collection only contains the media that
match the filtering attributes. For instance, "all the songs with
'stairway' in their title". Available filter operators: equals,
match (partial matching of strings using wildcards), larger/smaller
(for numbers), has (checks whether a property is present).
The list operators are a bit special. The basic list
operator (called "idlist") does not accept any operands; instead, it
simply generates the collection corresponding to the custom list of
media it contains. Because list operators store static, ordered lists
of media, they are used as playlists in XMMS2. Available list
operators: list, queue (pop songs once they have been played), Party
Shuffle (takes an operand, used to randomly feed the list with new
entries).
The reference operator is simply used to refer to the
content of a saved collection or playlist. For instance, "all the
songs released in 2007 in the Foo playlist". A reference
operator is also used to refer to the whole media library (all media).
Now, let's illustrate all this with a sample collection structure:
The nodes represent collection operators, while edges simply connect
operands to operators.
Here, "All Media" is a reference to the whole media library, and we use
a Match operator to only keep media for which the artist has a name
starting by "A" (1). We then take the union (3) of this and the
content of the "Rock 90's" saved collection (2). The result is passed
as an operand to a Party Shuffle operator (4), which we save under the
name "Interesting" (5).
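To make the structure concrete, here is a small self-contained Python model of the operators used in steps (1)-(3) above, where each node evaluates to a set of media IDs. This is a toy for explanation only; the real operators live inside the XMMS2 server, the class names are my own, and the stateful Party Shuffle of step (4) is omitted:

```python
import fnmatch

class Match:
    """Filter operator (1): keep media whose field matches a glob."""
    def __init__(self, operand, field, glob):
        self.operand, self.field, self.glob = operand, field, glob
    def evaluate(self, mlib):
        return {i for i in self.operand.evaluate(mlib)
                if fnmatch.fnmatch(str(mlib[i].get(self.field, "")), self.glob)}

class Union:
    """Set operator (3): union of any number of operands."""
    def __init__(self, *operands):
        self.operands = operands
    def evaluate(self, mlib):
        out = set()
        for op in self.operands:
            out |= op.evaluate(mlib)
        return out

class Reference:
    """Reference operator: a saved collection, or the whole media library."""
    def __init__(self, saved, name="All Media"):
        self.saved, self.name = saved, name
    def evaluate(self, mlib):
        if self.name == "All Media":
            return set(mlib)
        return self.saved[self.name].evaluate(mlib)

mlib = {1: {"artist": "Autechre"}, 2: {"artist": "Bowie"}, 3: {"artist": "Aerosmith"}}
saved = {"Rock 90's": Match(Reference(None), "artist", "Bowie")}  # stand-in content
coll = Union(Match(Reference(saved), "artist", "A*"),   # steps (1) and (3)
             Reference(saved, "Rock 90's"))             # step (2)
saved["Interesting"] = coll                             # step (5): save it
print(sorted(coll.evaluate(mlib)))                      # -> [1, 2, 3]
```

Because every node is itself a collection, saving `coll` under a name and referencing it from another graph works exactly like referencing the media library: the recursion is what makes the DAG composable.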
When the user plays the "Interesting" playlist, songs are popped from
the list as soon as they are finished, and new songs matching the
operand collection (3) are automatically enqueued, so that the list
always contains at least 20 items. This is specified by the "size"
attribute of the Party Shuffle. Of course, the user can also edit the
playlist and add tracks to it manually.
This is only one example of collections among many. As you can see,
the modular structure of collections allows virtually unlimited
possibilities. As such, they have been tightly integrated both on the
server and in the client API.
On the server, a dedicated module is responsible for handling
collection features. When a collection is queried, it serializes the
structure into an SQL query, runs it in the media library and returns
the matching media, either as a list of media ids or hashes containing
the requested media properties. When a collection is saved on the
server, it is added to the collection DAG and kept in memory while the
server is running. On shutdown, the whole DAG is serialized into the
database. Note that playlists are nothing but collections, albeit
restricted to list operators and saved into a dedicated namespace.
In the client API, collections introduced many important
changes. First, executing raw SQL queries has been deprecated; all
queries are now to be performed using collections. Collection data
structures can be built either using a set of dedicated functions, or
by calling the collection parser on a string given by the
user. Finally, many XMMS2 methods have been extended to support
collections (e.g. to enqueue media) and new methods allow clients to
query, save and retrieve collections from the server.
If you want to learn more about the concept of collections, please
have a look at the
collections concept page
on the XMMS2 wiki. For more details about the
implementation, check the
collections design page and the
API documentation.
Several
XMMS2 clients have started offering features based on collections,
including Abraca (GTK2
client) and gntxmms2
(console client). Other clients have ported search and browsing to the
collections API: Esperanza
(Qt4 client), gxmms2
(GTK2 client) and the official command-line interface.
Hopefully, client developers will start exploring new directions now
that collections are in the main release. The XMMS2 CLI client has
already been scheduled
for a full rewrite.
Several improvements are also expected to address current limitations
of the collections implementation. One limitation is that all
collections are treated equally as media sets; if a filter is applied
on a playlist, the order and duplicated items will be lost. A smarter
internal distinction between lists and sets inside the DAG is in the
works. An ordering collection operator could then be introduced to
transform a set into an ordered list, as well as an operator to select
subsequences of such lists, similarly to the SQL LIMIT operation. They
could be used to create a collection containing the "list of the 20
most recently added media". The SQL query generator could also be
further optimized, unless we decide to replace the database backend
completely.
Collections have just made it into the official XMMS2 distribution,
but people already use them through features like search, Party
Shuffle or groups of songs saved in the media library. They are a
powerful toy for developing new features in the clients and hopefully
helping users organize and use their music library.
It's an exciting time to come up with fresh ideas in the XMMS2 world,
and I hope the rest of the developers in the music player community
will take the time to reflect on and discuss all these questions
earnestly!
Collections in the XMMS2 music player
Posted Jun 14, 2007 9:19 UTC (Thu) by vblum (guest, #1151)
Posted Jun 14, 2007 15:57 UTC (Thu) by nix (subscriber, #2304)
The case we'd generally like is `shuffle entire works'. You might be able to do this in the structure described by defining `media' to be `a list of MP3 files in a specific directory' or something, but this seems crocky. Essentially you'd want a way to chain media together (into little lists which may themselves have sublists and so on, so you get a tree of media?) such that operators such as sorting only apply to the first element in the list (or to a selected one? XPath for music collections! ;) ).
Rockbox acquired something that sort of works in this area by providing random directory autochanging. This is a kludge but it sort of works if you turn shuffle off and keep one work in each directory: but it's a lot less elegant than having a shuffling/selection mechanism that actually *understands* that certain media should only be played in sequence with others (unless explicitly requested: maybe you *want* to listen to only the third movement of some work, but generally speaking you'd want to start with the first movement and play to the last: shuffled movements are basically never desirable).
Posted Jun 14, 2007 16:54 UTC (Thu) by scevey (guest, #45734)
Good point.
This has been discussed in the XMMS2 community, and so far the idea was to introduce a "selector property" attribute to the Party Shuffle. This attribute would define what property is used to randomly enqueue media from the operand collection. For instance, if you set it to "album", whenever the Party Shuffle needs to be refilled, it will select a set of song sharing the same "album" property, thus enqueing a full work rather than a single song. The current behaviour would still be activated by default by setting the "selector property" to "id".
It's not implemented yet, but it's definitely planned (Issue #1352)!
Thanks for your input!
Posted Jun 15, 2007 19:29 UTC (Fri) by nix (subscriber, #2304)
I wonder if what we have isn't a concept of a `subcollection', where
collections of files can be grouped into a `subcollection' which has its
own operators which apply to its members by default. So the shuffling
would apply to a bunch of individual things, and a bunch of subcollections
(`albums'), and *those* then have, by default, play-in-sequence operators
applied to them.
(But maybe I'm babbling and should shut up.)
Posted Jun 14, 2007 17:43 UTC (Thu) by thoffman (subscriber, #3063)
Note also it isn't just classical music that has this restriction. I have a bunch of "non-stop dance mixes" of various techno and electronica which are meant to be played all the way through the whole CD with no breaks between songs. Or, at least between most songs.
Although, long stretches from a single CD can kind of defeat the purpose of shuffle play... maybe an option like: When shuffle play is enabled, only shuffle in the middle of this sequence with a given probability. For instance, p=0.25 would on average break the sequences into four-track lengths.
Hmmm.... you know, that would be useful for non-sequenced music too. I sometimes find it disconcerting when shuffle play jumps between radically different styles of music. If I could adjust a probability (or some other tunable) so that once shuffle chose a track from a CD, there would be a strong probability that it would then play at least one or two more tracks from that CD before going on to a different CD, I'd probably use that..
Of course, with all these features there has to be a really, really convenient user interface, otherwise it's just too much trouble to bother with. On my IPod I usually end up shuffle-playing within a single genre just because it's easier than putting together a playlist.
I mainly want to listen to music, not futz around in a clunky UI for hours.
Posted Jun 19, 2007 7:34 UTC (Tue) by scevey (guest, #45734)
I can see different ways to solve your complex shuffling use case. It's quite similar to nix's requirements in the post above.
Because we're trying to avoid adding too much complexity to the server, different new projects were discussed to make PShuffle more customizable. The first one was a Lisp interpreter that would allow all collection operators to be written in Lisp, and clients could write and save their own on the server. Obviously, it's far from being trivial, and we didn't get enough GSoC2007 slots to have it sponsored this year. An alternative could be to move the PShuffle inside a service client, which is a new GSoC2007 project by Ning Shi, which I'm mentoring. Clients could then rely on a more customizable client to do the shuffling, or even write their own shuffling service with special rules like the ones you and nix proposed.
As long as you either tag (using media properties) the media or put them in a dedicated collection, i.e. as long as they can be identified using collections, there isn't any reason why crazy shuffling methods wouldn't be possible :-).
The filtering operators work on media properties, which are automatically extracted from tags (ID3, Ogg comments, etc). So your use case is already supported by the current state of Collections!
Of course, with all these features there has to be a really, really convenient user interface, otherwise it's just too much trouble to bother with.
Of course, it's an important point. Work is still needed in that area, but it's certainly possible. So far, the text pattern syntax is one powerful way to build collections.
For instance:
(artist:"Pink Floyd" l:Meddle) OR (genre:Rock AND +compilation) OR (title:The* in:Playlists/Foo (NOT year>2000))
Builds a collection containing Meddle by Pink Floyd, all media that have "Rock" as genre and which are flagged (using a media property) as compilations, and all media in the Foo playlist whose title starts by "The" and which were release prior to 2000. Note: the AND is implicit when several conditions are put in a sequence.
Posted Jun 20, 2007 21:07 UTC (Wed) by nix (subscriber, #2304)
(Read McCarthy's original paper sometime, it's brilliant. That most
ancient of Lisps isn't one I'd recommend writing an interpreter for today,
but it does show how easy it can be.)
(btw, I'm not really sure `play these as a unit' really counts as
`special'. It's something *everyone* listening to classical music will
want, for example, and we're not as rare as all that. Not just classical
stuff, either: Steve Reich's not exactly classical, but shuffling the
movements of _The Desert Music_ would leave you with a meaningless
jumble.)
Posted Jun 21, 2007 7:03 UTC (Thu) by scevey (guest, #45734)
I qualified your use-case (play these as a unit, and these not) as special because I don't know any player (physical or computer-based) that would support it. However using collections it would be quite trivial to implement it in the client, as you have all the facilities to do it. A 30-line Python script would probably do.
Posted Jun 21, 2007 20:09 UTC (Thu) by nix (subscriber, #2304)
And Rockbox can support this use case, sort of (the random
auto-change-directory feature).
And collections are deeply cool: combined with the client-server part of
XMMS2 (so I don't have to put up with the IMHO odious skins) it looks like
it might be a worthwhile replacement for/addition to MPD on my local
net :)
Posted Jun 24, 2007 10:42 UTC (Sun) by scevey (guest, #45734)
It does not offer all these features per se, but if you want to allow advanced operators that, for instance, use Artist similarity from Last.Fm, or other data feeds, you would need them.
Anyway, right now I think we'll be focusing on Collections API + Service clients for this kind of thing.
Please feel free to let us know if you have suggestions or comments on XMMS2, either through the Mailing-List or the IRC channel (#xmms2 on freenode)!
Posted Jun 21, 2007 18:47 UTC (Thu) by TRauMa (guest, #16483)
It also has dynamic playlist based on search and filter terms, although no SoC-Projects and new terms like "Collection".
Posted Jun 28, 2007 15:04 UTC (Thu) by scevey (guest, #45734)
In the case of collections, the challenge was to abstract the concept and the API so that it could be used by all clients ("interfaces", if you want) and all the internal parts of XMMS2 transparently. In fact, it's only a building block for more advanced features (Party Shuffle, tagging, mlib organization, etc).
Posted Jun 15, 2007 23:19 UTC (Fri) by wolfgang.oertl (subscriber, #7418)
You're probably aware of gjay for xmms () which implements at least part of this, i.e. beat and frequency analysis. This project hasn't had any releases in the last 3 years or so, though...
Posted Jun 18, 2007 11:27 UTC (Mon) by KaiRo (subscriber, #1987)
Posted Jun 19, 2007 7:38 UTC (Tue) by scevey (guest, #45734)
Posted Jun 21, 2007 16:51 UTC (Thu) by leandro (guest, #1460)
I am not sure the current DBMS we're using (SQLite) would be appropriate
That is the problem with DBMS-agnostic applications: the lowest common denominator. I think what Apple did, in using the full-featured yet light enough PostgreSQL everywhere, just makes more sense.
Posted Jun 28, 2007 14:58 UTC (Thu) by tru (guest, #30161)
Posted Jul 27, 2012 21:41 UTC (Fri) by leandro (guest, #1460)
Not ðat it matters much, noƿ ðat Apple effectively gave up on ſervers.
Posted Jul 6, 2007 17:26 UTC (Fri) by KaiRo (subscriber, #1987)
Yammi has/had used XML as the storage backend, probably loading the whole "database" into memory, which obviously might sound a bit heavy on memory but is very fast (and I never saw memory problems with it).
The fuzzy search code it used is at... and is under the GPLv2 (but I guess Oliver might be open to relicensing if you need a different license).
Linux is a registered trademark of Linus Torvalds | http://lwn.net/Articles/237952/ | CC-MAIN-2013-20 | refinedweb | 3,876 | 58.21 |
21. [Hindi]Machine Learning : Subplot Plot in Matplotlib | 2018 |Python 3
🔵Don’t forget to Subscribe:
In this video tutorial I am going to teach you how subplot works in Matplotlib.
Code link :
Finally, we are launching our Mastery in Machine Learning with Python 2020 training program. Those who want to join, please check out the link mentioned below:
Registration link for Machine Learning Training:
Instagram:
Twitter:
Google+ :
#Matplotlib#Subplot#machinelearning#Python3#Python#trending
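For readers who land here without watching the video, here is a minimal sketch of the topic (my own example, not the code from the tutorial): `np.linspace` generates evenly spaced x-values, and `plt.subplot(rows, cols, index)` selects one cell of a grid of axes.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # headless backend so this runs without a display
import matplotlib.pyplot as plt

x = np.linspace(0.0, 5.0, 100)   # 100 evenly spaced points between 0 and 5
y1 = np.cos(2 * np.pi * x) * np.exp(-x)
y2 = np.cos(2 * np.pi * x)

plt.subplot(2, 1, 1)             # grid of 2 rows, 1 column; select the 1st cell
plt.plot(x, y1)
plt.title("Damped oscillation")

plt.subplot(2, 1, 2)             # same grid, 2nd cell (the bottom plot)
plt.plot(x, y2)
plt.title("Undamped oscillation")

plt.tight_layout()
plt.savefig("subplots.png")
```

So `plt.subplot(2, 1, 2)` simply selects the second (bottom) cell of a two-row, one-column grid before the following `plot` call draws into it.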
Source
Comment List
Very useful video , thnx sir👍
sir, had u mentioned tkinter and how to create new window using python in this series.
Sir, what does "plot" actually mean?
Sir, I think you need to cover the basics of the maths first, or at least tell us that it's maths we can go learn and come back to,
because we get so confused.
Sir why linspace , why not arrange?
Sir, could you make videos about the maths behind machine learning, if you can?
I didn't understand anything at all; please explain it along with the full definitions.
whats the meaning of x1 =np . linspace(0,0,5.0)?
whats the meaning of plt.subplot(2,1,2)?
Keep going
Sir ! I Am REally SOrry YOu r JUst Teaching Syntax … 🙁 , Not fu**ing library.
Sir I cant understand the functions u r writing like y1 and y2… pls explain
Help! help! help! following code that plot cos(x) trigonometry function using numpy and matplotlib libraries:
import numpy as np
import matplotlib.pyplot as plt
t = np.arange(0.0,2.0,0.01)
s = 1 + np.cos(2*np.pi*t) # I have no idea about this statement = 1 +np.cos(2*np.pi*t)
plt.plot(t,s,'*')
plt.xlabel('Time(t)')
plt.ylabel('Voltage(mv)')
plt.title('Cosin wave Plot(cos(x))')
plt.grid()
plt.show()
sir how it gives 2 rows one column?
Nice sir, just will you explain purpose of every method and what is the parameters of methods.
Thanks
Can you explain a little bit about linspace?
Thank you
hello sir..what is the use of linspace….can you pls explain???
👌👌👌👌👌ooosm sir ji thanks a lot
but I'm confused about where to use cos and where to use sin; please guide me
i cant do the second one sir
thank you
Brother, I like your video and you explain it well, but I request that you make the videos 5 to 7 minutes long, or reduce the time as much as you can.
It's just a suggestion; by the way, you explain it very well.
Thank you 🙂
Sir, how can I plot a live audio stream and live data?
☺good got notification | http://openbootcamps.com/21-hindimachine-learning-subplot-plot-in-matplotlib-2018-python-3/ | CC-MAIN-2021-25 | refinedweb | 429 | 72.46 |
15 February 2011 20:07 [Source: ICIS news]
TORONTO (ICIS)--German fertilizer major K+S has sufficient financial strength to develop its $2.5bn (€1.9bn) potash project in Canada.
K+S acquired the project through its takeover of Canadian developer Potash One.
Carsten Muller, an analyst at Berlin-based FM Research, said K+S had a “comfortable cash buffer” and could raise additional debt finance to develop the project, if necessary.
Also, the money would not be needed immediately but rather would be raised over several years, Muller said. First potash from the Canadian project is expected to be produced in 2015, at the earliest, K+S said in November when it announced the takeover.
Muller also said K+S would benefit from rising potash prices on the back of higher global prices for agricultural commodities and food.
K+S now holds over 90% of Potash One, a level that would allow K+S to squeeze out the remaining shareholders under Canadian corporate law, Muller said.
K+S is one of a number of firms planning potash projects | http://www.icis.com/Articles/2011/02/15/9435670/k-s-can-handle-2.5bn-canada-potash-investment-analyst.html | CC-MAIN-2014-35 | refinedweb | 171 | 60.04 |
Re: [rng-users] Lets standardize PI for associating Relax NG schema with XML document
- [I'm forwarding message from Hussein because Yahoo is refusing his emails.]
-------- Original Message --------
Subject: Re: Lets standardize PI for associating Relax NG schema with
XML document
Date: Fri, 15 Jul 2005 08:16:34 -0000
From: hussein_shafie <hussein_shafie@...>
To: Jirka Kosek <jirka@...>
Hello,
I'm Hussein SHAFIE, the project manager of XMLmind XML Editor.
Here's what we have implemented in our XML editor (see):
====================================================================
<?xxe-relaxng-schema
location=anyURI
[ name=non empty token ]?
[ compactSyntax=boolean ]?
[ encoding=any encoding supported by Java™ ]?
?>
* location
Required. Specifies the URL of the RELAX NG schema.
This location may be resolved with the help of an XML catalog.
* name
A unique name for the RELAX NG schema (similar to the public ID of a
DTD). Without such name, a RELAX NG schema cannot be cached.
When possible, the ``target namespace'' of the RELAX NG schema is a
sensible choice for this attribute.
* compactSyntax
Specifies that the RELAX NG schema is written using the compact
syntax. Without this attribute, if location has a "rnc" extension, the
schema is assumed to use the compact syntax, otherwise it is assumed
to use the XML syntax.
* encoding
Specifies the character encoding used for a RELAX NG schema written
using the compact syntax. Ignored if the XML syntax is used. Without
this attribute, the schema is assumed to use the native encoding of
the platform.
Example:
---
<?xxe-relaxng-schema name="-//OASIS//RELAX NG DocBook V4.3//EN"
location=""
compactSyntax="true" encoding="US-ASCII" ?>
---
====================================================================
All pseudo-attributes except "name" are really needed in order to have
something that works.
We do not take this proprietary processing-instruction very seriously
because we firmly believe in the implicit association of the instance
with its schemas (approaches such as Namespace Routing Language (NRL)
--).
However,
* such processing-instructions is trivial to understand and therefore
encourages experimenting with RELAX NG;
* if a simple standard is specified, we'll of course replace our
proprietary PI by the standard one.
--
Hussein SHAFIE, hussein@...,
Pixware, Immeuble Capricorne, 23 rue Colbert,
78180 Montigny Le Bretonneux, France,
Phone: +33 (0)1 30 60 07 00, Fax: +33 (0)1 30 96 05 23
- MURATA Makoto (FAMILY Given) wrote:
> If the camp trying to standardize PIs is not the majority
> of the RELAX NG community, I do not think that PIs will take off.
>
> Here is my understanding of the current status. Please let me know
> if I misinterprets somebody.
>
> For schema-associating PIs
> Jirka Kosek
> Robin Berjon
> George Cristian Bina

Sorry to answer so late, I've been on vacation. Just for the record, if
you are going to be counting heads in the RNG community, I think I would
be best counted as "neutral". My take on this is that a schema PI is
be best counted as "neutral". My take on this is that a schema PI is
just as bad an idea as a stylesheet PI, which is to say that it's most
of the time a very bad idea (and in the absence of a processing model,
dreadfully underspecified in its interactions with other specs at that),
but *if* people are going to be doing it anyway (as seems to be the
case) then I would prefer that there is a standard made by people who
understand the issues and limitations of this approach rather than ad
hoc proprietary options mades by people who are probably smart and
probably understand some of the problems, but won't benefit from the
head-banging that some form of community standard would get (or rather,
is getting).
--
Robin Berjon
Senior Research Scientist
Expway,
John Meagher commented on XMLBEANS-60:
--------------------------------------
This is useful when using schema extension. Some users only need to know about the base types
(if they're basically just pass throughs). The end consumers and producers of the xml need
to know about the specific types. It makes sense to generate the code for the base type schemas
only once then have a separate code generation task for each specific derived type.
> New option for namespace excludes
> ---------------------------------
>
> Key: XMLBEANS-60
> URL:
> Project: XMLBeans
> Issue Type: New Feature
> Components: Compiler
> Affects Versions: unspecified
> Reporter: Sal Campana
> Assigned To: Radu Preotiuc-Pietro
> Fix For: TBD
>
>
> SchemaCompiler should provide a way for certain namespaces to be excluded from compilation.
> As it stands today, regeneration of schemas is dependent on a combination of namespace
and location.
> A typical scenario is that the same schema file is located in 2 different locations.
The schema from the first location has been precompiled and included in the classpath. When
compiling the schema from the second location, SchemaCompiler should be able to determine that
these are the same file and not regenerate. Since the combination of namespace and location
is being used to determine if a schema has been generated, and since the locations are different,
the schema is generated twice and may cause classloader problems at runtime.
> We have 2 different suggestions on how this issue may be addressed:
> 1. Provide the ability to specify 1 or more schemaExclude namespace options. This is
similar to how Axis' Wsdl2Java addresses this issue.
> and/or
> 2. Provide an option to always ignore the schema location (Global) when determining if
the schema has already been generated.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected] | http://mail-archives.apache.org/mod_mbox/xmlbeans-dev/200705.mbox/%3C30611363.1178111595913.JavaMail.jira@brutus%3E | CC-MAIN-2017-22 | refinedweb | 315 | 54.52 |
How can I get a fixed sampling rate of 1 ksps when interfacing the MCP3008 (10-bit ADC) with a Raspberry Pi using SPI?
Whenever I take data from the MCP3008, my sample rate keeps changing. Sometimes it’s 24ksps, sometimes 200sps. I want to keep this fixed. Python Code for SPI communication on raspberry pi:
import time
import sys
import spidev

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 3600000

def readadc(adcnum):
    if (adcnum > 7) or (adcnum < 0):
        return -1
    r = spi.xfer2([1, (8 + adcnum) << 4, 0])
    adcout = ((r[1] & 3) << 8) + r[2]
    return adcout

ch0 = 0
output_file = open("adc_samples.txt", "w")  # file handle (missing from the original snippet)
seconds = time.time()
while time.time() < seconds + 1:
    pc_value = readadc(ch0)
    print pc_value
    output_file.write("%s \n" % pc_value)
Thanks for your help!
- What does “a sampling rate of 1KHz” mean? You need to talk in terms of KSPS (thousands of samples per second) and the max for the MCP3008 is 200KSPS according to MCP3008 datasheet. Also what does this have to do with a Raspberry Pi. – Dougie Apr 4 at 14:05
- I think raspberrypi.org/forums is the better place to ask. This question has been asked and answered many times. – joan Apr 4 at 14:11
- @Dougie, I’ve edited the question. Is it clear now? The max is 200ksps, but I want it at a fixed value of 1ksps. Any insight? – Supriya Asutkar Apr 4 at 14:18
Question
Can I fix the MCP3008 sample rate?
Short Answer
Yes, you can. You make one conversion by one SPI writing/reading. In other words, more SPI write/read means more frequent samples/conversions.
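Since each SPI read triggers exactly one conversion, the rate drifts when the read loop free-runs. The practical fix is to pace each conversion against a deadline. Here's a minimal sketch of that pacing idea in Python (the read function below is a dummy stand-in for a real `readadc` call, and actual jitter still depends on the OS scheduler):

```python
import time

SAMPLE_PERIOD = 0.001  # 1 ksps -> start one conversion every 1 ms

def sample_fixed_rate(read_fn, n_samples, period=SAMPLE_PERIOD):
    # Pace calls to read_fn so one conversion begins each `period` seconds,
    # instead of sampling as fast as the loop happens to run.
    samples = []
    deadline = time.monotonic()
    for _ in range(n_samples):
        samples.append(read_fn())
        deadline += period
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)  # sleep off the rest of this sample slot
    return samples

# Dummy reader standing in for readadc(0):
data = sample_fixed_rate(lambda: 512, 5)
print(len(data))  # 5
```

On a non-realtime OS like Raspbian, sleep-based pacing holds an average rate of 1 ksps well, but individual samples can still jitter; for hard timing you'd move the pacing into hardware or a realtime thread.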
Long Answer
I would suggest you skim through the datasheet quickly once, and read very slowly Section 5.0, Fig 5.1, and 5.2 a couple of times. I have made a summary below to refresh your memory.
You stare at my summary for a couple of minutes, then I will explain.
You can find all the code for this post at github.com/uidotdev/react-router-v5-server-rendering.
For max knowledge gain, we're not going to use Create React App, so we'll have to roll our own configuration. For the sake of keeping this tutorial as focused as possible, I'll paste the
webpack.config.js file and the
package.json file below then highlight the important parts.
// webpack.config.js
module.exports = [browserConfig, serverConfig]
Notice we have two different configurations: one for the browser and one for the server.
const browserConfig = {
  mode: "production",
  entry: './src/browser/index.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  module: {
    rules: [
      { test: /\.(js)$/, use: 'babel-loader' },
      { test: /\.css$/, use: [ 'css-loader' ] }
    ]
  },
  plugins: [
    new webpack.DefinePlugin({
      __isBrowser__: "true"
    })
  ]
}
The browser configuration.
We also use DefinePlugin to add an __isBrowser__ property to the global namespace (window) so we know when we're in the browser.
The server configuration.
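The server configuration code itself didn't survive formatting here. Based on the pieces the rest of the post relies on — `npm start` runs dist/server.js, the server links main.css, __isBrowser__ must be "false", and package.json below pulls in webpack-node-externals and mini-css-extract-plugin — it plausibly looks like the following sketch. Treat the exact values as assumptions rather than the post's verbatim config:

```javascript
const serverConfig = {
  mode: "production",
  entry: './src/server/index.js',
  target: 'node',                 // bundle for Node, not the browser
  externals: [nodeExternals()],   // keep node_modules out of the bundle
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'server.js'         // the file `npm start` runs
  },
  module: {
    rules: [
      { test: /\.(js)$/, use: 'babel-loader' },
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: 'main.css' }),
    new webpack.DefinePlugin({
      __isBrowser__: "false"
    })
  ]
}
```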
{
  "name": "react-router-v5-server-rendering",
  "description": "Example for server rendering with React Router v5.",
  "scripts": {
    "build": "webpack -p",
    "start": "node dist/server.js",
    "dev": "webpack && node dist/server.js"
  },
  "babel": {
    "presets": [
      "@babel/preset-env",
      "@babel/preset-react"
    ],
    "plugins": [
      "@babel/plugin-proposal-object-rest-spread"
    ]
  },
  "devDependencies": {
    "@babel/core": "^7.9.0",
    "@babel/plugin-proposal-object-rest-spread": "^7.9.5",
    "@babel/preset-env": "^7.9.5",
    "@babel/preset-react": "^7.9.4",
    "babel-loader": "^8.1.0",
    "css-loader": "^5.0.1",
    "mini-css-extract-plugin": "^1.3.0",
    "webpack": "^5.4.0",
    "webpack-cli": "^4.2.0",
    "webpack-node-externals": "^2.5.2"
  },
  "dependencies": {
    "cors": "^2.8.5",
    "express": "^4.17.1",
    "isomorphic-fetch": "^3.0.0",
    "react": "^17.0.1",
    "react-dom": "^17.0.1",
    "react-router-dom": "^5.1.2",
    "serialize-javascript": "^5.0.1"
  },
  "version": "1.0.0",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "git+"
  },
  "author": "Tyler McGinnis",
  "license": "MIT",
  "homepage": ""
}
The big takeaway here is that npm run dev will run webpack && node dist/server.js. The webpack command kicks off the Webpack process and bundles our code, while node dist/server.js starts our node server.
The build and start commands are for hosting our server on a platform like Heroku.
Now that our build process is set up, let's start building our app. According to our webpack.config.js file, inside of our src folder we're going to have a server and a browser folder. Let's also add a shared folder for all the functionality which will be shared between the two.
// src/shared/App.js
import * as React from 'react'

export default function App () {
  return (
    <div>Hello World</div>
  )
}
// src/server/index.js
import express from "express"
import cors from "cors"

const app = express()

app.use(cors())
app.use(express.static("dist"))

app.get("*", (req, res) => {
  const markup = ReactDOM.renderToString(<App />)

  res.send(`
    <!DOCTYPE html>
    <html>
      <head>
        <title>SSR with RRv5</title>
      </head>
      <body>
        <div id="app">${markup}</div>
      </body>
    </html>
  `)
})

const PORT = process.env.PORT || 3000

app.listen(PORT, () => {
  console.log(`Server is listening on port: ${PORT}`)
})
Lastly, we need to include a script tag which references our bundle.js file and a link tag which references our main.css file, both located in dist and both created by Webpack.
<head>
  <title>SSR with RRv5</title>
  <script src="/bundle.js" defer></script>
  <link href="/main.css" rel="stylesheet">
</head>
<body>
  <div id="app">${markup}</div>
</body>
</html>`)

Note that on the client we use ReactDOM.hydrate rather than ReactDOM.render: since we already created the markup on the server, instead of recreating it on the client, React should preserve it while attaching any needed event handlers to the existing server rendered markup.
At this point, assuming you've already run npm install and npm run dev, when you visit localhost:3000 you should see "Hello World". That "Hello World" was initially rendered on the server; then, when it got to the client and the bundle.js file loaded, React took over.
💻 View the code or View the commit 💻
Cool. Also, anticlimactic.
Let’s mix things up a big so we can really see how this works. What if instead of rendering “Hello World”, we wanted
App to render
Hello {props.data}.
// src/shared/App.js
export default function App (props) {
  return (
    <div>Hello {props.data}</div>
  )
}

// src/browser/index.js
ReactDOM.hydrate(
  <App data='Tyler' />,
  document.getElementById('app')
)
// server/index.js
const markup = ReactDOM.renderToString(
  <App data='Tyler' />
)
💻 View the code or View the commit 💻
Great. So now we see “Hello Tyler” in the UI. Remember earlier when I mentioned that what you render on the server needs to be identical to what is rendered on the client? We can see this in action if we change one of the data props.
ReactDOM.hydrate(
  <App data='Mikenzi' />,
  document.getElementById('app')
)
💻 View the code or View the commit 💻

Now the server renders "Tyler" while the client renders "Mikenzi", and React complains that the server markup doesn't match the client. The fix is to serialize the initial data onto the global namespace (window) so the client can reference it.
...
import serialize from "serialize-javascript"

app.get("*", (req, res, next) => {
  const name = 'Tyler'

  const markup = renderToString(
    <App data={name} />
  )

  res.send(`
    <!DOCTYPE html>
    <html>
      <head>
        <title>SSR with RRv5</title>
        <script>window.__INITIAL_DATA__ = ${serialize(name)}</script>
      </head>
      <body>
        <div id="app">${markup}</div>
      </body>
    </html>
  `)
})
Cool. We’ve solved sharing initial data from the server to the client by using the
window object.
💻 View the code or View the commit 💻 a specific language. We’ll start off without any routing; then we’ll see how we can add it in using React Router v5.
The first thing we’ll want to do is make a function that takes in a language and, using the Github API, fetch the most popular repos for that language. Because we’ll be using this function on both the server and the client, let’s make an
api.js file inside of the
shared folder and we’ll call the function
fetchPopularRepos.
// shared/api.jsimport first fetch the popular repositories then call it after giving our React component the that new data. Instead of handling it in
App, let’s make a new component called
Grid that deals with mapping over all the repos.
// src/shared/Grid.jsimport * as React from 'react'export default function Grid ({ repos }) {return (>)}
Now we just need to modify our
App component to pass along
data as
repos to the
Grid component.
// shared/App.jsimport * as React from 'react'import Grid from './Grid'import "./styles.css"export default function App (props) {return (<div><Grid repos={props.data} /></div>)}
Solid. Now when our app is requested, the server fetches the data the app needs and the HTML response we get has everything we need for the initial UI.
Note for this commit I’ve included a bunch of CSS in
src/shared/styles.cssand them in
src/shared/App.js. Because this is a post about server rendering and not CSS, feel free to paste those into your app.
💻 View the code or View the commit 💻
At this point, we’ve done a lot, but our app still has a long way to go, especially around routing.
React Router v5 is a declarative, component-based approach to routing. However, when we’re dealing with server-side rendering with React Router v5, we need to abandon that paradigm and move all of our routes to a central route configuration. The reason for this is because both the client and the server be aware and share the same.
If you’re not familiar with URL Parameters, read URL Parameters with React Router v5 before continuing.
In the case of our app, we’ll have two routes -
/ and
/popular/:id.
/ will render the (soon to be created)
Home component and
/popular/:id will render our
Grid component.
// src/shared/routes.jsimport Home from './Home'import Grid from './Grid'const routes = [{path: '/',exact: true,component: Home,},{path: '/popular/:id',component: Grid,}]export default routes
Before we continue, let’s hurry and create the
Home component.
// src/shared/Home.jsimport * as React from 'react'export default function Home () {return <h2 className='heading-center'>Select a Language</h2>}import, we’ll know we need to invoke
fetchInitialData before we can return the HTML.
Let’s head back over to our server and see what these changes will look like.
The first thing we need to do is figure out which route (if any) matches the current requested URL to the server. For example, if the user requests the
/ page, we need to find the route which matches
/. Luckily for us, React Router v5.
💻 View the code or View the commit 💻
Try it out in your browser. Head to
localhost:3000/popular/javascript. You’ll notice that the most popular JavaScript repos are being requested. You can change the language to any language.
// src
src/browser/index.js since that’s where we’re rendering
App.
import * as React from 'react'import ReactDOM from 'react-dom'import App from '../shared/App'import { BrowserRouter } from 'react-router-dom'ReactDOM v5 = ReactDOM.renderToString(<StaticRouter location={req.url} context={{}}><App data={data}/></StaticRouter>)... also invoke it if it doesn’t already have the data from the server.
Why we’re here, let’s add some extra stuff to make our app look better. Specifically our
ColorfulBordercomponent and a
divwith a
classNameof
containerin our
Appcomponent.
// src/shared/ColorfulBorder.jsimport * as React from 'react'export default function ColorfulBorder() {return (<ul className='border-container'><li className='border-item' style={{ background: 'var(--red)' }} /><li className='border-item' style={{ background: 'var(--blue)' }} /><li className='border-item' style={{ background: 'var(--pink)' }} /><li className='border-item' style={{ background: 'var(--yellow)' }} /><li className='border-item' style={{ background: 'var(--aqua)' }} /></ul>)}
// src/shared/App.jsimport * as React from 'react'import routes from './routes'import { Route } from 'react-router-dom'import ColorfulBorder from './ColorfulBorderimport './styles.css'export default function App (props) {return (<React.Fragment><ColorfulBorder /><div className='container'>{routes.map(({ path, exact, fetchInitialData, component: C }) => (<Route key={path} path={path} exact={exact}><CfetchInitialData={fetchInitialData}repos={props.data}/></Route>))}</div></React.Fragment>)}
Before we move on, let’s also add a Navbar and a catch all - 404 route to our
App.
//>)}
// src/shared/NoMatch.jsimport * as React from 'react'export default function NoMatch () {return <h2 className='heading-center'>Four Oh Four</h2>}
// src/shared/App.jsimport * as React from 'react'import routes from './routes'import { Route, Switch } from 'react-router-dom'import Navbar from './Navbar'import NoMatch from './NoMatch'import ColorfulBorder from './ColorfulBorder'import './styles.css'export default function App (props) {return (<React.Fragment><ColorfulBorder /><div className='container'><Navbar /><Switch>{routes.map(({ path, exact, fetchInitialData, component: C }) => (<Route key={path} path={path} exact={exact}><CfetchInitialData={fetchInitialData}repos={props.data}/></Route>))}<Route path='*'><NoMatch /></Route></Switch></div></React.Fragment>)}
💻 View the code or View the commit 💻
At this point our app is coming along nicely, but there are some pretty glaring issues with it. The biggest being with our
Grid component and how it gets and manages its own data.).
Let’s focus on that first server rendered state right now and how we can improve on what we currently have. Currently on the server we’re invoking
fetchInitialData, passing the response as a
data prop to
App, then passing it down as
repos to all components rendered by React Router. Now there’s nothing wrong with doing a little prop plumbing, but React Router has an easier way that utilizes React Context.
Remember inside of our server file when we used
StaticRouter passing it a prop of
context that we gave an empty object?
const markup = ReactDOM.renderToString(<StaticRouter location={req.url} context={{}}><App data={data}/></StaticRouter>)
Whatever we pass to
context will be available to any component that React Router renders as a property on the
staticContext prop. What that means is that no matter how nested our component tree is, any React Router rendered component that needs access to
repos can easily get it.
The first change we’ll make is adding
data to our
context object on the server and remove passing it to
App.
promise.then((data) => {const markup = ReactDOM.renderToString(<StaticRouter location={req.url} context={{ data }}><App /></StaticRouter>)...
Now since we’re no longer passing
data as a prop to
App, we need to modify our
App component. There are two changes we need to make. First, we’re no longer receiving
data as a prop which means we can no longer pass
repos={data} as a prop to the component being rendered by React Router (
C). Next, instead of passing React Router a
children element, we want to use the
render prop. The reason for this is how React Router handles
children elements vs
render functions. If you look at the code, you’ll notice that React Router doesn’t pass along any props to
children elements. Typically this is fine but we already established we want React Router to pass our components
staticContext so we can get access to our
repos.
export default function App () {return (<React.Fragment><ColorfulBorder /><div className='container'><Navbar /><Switch>{routes.map(({ path, exact, fetchInitialData, component: C }) => (<Route key={path} path={path} exact={exact} render={(props) => (<C fetchInitialData={fetchInitialData} {...props} />)} />))}<Route path='*'><NoMatch /></Route></Switch></div></React.Fragment>)}
By utilizing the
render prop, the function we pass to
render will be passed
props from React Router which we can then take and spread across the component it renders.
Now the only other change we need to make is in our
Grid component. Instead of receiving
repos as a prop, it’s going to receive
staticContext which will have a
data prop.
export default function Grid ({ staticContext }) {const repos = staticContext.datareturn (>)}
At this point we’ve solved prop plumbing on the server by utilizing
StaticRouter’s
context prop, however, we still have a few large issues with our app. Earlier I said that ).”
We just clean up the first, data fetching on the server. Now let’s move to the second - when the client picks up the server rendered app. If you were to run the app in it’s current form, you’d notice that it’s broken. The reason for that is because our
Grid component is always expecting to get its data via
staticContext. However, as we just saw, it’ll only get it’s data from
staticContext when it’s first rendered on the server. When the client takes over, it’s going to get its data from
window.__INITIAL_DATA__ as we talked about earlier.
Let’s make this fix to our
Grid component. We can tell if we’re on the server or in the browser by the
__isBrowser__ flag we set up in our
webpack.config.js file.
export default function Grid ({ staticContext }) {const repos = __isBrowser__? window.__INITIAL_DATA__: staticContext.datareturn (...)}
💻 View the code or View the commit 💻
At this point we’ve solved our data needs when the app is rendered on the server via
context and when the app is rendered on the client via
window. However, there’s still one last data puzzle piece we need to put in place and that’s when the user navigates around our app via React Router.
Before we solve that it’s important that you understand why we have this problem. You can think of our app as having three phases - server rendered -> client pickup ->.
The good news is at this point the hardest parts are behind us. Now we’re only dealing with client-side React which is probably the mental model you’re used to.
What we’ll do now is give our
Grid component the ability to fetch the popular repositories of whatever language the user selects. To do this, we’ll use some combination of Hooks, the
fetchInitialData property on our
routes, and React Router v5’s URL parameters.
The first thing we’ll do is move
repos to be a piece of state rather than just a variable since we’ll be modifying it as the user selects different languages.
export default function Grid ({ staticContext }) {const [repos, setRepos] = React.useState(() => {return __isBrowser__? window.__INITIAL_DATA__: staticContext.data})...}
Next we’ll add a new
loading state to our component. We’ll want the default value to be
false if
repos is truthy and
true if it isn’t. (Another way to put that - we want
false if we already have
repos, which means they were created on the server).
export default function Grid ({ staticContext }) {const [repos, setRepos] = React.useState(() => {return __isBrowser__? window.__INITIAL_DATA__: staticContext.data})const [loading, setLoading] = React.useState(repos ? false : true)if (loading === true) {return <i className='loading'>🤹♂️</i>}return (<ul className='grid'>...</ul>)}
Finally, whenever the user selects a new language, we want to fetch the new popular repositories for that language and update our
repos state. To fetch the new popular repositories, we can use the
fetchInitialData prop that we passed in when we created our
Routes.
{routes.map(({ path, exact, fetchInitialData, component: C }) => (<Route key={path} path={path} exact={exact} render={(props) => (<C fetchInitialData={fetchInitialData} {...props} />)} />))}
Now the questions are, when do we invoke
fetchInitialData and how do we know what language to fetch?
If you’ll remember, the
route for when our
Grid component renders looks like this.
{path: '/popular/:id',component: Grid,fetchInitialData: (path = '') => fetchPopularRepos(path.split('/').pop())}
We’re using a URL Parameter (
id) to represent the language. We can get access to that URL Parameter (and therefor language) via React Router 5.1’s
useParams Hook.
Next.
...import { useParams } from 'react-router-dom'export default function Grid ({ staticContext }) {const [repos, setRepos] = React.useState(() => {return __isBrowser__? window.__INITIAL_DATA__: staticContext.data})const [loading, setLoading] = React.useState(repos ? false : true)const { id } = useParams()React.useEffect(() => {setLoading(true)fetchInitialData(id).then((repos) => {setRepos(repos)setLoading(false)})}, [id])...}
💻 View the code or View the commit 💻.
export default function Grid ({ fetchInitialData, staticContext }) {...
💻 View the code or View the commit 💻
And with that, we’re finished! The first request will be server rendered and every subsequent path change after that React Router will own.
Now you tell me, is this complexity worth the benefits to your app? 🤷
You can find all the code for this post at github.com/uidotdev/react-router-v5-server-rendering. | https://ui.dev/react-router-v5-server-rendering/ | CC-MAIN-2021-43 | refinedweb | 2,958 | 50.33 |
I'm trying to update a C program written 20+ years ago. I want to use current compilers and standards. I'm looking at this as a good learning process, beyond reading C++ programming guides and reading code that has no real-world applications. I've already updated all the function headers to current ANSI standard headers. I'm now trying to write a class with functions.
The existing code is:
Code:
typedef struct xycoord { int x, y; } coord;
My new code is:
Code:
class coord {
public:
    int x, y;
    coord(int xin, int yin) { x = xin; y = yin; }
    coord();
    void operator=(coord &rhs);
    coord operator+(coord &other);
    bool operator==(coord &other);
};

void coord::operator=(coord &rhs) { x = rhs.x; y = rhs.y; }
coord coord::operator+(coord &other) { return coord(x + other.x, y + other.y); }
bool coord::operator==(coord &other) { return (x == other.x && y == other.y); }

I know the new code I've written will compile when I put it in a fresh C++ project. However, when I try to compile it in the old program, I get thousands of errors.
Compiling...
makedefs.c
c:\...\coord.h(20) : error C2061: syntax error : identifier 'coord'
c:\...\coord.h(20) : error C2059: syntax error : ';'
c:\...\coord.h(21) : error C2449: found '{' at file scope (missing function header?)
c:\...\coord.h(31) : error C2059: syntax error : '}'
c:\...\coord.h(39) : error C2061: syntax error : identifier 'coord'
c:\...\coord.h(39) : error C2059: syntax error : ';'
...
I'm sure I'm missing something fundamental. Please help. I don't know where to look. | http://cboard.cprogramming.com/cplusplus-programming/104006-migrating-c-want-implement-class.html | CC-MAIN-2014-15 | refinedweb | 299 | 52.15 |
What's New in this Release?
Aspose team is pleased to announce the release of Aspose.Pdf for Java 11.8.0. This version includes PDF to PDF/A conversion improvements in addition to the improvements and enhancements made in its equivalent .NET version. We have fixed a number of issues reported by our customers in previous releases, including PDF to PDF/A, PDF to HTML, PDF to DOC, a printing issue on iSeries, and some others. This release improves the image rendering logic in PDF to PDF/A conversion, which fixed a lot of image related issues in PDF/A output. We had received reports of printing issues on non-Windows operating systems in previous versions; we have fixed these issues in the PdfViewer class in the current release, which will improve the API's reliability. The basic API change in this release is the removal of the Interfaces namespace. This release also enhances creating PDF by API, XML and XSL-FO files, as well as converting HTML, XSL-FO and Excel files into PDF. Some important improved features included in this release are given below:
- PDF to HTML - Contents missing and wrong background color
- PDF to PDF/A - Image starts appearing on top of page title
- Error opening readonly PDF
- PDF to Doc: Redundant tab character before a last bullet item
- PDF to Doc: Redundant characters after bullet items
- setKeptWithNext is not working as expected
- PDF to PDF/A_3a - Resultant file is not PDF/A compliant
- PDF to PDF/A - Text formatting is lost and contents are missing
- PDF to PDF/A - Logo orientation is changed and background information is lost
- PDF to PDF/A - Background becomes black
- PDF to PDFA conversion duplicates background image
- PDF to PDF/A - Issue with images
- PdfPrinterSettings crash on iSeries (Java)
- Concatenation: Bookmarks are not being copied to Merged Document
- PDF to PDF/A - Resultant file is not Tagged

Using Aspose.Pdf for Java, users can create PDF by API, XML and XSL-FO files. It also enables users to convert HTML, XSL-FO and Excel files into PDF.
More about Aspose.Pdf for Java
- Homepage of Aspose.Pdf for Java:
- Download Aspose.Pdf for Java at: | https://www.theserverside.com/discussions/thread/82150.html | CC-MAIN-2019-30 | refinedweb | 356 | 60.04 |
This C++ program illustrates the bitwise operators. The bitwise operators are like logic gate operators: they work on the individual bits of the binary representations of the data.
Here is the source code of the C++ program which illustrates the bitwise operators. The C++ program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C++ Program to Illustrate Bitwise Operators
*/
#include <iostream>
using namespace std;
int main() {
int a = 7; // a = 111
int b = 5; // b = 101
cout << "Bitwise Operators\n";
cout << "a & b = " << (a&b) << "\n";
cout << "a | b = " << (a|b) << "\n";
cout << "a ^ b = " << (a^b) << "\n";
cout << "~a = " << (~a) << "\n";
cout << "~b = " << (~b) << "\n";
cout << "a >> b = " << (a>>b) << "\n";
cout << "a << b = " << (a<<b) << "\n";
}
$ gcc test.cpp
$ a.out
Bitwise Operators
a & b = 5
a | b = 7
a ^ b = 2
~a = -8
~b = -6
a >> b = 0
a << b = 224
Sanfoundry Global Education & Learning Series – 1000 C++ Programs.
If you wish to look at all C++ Programming examples, go to C++ Programs. | https://www.sanfoundry.com/cpp-program-illustrate-bitwise-operators/ | CC-MAIN-2018-22 | refinedweb | 172 | 59.13 |
Code:
ImportError: No module named sublime
Here's the distilled unit test, which resides in my-plugin/tests:
Code:
import unittest
import json
from my_plugin import MyCommand
class MyUnitTest(unittest.TestCase):
    def test_read_json_data(self):
        ...
Very abbreviated plugin:
Code:
import sublime_plugin
import sublime
class MyCommand(sublime_plugin.TextCommand):
    def run(edit):
        ...
It fails on the 'import sublime' statement above when I try to instantiate 'MyCommand'. In other words, the test code doesn't execute beyond the import statements.
So, where is the sublime module such that I can add it to my path? Or maybe there's a different (better?) approach to writing tests?
Any help is much appreciated!
-bill | http://www.sublimetext.com/forum/viewtopic.php?p=19901 | CC-MAIN-2015-35 | refinedweb | 114 | 52.56 |
Vault Secrets
Transcript
Welcome back!
In this lecture we'll introduce you to Vault secret engines and the different types that are shipped with the Vault server. Vault secret engines are components which store, generate, or encrypt data, and, as you'll see, they are incredibly flexible. The agenda for this Vault secret engines lecture includes the following topics: motivation for Vault secret engines and intended purposes; Vault secret engines supported out of the box; the Vault secret engine lifecycle; managing Vault secret engines; and the Key/Value secret engine for storing sensitive static data.
In this section we'll start with an introduction to secret engines. In the context of Vault, secret engines are components responsible for managing secrets. Secrets are pieces of sensitive information that could be used to access infrastructure, resources, and/or data, and so forth. Some secrets engines simply store and read data, like encrypted Redis/Memcached. Other secrets engines connect to other services and generate dynamic credentials on demand. Other secrets engines provide encryption as a service, Time-Based One-Time Password generation, certificates, and much more. Vault comes with a number of secret engines bundled. The Key/Value and Cubbyhole secret engines are enabled by default and cannot be disabled, while other secret engines need to be enabled first before they can be used.
Let's now cover each of the possible secret engines. Cubbyhole secret engines store arbitrary secrets with the configured physical storage for Vault namespace to a token. Paths are scoped per token. Key/Value secret engines store arbitrary secrets within the configured physical storage for Vault. Also known as generic secrets. AWS secret engines generate AWS access credentials dynamically based on IAM policies. Consul secret engines generate Consul API tokens dynamically based on Consul ACL policies. Database secret engines generate database credentials dynamically based on configured roles. This secret engine has database specific plugins for Cassandra, HanaDB, MongoDB, Microsoft SQL, MySQL, MariaDB, PostgreSQL, Oracle, and custom. Identity secret engine is the identity management solution for Vault that internally maintains the clients that are recognized by Vault. Nomad secret engine generates Nomad API tokens dynamically based on pre-existing Nomad ACL policies. PKI secret engine generates dynamic X.509 certificates. RabbitMQ secret engine generates user credentials dynamically based on configured permissions and virtual hosts. SSH secret engine provides secure authentication and authorization for access to machines via the SSH protocol. Supported modes are signed SSH certificates and one-time SSH password. Time-Based One-Time Password secret engine generates time-based credentials according to the Time-Based One-Time Password, or TOTP, standard. Transit secret engine handles cryptographic functions on data in transit.
Secret engines must be enabled at a path so that requests can be routed to them. The enable operation enables a secret engine at a given path. With few exceptions, secret engines can be enabled at multiple paths. Each secret engine is isolated to its path. By default, they are enabled at a path matching their type; for example, aws is enabled at aws/. The disable operation disables an existing secret engine. When a secret engine is disabled, all of its secrets are revoked (if they support it), and all of the data stored for that engine in the physical storage layer is deleted. The move operation moves the path for an existing secret engine. This process revokes all secrets, since secret leases are tied to the path they were created at. The configuration data stored for the engine persists through the move. The tune operation tunes global configuration for the secret engine, such as time-to-lives (TTLs). Secret engines receive a barrier view into the configured Vault physical storage, which makes it impossible for an enabled secret engine to access data belonging to another engine. This is an important security feature in Vault: even a malicious engine cannot access the data from any other engine.
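The enable/disable/move semantics described above can be sketched as a toy mount table. This is purely illustrative (real Vault manages mounts in its core router, with leases and encrypted storage), but it captures the key behaviors: each path is isolated, disabling deletes the engine's data, and moving revokes leased secrets while the configuration persists.

```python
class MountTable:
    """Toy model of Vault's secret-engine mount lifecycle (illustrative only)."""

    def __init__(self):
        self.mounts = {}  # path -> {"type": ..., "config": {...}, "secrets": {...}}

    def enable(self, path, engine_type):
        if path in self.mounts:
            raise ValueError(f"path already in use: {path}")
        self.mounts[path] = {"type": engine_type, "config": {}, "secrets": {}}

    def disable(self, path):
        # Disabling revokes secrets AND deletes the engine's stored data.
        del self.mounts[path]

    def move(self, src, dst):
        # Moving revokes leased secrets (leases are tied to the old path),
        # but the engine's configuration persists through the move.
        mount = self.mounts.pop(src)
        mount["secrets"] = {}        # revoked
        self.mounts[dst] = mount     # config carried over

table = MountTable()
table.enable("db-prod/", "database")
table.enable("db-test/", "database")   # same engine type, second isolated path
table.mounts["db-prod/"]["config"]["role"] = "readonly"
table.mounts["db-prod/"]["secrets"]["lease-1"] = "temp-cred"
table.move("db-prod/", "db-primary/")
print(sorted(table.mounts))                    # ['db-primary/', 'db-test/']
print(table.mounts["db-primary/"]["config"])   # {'role': 'readonly'}
print(table.mounts["db-primary/"]["secrets"])  # {}
```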
In this section, we'll cover how to manage and maintain secret engines. Secret engines can be managed by running the vault secrets command together with one of its subcommands. The possible subcommands are disable, which disables a secret engine; enable, which enables a secret engine; list, which lists the currently enabled secret engines; move, which moves an already enabled secret engine to a new path; and tune, which tunes a secret engine's configuration, for example altering its time-to-lives. In the example shown here, we mount the database secret engine twice, highlighting the fact that we can have multiple occurrences of the same secret engine as long as they are mounted to different and unique paths.
In this section, we'll discuss how you can store sensitive data using the Key/Value secret engine. Most organizations own and retain some form of sensitive data. Sensitive data is data that shouldn't be seen or shared, and should remain confidential both at rest and in transit. Sensitive data in this context is more generally referred to as secrets. Storing and managing secrets in a secure way is often a challenge that requires careful planning. Some examples of secrets include customer payment data, such as credit card information; cluster configurations, including passwords; SSL private keys; API keys; and access tokens.
All of these types of secrets can be stored in the Vault Key/Value secret engine. Accessing these secrets can be achieved either by using the CLI or programmatically via the API. The Key/Value, or KV, secrets engine is used to store arbitrary secrets within the configured physical storage for Vault. All secrets stored within the KV engine are encrypted using 256-bit AES in GCM mode with 96-bit nonces. The nonce is randomly generated for every encrypted object. The KV secret engine is enabled by default and is exposed via the secret/ path prefix. This path prefix tells Vault to route traffic to the KV secret engine. It is possible to mount the KV secret engine to alternative paths concurrently. In doing so, each concurrent KV secret engine mount will be isolated and unique. Secrets are always stored as Key/Value pairs. Writing to an existing key in the Key/Value secret engine will replace the previous value; subfields are not merged together. Let's take a look at some example commands for writing secrets using the KV secret engine. For starters, let's say we have a requirement to store an API key for Splunk. We would execute the following command: vault kv put secret/apikey/splunk apikey="the api key itself". Next, we can read values from within files stored in the local filesystem simply by prefixing the file name with the @ character. In this case, we would execute the following command: vault kv put secret/apikey/splunk [email protected].
We can also supply multiple Key/Value pairs within a single execution of the vault kv put command as shown in the last example. In this example, the acme.txt file contains a JSON formatted collection of Key/Value pairs. When the vault kv put command references this file, it creates the same set of Key/Value pairs under the secret/customer/acme path. Retrieving secrets back out of the KV secret engine is simple and intuitive. The first example shows how to retrieve all Key/Value pairs stored under the secret/customer/acme path. The second example demonstrates how to selectively retrieve just the value stored against the contact_email key. In the next example, we highlight what happens when updating an existing key within an existing path. It's important to understand that in this scenario, a merge does not take place. Instead, the Key/Value engine replaces the previously stored secret with the new secret. Secrets can be easily deleted by executing the vault kv delete command together with the path where the secrets are stored. As with other parts of the Vault server, you can forgo the Vault CLI in favor of the Vault API. The Key/Value secret engine API supports all expected CRUD operations for secrets.
Key points when working with the Key/Value secret engine via the API are: First, the API is accessed over a TLS connection at all times. This ensures all secrets remain encrypted on the wire while in transit. Second, API routes should be prefixed with the Key/Value version. Third, a valid Vault token must be supplied in the X-Vault-Token HTTP header. The Vault API can be used to write, read, and delete secrets. The examples shown here use the curl utility to craft the different types of API operations, and are sent over HTTPS to the Vault server running behind the domain name vault.rocks. When using the Vault API, you need to use the correct HTTP verb, POST, GET, or DELETE, when writing, reading, or deleting secrets.
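To make the request shape concrete, here is a sketch in Python that only builds the three KV requests described above; no network calls are made. The vault.rocks host comes from the examples in the text, and the token value is a placeholder, not a real Vault token:

```python
import json

VAULT_ADDR = "https://vault.rocks"   # host used in the article's examples
TOKEN = "example-token"              # placeholder; substitute a real Vault token

def kv_request(verb, path, data=None):
    """Build (method, url, headers, body) for a KV v1 operation."""
    url = f"{VAULT_ADDR}/v1/{path}"            # routes are prefixed with /v1
    headers = {"X-Vault-Token": TOKEN}         # token travels in this header
    body = json.dumps(data) if data is not None else None
    return verb, url, headers, body

write  = kv_request("POST",   "secret/apikey/splunk", {"apikey": "example-key"})
read   = kv_request("GET",    "secret/apikey/splunk")
delete = kv_request("DELETE", "secret/apikey/splunk")
print(write[1])   # https://vault.rocks/v1/secret/apikey/splunk
```

Feeding these tuples to any HTTP client (curl, requests, and so on) over TLS reproduces the write, read, and delete operations from the text.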
Okay, that completes this lecture on Vault secret engines.
Introduction
This article shows how the SPI bus functionality of the SAMA5D2 Series ARM® Cortex®-A5 Microprocessor Unit (MPU) is enabled in the Linux® kernel and how to access the SPI bus in user space.
Since the SPI device interface was introduced into the Linux kernel, you can access the SPI driver in kernel space by registering it via the spi_register_driver() interface and then using the resulting struct spi_device handle.
You can also access the SPI driver in user space via the /dev/spidev device node. SPI devices have a limited user-space API, supporting basic half-duplex read() and write() access to SPI slave devices. Using ioctl() requests, full-duplex transfers and device I/O configuration are also available. We show you how, using a C-language program.
Prerequisites
This application is developed for the ATSAMA5D27-SOM1-EK1 development platform:
This application is developed using the Buildroot build system.
Hardware
For this application, you will be controlling the SPI bus of the mikroBUS 1 expansion socket of the ATSAMA5D27-SOM1-EK1. The figure below shows the expansion capability of the SOM1-EK1.
The ATSAMA5D27 SOM1 contains five Flexible Serial Communications Controller (FLEXCOM) peripherals to provide serial communications protocols: USART, SPI, and TWI.
You will control pins PD0, PC30, PC29 and PC28 from the ATSAMA5D27 SOM1, which connect to J24 pins 3, 4, 5 and 6 of the mikroBUS 1 connector (labeled NPCS1, SPCK_mBUS1, MISO_mBUS1 and MOSI_mBUS1 on the schematic).
For more details on the SAMA5D2 Package and Pinout, refer to Table 6-2. Pinouts in SAMA5D2 series data sheet.
Buildroot Configuration
Objective:
Using Buildroot, build a bootable image and FLASH onto an SD Memory Card for the ATSAMA5D27-SOM1-EK1 development board.
Follow the steps for building the image in the "Buildroot - Create Project with Default Configuration" page. You will use the default configuration file: atmel_sama5d27_som1_ek_mmc_dev_defconfig.
Device Tree
Objective:
Observe how the FLEXCOM4 peripheral was configured for SPI in the device tree. A small addition is shown for the at91-sama5d27_som1_ek.dts file below for the ability to communicate in user space.
Once Buildroot has completed its build, the SPI definitions for the ATSAMA5D27-SOM1-EK1 were configured by a device tree. The device tree source include (*.dtsi and *.dts) files are located in the Buildroot output directory: /output/build/linux-linux4sam_6.0/arch/arm/boot/dts/.
1
Examine the sama5d2.dtsi file and observe the FLEXCOM4 device assignments:
697 flx4_clk: flx4_clk {
698     #clock-cells = <0>;
699     reg = <23>;
700     atmel,clk-output-range = <0 83000000>;
701 };
.
.
1424 flx4: flexcom@fc018000 {
1425     compatible = "atmel,sama5d2-flexcom";
1426     reg = <0xfc018000 0x200>;
1427     clocks = <&pmc PMC_TYPE_PERIPHERAL 23>;
1428     #address-cells = <1>;
1429     #size-cells = <1>;
1430     ranges = <0x0 0xfc018000 0x800>;
1431     status = "disabled";
1432 };
Line 699 shows the PID for FLEXCOM4 is 23; this definition of the offset will be used to enable FLEXCOM4 clock in PMC.
Line 700 shows the FLEXCOM4 input clock; the max frequency is 83MHz.
Line 1425 specifies which driver will be used for this FLEXCOM device.
Line 1426 shows the FLEXCOM4 base address of 0xfc018000; the size is 0x200.
Line 1427 shows the definition for the FLEXCOM4 clock source.
Line 1431 the status is set to "disabled" by default. It will be set to "okay" in the at91-sama5d27_som1_ek.dts file below.
2
Examine the at91-sama5d27_som1_ek.dts file and observe the SPI device assignments:
258 flx4: flexcom@fc018000 {
259     atmel,flexcom-mode = <ATMEL_FLEXCOM_MODE_SPI>;
260     status = "okay";
261
262     uart6: serial@200 {
263         compatible = "atmel,at91sam9260-usart";
264         reg = <0x200 0x200>;
265         interrupts = <23 IRQ_TYPE_LEVEL_HIGH 7>;
266         clocks = <&pmc PMC_TYPE_PERIPHERAL 23>;
267         clock-names = "usart";
268         pinctrl-names = "default";
269         pinctrl-0 = <&pinctrl_flx4_default>;
270         atmel,fifo-size = <32>;
271         status = "disabled"; /* Conflict with spi3 and i2c3. */
272     };
273
274     spi3: spi@400 {
275         compatible = "atmel,at91rm9200-spi";
276         reg = <0x400 0x200>;
277         interrupts = <23 IRQ_TYPE_LEVEL_HIGH 7>;
278         clocks = <&flx4_clk>;
279         clock-names = "spi_clk";
280         pinctrl-names = "default";
281         pinctrl-0 = <&pinctrl_mikrobus_spi &pinctrl_mikrobus1_spi_cs &pinctrl_mikrobus2_spi_cs>;
282         atmel,fifo-size = <16>;
283         status = "okay"; /* Conflict with uart6 and i2c3. */
            // the following code is added to enable spidev in userspace
283a        spidev@1 {
283b            compatible = "spidev";
283c            reg = <1>;
283d            spi-max-frequency = <100000>;
283e        };
            // - - - - - - - - - - - - - - - - - - - - - - -
284     };
285
286     i2c3: i2c@600 {
287         compatible = "atmel,sama5d2-i2c";
288         reg = <0x600 0x200>;
289         interrupts = <23 IRQ_TYPE_LEVEL_HIGH 7>;
290         dmas = <0>, <0>;
291         dma-names = "tx", "rx";
292         #address-cells = <1>;
293         #size-cells = <0>;
294         clocks = <&pmc PMC_TYPE_PERIPHERAL 23>;
295         pinctrl-names = "default";
296         pinctrl-0 = <&pinctrl_flx4_default>;
297         atmel,fifo-size = <16>;
298         status = "disabled"; /* Conflict with uart6 and spi3. */
299     };
300 };
.
.
473 pinctrl_mikrobus1_spi_cs: mikrobus1_spi_cs {
474     pinmux = <PIN_PD0__FLEXCOM4_IO4>;
475     bias-disable;
476 };
.
.
483 pinctrl_mikrobus_spi: mikrobus_spi {
484     pinmux = <PIN_PC28__FLEXCOM4_IO0>,
485              <PIN_PC29__FLEXCOM4_IO1>,
486              <PIN_PC30__FLEXCOM4_IO2>;
487     bias-disable;
488 };
Line 259 specifies SPI mode for this FLEXCOM.
Line 260 enables this device.
Line 275 specifies which driver will be used for this SPI device.
Line 276 sets the register offset address to 0x400, size 0x200.
Line 277 specifies the PID for FLEXCOM4 is 23, high level triggered, priority 7 (used to configure FLEXCOM4 interrupt in the AIC).
Line 278 is the definition for the FLEXCOM4 clock source.
Line 281 is the pin definition for the FLEXCOM4 SPI function.
Line 283 shows the SPI function status is "okay" while the UART and I²C functionality are "disabled".
Lines 283a-283e are the changes you make (see the following information box).
See /output/build/linux-linux4sam_6.0/drivers/spi/spi.c of _spi_parse_dt() for more options.
Line 283b specifies which driver will be used for this device.
Line 283c is the definition that will be used as the CS number for SPIDEV.
Line 283d specifies the clock frequency for SPIDEV.
Line 474 assigns pin PD0 to FLEXCOM4_IO4.
Line 484 assigns pin PC28 to FLEXCOM4_IO0.
Line 485 assigns pin PC29 to FLEXCOM4_IO1.
Line 486 assigns pin PC30 to FLEXCOM4_IO2.
It is not recommended to use spidev as a device tree compatible name. It will work, but you will get the following warning:
# dmesg | grep spidev
spidev spi1.1: buggy DT: spidev listed directly in DT
WARNING: CPU: 0 PID: 1 at drivers/spi/spidev.c:730 0xc045d630
Because spidev is a Linux implementation construct, rather than a description of the hardware, it should never be referenced in a device tree without a specific name. To avoid this warning, use another compatible name instead of spidev, for example:
spidev@1 {
    compatible = "atmel,at91rm9200-spidev";
    reg = <1>;
    spi-max-frequency = <1000000>;
};
Next, edit spidev driver file /output/build/linux-linux4sam_6.0/drivers/spi/spidev.c and add a new compatible name to the compatible table of spidev driver:
static const struct of_device_id spidev_dt_ids[] = {
    { .compatible = "rohm,dh2228fv" },
    { .compatible = "lineartechnology,ltc2488" },
    { .compatible = "ge,achc" },
    { .compatible = "semtech,sx1301" },
    { .compatible = "atmel,at91rm9200-spidev" },
    {},
};
Kernel
Objective:
Observe how SPI functionality was configured in the Linux kernel.
In this exercise, you will configure the Linux kernel for spidev functionality.
Device Driver
2
Select Device Drivers ---> SPI support ---> and enable User mode SPI device driver support (SPIDEV).
Rootfs
There are two ways to access the SPI bus driver:
Kernel space:
Register your SPI driver via the spi_register_driver() interface, then access the SPI bus driver through the resulting struct spi_device handle.
There is no definition of the SPI bus number in the device tree files; the bus number of an SPI controller is allocated automatically at registration time. For example, the first registered SPI controller is assigned bus number 0, the second bus number 1, and so on. The following device node is used to access the SPI bus driver, where the first number (1) refers to the bus number and the second number (1) refers to the CS number: /dev/spidev1.1
User space:
You can access the SPI device from user space by enabling the SPIDEV kernel feature as shown above and then accessing the SPI bus driver via the /dev/spidev device node, as shown in the "Application" section below. SPIDEV is a good choice because all application code runs in user space, which makes development easier. However, you will have to configure the SPIDEV feature in the Linux kernel first.
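Under the hood, each transfer passed to ioctl(SPI_IOC_MESSAGE(n)) is a struct spi_ioc_transfer. Its layout can be checked from Python with the struct module; no SPI device is needed. The field order below follows <linux/spi/spidev.h> in recent kernels (an assumption worth verifying against your own kernel headers):

```python
import struct

# struct spi_ioc_transfer from <linux/spi/spidev.h>:
#   __u64 tx_buf; __u64 rx_buf; __u32 len; __u32 speed_hz;
#   __u16 delay_usecs; __u8 bits_per_word; __u8 cs_change;
#   __u8 tx_nbits; __u8 rx_nbits; __u16 pad;
SPI_IOC_TRANSFER = "=QQIIHBBBBH"   # '=' -> standard sizes, no extra padding

def pack_transfer(tx_buf_addr, rx_buf_addr, length, speed_hz=1_000_000,
                  delay_usecs=0, bits_per_word=8, cs_change=0):
    """Pack one spi_ioc_transfer record as the kernel expects it."""
    return struct.pack(SPI_IOC_TRANSFER, tx_buf_addr, rx_buf_addr, length,
                       speed_hz, delay_usecs, bits_per_word, cs_change,
                       0, 0, 0)

print(struct.calcsize(SPI_IOC_TRANSFER))   # 32 bytes per record
```

A user-space client (for example via fcntl.ioctl on an open /dev/spidev node) would concatenate n such records for an SPI_IOC_MESSAGE(n) request, much like the xfer[2] array in the C program below.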
Application
The following is a C-Language demonstration program for accessing the SPI bus driver via the SPIDEV node:
To compile:
$ buildroot/output/host/bin/arm-buildroot-linux-uclibcgnueabihf-gcc spi_dev.c -o spi_test
Be sure to type in the location of the cross-compiler on your host computer.
Source code:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/spi/spidev.h>

#define DEV_SPI "/dev/spidev1.1"

int main(int argc, char *argv[])
{
    int fd;
    int ret;
    unsigned int mode, speed;
    char tx_buf[1];
    char rx_buf[1];
    struct spi_ioc_transfer xfer[2] = {0};

    // open device node
    fd = open(DEV_SPI, O_RDWR);
    if (fd < 0) {
        printf("ERROR open %s ret=%d\n", DEV_SPI, fd);
        return -1;
    }

    // set spi mode
    mode = SPI_MODE_0;
    if (ioctl(fd, SPI_IOC_WR_MODE32, &mode) < 0) {
        printf("ERROR ioctl() set mode\n");
        return -1;
    }
    if (ioctl(fd, SPI_IOC_RD_MODE32, &ret) < 0) {
        printf("ERROR ioctl() get mode\n");
        return -1;
    } else
        printf("mode set to %d\n", (unsigned int)ret);

    // set spi speed
    speed = 1*1000*1000;
    if (ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed) < 0) {
        printf("ERROR ioctl() set speed\n");
        return -1;
    }
    if (ioctl(fd, SPI_IOC_RD_MAX_SPEED_HZ, &ret) < 0) {
        printf("ERROR ioctl() get speed\n");
        return -1;
    } else
        printf("speed set to %d\n", ret);

    // transfer data
    tx_buf[0] = 0xa5;
    xfer[0].tx_buf = (unsigned long)tx_buf;
    xfer[0].len = 1;
    xfer[1].rx_buf = (unsigned long)rx_buf;
    xfer[1].len = 1;

    do {
        if (ioctl(fd, SPI_IOC_MESSAGE(2), xfer) < 0)
            perror("SPI_IOC_MESSAGE");
        usleep(100*1000);
    } while (1);

    // close device node
    close(fd);
    return 0;
}
spidev_test Application
There is a spidev_test application that you can configure in Buildroot.
1
Select Target packages --->
2
Select Debugging, profiling and benchmark --->
Hands On
Copy the spi_test program to the target and execute it.
# chmod +x spi_test
# ./spi_test
The SPI waveform can be monitored on a logic analyzer:
Legend:
- Yellow – NPCS1
- Green – SPCK_mBUS1
- Blue – MOSI_mBUS1
- Red – MISO_mBUS1
Tools and Utilities
spi-tools is a package of command-line utilities for SPI bus testing. It is included in the default configuration of Buildroot. You can view the selection by performing the following:
1
From the Buildroot directory, start menuconfig:
$ make menuconfig
2
Select Target packages --->
3
Select Hardware handling --->
spi-tools Commands
There are two commands in spi-tools:
spi-config:
# spi-config -h
usage: spi-config options...
options:
  -d --device=<dev>    use the given spi-dev character device.
  -q --query           print the current configuration.
  -m --mode=[0-3]      use the selected spi mode:
           0: low iddle level, sample on leading edge,
           1: low iddle level, sample on trailing edge,
           2: high iddle level, sample on leading edge,
           3: high iddle level, sample on trailing edge.
  -l --lsb={0,1}       LSB first (1) or MSB first (0).
  -b --bits=[7...]     bits per word.
  -s --speed=<int>     set the speed in Hz.
  -h --help            this screen.
  -v --version         display the version number.
# spi-config -d /dev/spidev1.1 -q
/dev/spidev1.1: mode=0, lsb=0, bits=8, speed=1000000
spi-pipe:
# spi-pipe -h
usage: spi-pipe options...
options:
  -d --device=<dev>       use the given spi-dev character device.
  -b --blocksize=<int>    transfer block size in byte.
  -n --number=<int>       number of blocks to transfer (-1 = infinite).
  -h --help               this screen.
  -v --version            display the version number.
# spi-pipe -d /dev/spidev1.1 -b 6 -n 1
111111
Input "111111" (six ones) and press the Enter key. The waves could be captured from an oscilloscope accordingly.
The SPI waveform can be monitored on a logic analyzer:
Legend:
- Yellow – NPCS1
- Green – SPCK_mBUS1
- Blue – MOSI_mBUS1
- Red – MISO_mBUS1
Summary
In this article, you used Buildroot to build an image with SPI bus support for the ATSAMA5D2 Series MPU. You accessed the SPI bus via two different methods: in kernel space, by registering an SPI driver via the spi_register_driver() interface and accessing the bus through a struct spi_device handle; and in user space, by enabling the SPIDEV kernel feature and accessing the SPI bus driver via the /dev/spidev device node. You also walked through the device tree and kernel configuration to observe how the embedded Linux system configures the source code for building.
In this C++ tutorial, let us look at signal handling in C++ with appropriate example programs.
Introduction of Signal Handling
Signals are interrupts that force an OS to stop its ongoing task and process the task for which the interrupt has been sent. These interrupts can pause an ongoing process in any program of the operating system. Likewise, C++ defines several signals that a program can catch and process.
Different types of Signals
Have a look at the list of different signals, defined in the <csignal> header, and their meanings in C++.
- SIGABRT: Abnormal termination of the program.
- SIGFPE: An erroneous arithmetic operation, such as divide by zero.
- SIGILL: Detection of an illegal instruction.
- SIGINT: Receipt of an interactive attention (interrupt) signal.
- SIGSEGV: Invalid access to storage.
- SIGTERM: A termination request sent to the program.
The signal() Function
The C++ signal-handling library provides the signal() function to trap unexpected events.
Syntax
void (*signal (int sig, void (*func)(int)))(int);
Program for signal() function
The following C++ program catches the SIGINT signal using the signal() function.
#include <iostream>
#include <csignal>
#include <cstdlib>   // for exit()
#include <unistd.h>  // for sleep() (POSIX)
using namespace std;

void signalHandler( int signum )
{
    cout << "Interrupt signal (" << signum << ") received.\n";
    // cleanup and close up stuff here
    // terminate program
    exit(signum);
}

int main ()
{
    // register signal SIGINT and signal handler
    signal(SIGINT, signalHandler);

    while(1) {
        cout << "Going to sleep...." << endl;
        sleep(1);
    }
    return 0;
}
Output
Going to sleep.... Going to sleep.... Going to sleep....
Now press Ctrl+C; the program will catch the signal and print the following.
Going to sleep.... Going to sleep.... Going to sleep.... Interrupt signal (2) received. | https://www.codeatglance.com/cplusplus/cpp-signalhandling/ | CC-MAIN-2020-10 | refinedweb | 227 | 59.8 |
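For comparison only (this is not part of the C++ API), the same register-a-handler-and-catch flow looks like this with Python's signal module on a POSIX system. SIGUSR1 is used here so the interpreter's own Ctrl+C handling is not disturbed:

```python
import os
import signal

caught = []

def signal_handler(signum, frame):
    # Analogous to signalHandler() in the C++ program above.
    caught.append(signum)
    print(f"Interrupt signal ({signum}) received.")

# Register the handler, analogous to signal(SIGINT, signalHandler).
signal.signal(signal.SIGUSR1, signal_handler)

# Deliver the signal to ourselves, standing in for pressing Ctrl+C.
os.kill(os.getpid(), signal.SIGUSR1)

print(caught == [signal.SIGUSR1])   # True
```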
Objective
In this article, I will explain the XElement class, which is used to construct XML elements.
What is an Element in XML?
An XML element is a fundamental XML construct. An element has a name and optional attributes, and it can contain nested elements, also called nodes.
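The same idea holds in any XML API: an element is a named node that can carry attributes and nested child elements. As a language-neutral illustration, here is the structure sketched with Python's ElementTree (the C# XElement class behaves analogously):

```python
import xml.etree.ElementTree as ET

# An element has a name...
book = ET.Element("Book")
# ...optional attributes...
book.set("ISBN", "0-123456-78-9")
# ...and nested elements, also called nodes.
title = ET.SubElement(book, "Title")
title.text = "LINQ in Action"

print(ET.tostring(book, encoding="unicode"))
# <Book ISBN="0-123456-78-9"><Title>LINQ in Action</Title></Book>
```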
XElement class
The XElement class is defined in the System.Xml.Linq namespace. It inherits from the XContainer class, which derives from XNode.
5 Facts
- It is one of the base and fundamental classes in LINQ to XML.
- It represents an XML element.
- This class can be used to change the content of an element:
  - it can add a child element;
  - it can delete a child element;
  - it can change a child element;
  - it can add attributes to an element.
- This class can be used to serialize the content in text form.
- This class can be used to create an XML tree.
Constructor of XElement
The XElement class has five constructors. Usually we use the one below:
public XElement(XName name, object content);
name: It is the name of the element in the XML tree.
content: It is the content of the element.
Conclusion
In this article, I explained the XElement class. In the next article, I will explain CRUD operations on XML using LINQ to XML. Thanks for reading.
This documentation is archived and is not being maintained.
System Management
Customize SMS Using Local Policies
Jeff Tondt
At a Glance:
- Overriding remote tool settings with local policy
- Collecting special inventory from targeted clients
- Advertising to untargeted client machines
- Configuring local policy using customized MOF files
Download the code for this article: SMS2006_09.exe (155KB)
Have you ever wanted to override SMS site settings for just a few machines? How about sending an advertisement without targeting the client? Or collecting special inventory from just a few machines
while the rest report what the site has set? Or even configuring specific remote control settings for some clients while leaving others based on site settings? You can do it all with Microsoft® Systems Management Server (SMS) 2003 local policies.
Because Agent Policy settings apply on a site-wide basis, it can be useful to configure a custom SMS policy on a local, per-client basis. (Note that you need to be a local administrator to configure a local client policy.) A custom local SMS policy can override either a full policy instance or individual properties of an instance. Local SMS policy settings can be added either by compiling a Managed Object Format (MOF) file or programmatically through Windows® Management Instrumentation (WMI). In this article, I'll cover only the MOF approach. The SMS 2003 SDK version 3.1, available from the SMS downloads page, explains the techniques used in this article.
Remote Tools
Say you have both servers and desktops in a single location. You don't want two SMS sites, but you want to force the SMS 2003 Advanced Client to ask users for permission when remote controlling desktops, even while you automatically control servers. You can accomplish this through a local policy that overrides the site settings.
You can use a MOF file to define a new policy instance. The existing instances of the same class are not overwritten when you compile the new MOF. During the evaluation of the policy, the settings need to be combined. For this process, it's not important whether the policy instance is complete or partial. A complete policy will not be merged with others from other sources, while a partial policy will.
A new policy will not be applied until the Advanced Client has completed a policy evaluation cycle, which is invoked automatically within two minutes after the Advanced Client has retrieved new policy from the management point. Similarly, if local policy is applied to the client, it will not take effect until a policy evaluation cycle has taken place. If this is required sooner than the scheduled policy retrieval, the policy evaluation may be invoked using the Advanced Client Control Panel applet or programmatically using the CPApplet client action for "machine policy retrieval and evaluation cycle". There are many ways to invoke policy evaluation, including the SendSched.vbs script found in the SMS 2003 Toolkit 2.
To see an example, take a look at TypicalMPpolicy.mof in the downloadable code for this article. This MOF file has an instance of the RemoteToolsConfig class as found on a management point. Note that the PermissionRequired property has been set to 1. Each client that uses this management point will, as a site-wide policy setting, have this policy property value applied for PermissionRequired.
Figure 1 shows the full local policy that will be used to override the site-wide policy settings. To invoke the local policy, first save the text in Figure 1 as remote.mof, then run the following command on the machine for which you want to override the site wide settings:
mofcomp remote.mof
Remember, this only works on an SMS 2003 Advanced Client. An alternative, more flexible approach is to override only certain settings from the management point, using a partial policy instance compiled with the same method.
Collecting Special Inventory
You might want to collect additional hardware inventory details on just a few clients. Refer to SpecialInventory.mof in the download for this article.
In this configuration, PolicySource set to Local indicates that the policy source for this inventory is the local computer. As the Inventory Agent performs hardware and software inventory, the action identifier InventoryActionID is used to identify the inventory type. In this case a value of 1 indicates a hardware inventory. The remainder of the MOF file expresses the class, namespace, plus various other properties to be collected as part of the inventory.
Just compile the MOF on the SMS 2003 Advanced Clients for which you want to collect additional hardware inventory details. Of course the site SMS_DEF.MOF file must be configured to allow this information to be processed. For more information on configuring SMS_DEF.MOF refer to the SMS 2003 Operations Guide.
Advertise to Untargeted Clients
Why would you want to send advertisements to an untargeted client? Say you have a package you want to make available to users, but you expect they will find it by means other than SMS. For example, you might want to put the package on a commonly used file share. You do want all other SMS services, such as status reporting, to work. Or you might want to force the advertisement to run as part of the logon script as soon as the client is installed. You don't want to wait for the discovery data records (DDR) to flow to the site, the collection to be evaluated, and the policies to be received.
Sending advertisements to untargeted clients is a little more complicated because you need to gather information from an existing advertisement in order to create the local policy with the correct information. Let's go through the steps.
First, you must create an advertisement that targets any client. This does not have to be the clients you want to apply the local policy to. In fact, you could just use a test client so the advertisement will not interfere with operations. To keep things simple, it's a good idea to advertise to a brand new SMS 2003 Advanced Client or to a machine that has never been targeted for an advertisement. If you target such a machine when you set up your example advertisement, the next steps of retrieving the information to make a functional local policy are straightforward. This is because there are no other advertisements to complicate the data retrieval necessary to create the local policy. If you are brave and like matching policy IDs, you can use any machine for setting up the example advertisement. I recommend using the clean machine approach, though, as this technique can get complicated quickly. Be warned.
I like to use Wbemtest.exe to gather the information, but you can also use CIM Studio. Beware of PolicySpy from the Systems Management Server 2003 Toolkit 2, though, as it does not provide all the advertisement instances needed to create a functional local policy. On systems running Windows XP, Wbemtest.exe is usually installed in the %WINDIR%\system32\Wbem folder.
Now let's get the CCM_SoftwareDistribution advertisement instance. Execute Wbemtest.exe and connect to root\ccm\policy\machine\requestedconfig on the test client you advertised previously. Click the Open Class button and open the CCM_SoftwareDistribution class. Click the Instances button and then double-click the CCM_SoftwareDistribution policy instance. As you can see, the policy ID is part advertisement ID and package ID. But if you kept things simple and created just one advertisement, there will be no confusion. Simply click the Show MOF button, highlight the instance text, copy it, and then paste it into Advert.mof.
Your Advert.mof file should look similar to the snippet here. Refer to the Advert.mof file in the downloadable code for the full listing. Of course your example will vary, but having a frame of reference is helpful as it's easy to get lost:
Next, follow the same basic steps, but this time for the CCM_Scheduler_ScheduledMessage class. Again, if you have only one advertisement, you won't have to decode the policy ID to find the one that corresponds to the CCM_SoftwareDistribution you copied in the first step.
Exit all the Wbemtest.exe windows from the previous steps so we have a common place to start. Execute Wbemtest.exe and connect to root\ccm\policy\machine\requestedconfig on the test client you advertised in the preceding section. Click the Open Class button and open the CCM_Scheduler_ScheduledMessage class. Click the Instances button and double-click the instance that has the same Policy Rule ID as the one for the CCM_SoftwareDistribution instance you selected in the previous section. Now simply click the Show MOF button, highlight the instance text, copy the text and paste it into Advert.mof. Your Advert.mof should look similar to the snippet that follows. Again, refer to the full listing of Advert.mof you downloaded. Now we're halfway there—two more classes and we'll be ready to fire it up:+***"; ...
Again we'll follow essentially the same steps, this time with the CCM_Policy_Rules class. As before, exit all the windows so we have a common starting place. Execute Wbemtest.exe and connect to root\ccm\policy\machine\requestedconfig on the same test client. Click the Open Class button and open the CCM_Policy_Rules class. Click the Instances button and double-click the instance that has the same value for the CCM_Policy_Rules.PolicyID as the value for the CCM_SoftwareDistribution.PolicyID and CCM_Scheduler_ScheduledMessage.PolicyID you selected last time. Click the Show MOF button, highlight the instance text, and copy and paste it to Advert.mof. Be sure to write down the RuleStateID as you'll need it for the next step. Your Advert.mof should look similar to this:
Our last class is the CCM_Policy_Rule_State class. Once again, exit all windows, execute Wbemtest.exe, and connect to the test client. Click the Open Class button and open the CCM_Policy_Rule_State class. Click the Instances button and double-click the instance that has the same RuleStateID you wrote down in the last section. Click the Show MOF button, highlight the instance text, and copy and paste it to your Advert.mof, which should look similar to this:
Now you have all the necessary advertisement instances to create a valid MOF. All you need to do is to run a script to change some key parts, then fire up Mofcomp.exe to instantiate the local policy. Refer to the file SWD_RunAtUntargeted.vbs in the download for this article to get the full listing.
Before executing the script, replace the sourcesite variable with the site code for your site. Also replace the original_schedule variable with the time from the ScheduleString value for the Triggers part within the CCM_Scheduler_ScheduledMessage instance. Then execute SWD_RunAtUntargeted.vbs on the machine you want to have the advertisement run on. This can be any machine with the SMS 2003 Advanced Client installed that has access to the DPs for the site you gathered the information from. The VBScript code creates a file called Changed_AD.MOF in which it has made adjustments to create a valid local policy that executes immediately. You'll notice there is no difference at this point between the client running an advertisement that was targeted from the site server and the local policy you just instantiated. Status messages, reporting, and everything else work the same.
Wrapping Up
There are many powerful local policy techniques that can help customize your SMS 2003 implementation. To remove a local policy, you can either change the MOF definition to set CCM_Policy_Override to FALSE for partial instances or delete the entire instance using the key properties. Deleting the instance will have to be done via Wbemtest.exe or a similar application, or programmatically by using WMI.
Note that local policy-based scripts may not be guaranteed to work or be supported in the next release of SMS, which will be known as System Center Configuration Manager 2007, and may have settings or schema not compatible with local policies.
Show: | https://technet.microsoft.com/en-us/library/2006.09.customizesms.aspx | CC-MAIN-2018-34 | refinedweb | 1,992 | 55.95 |
ViewModel (or Model-View-ViewModel) is an emerging pattern in the WPF, Silverlight space which enables a separation of concerns similar to that of the MVC pattern that is popular on for stateless web apps today (for example: ASP.NET MVC). John Gossman was the first one I heard talk about the pattern from his days working on Expression Blend. Of course, this was is simply an application of Martin Fowler’s Presentation Model pattern.
In this example, I will take our ever popular SuperEmployees application and re-write it with the ViewModel pattern. As with any emerging patterns, there are lots of variations, all with their strengths and weaknesses.. I have picked an approach that I felt was best as an introduction.
You can see the full series here.
The demo requires (all 100% free and always free):
- VS2008 SP1
- Silverlight 3 RTM
- .NET RIA Services July '09 Preview
- (Optional) Silverlight Unit Testing Framework
Check out the live site Also, download the full demo files
For this example, we are going to focus exclusively on the client project (MyApp and MyApp.Tests)… check out the previous posts for more information about the server side of this app.
Orientation
Model (SuperEmployeeDomainContext in MyApp.Web.g.cs) – Responsible for data access and business logic.
View (home.xaml) – Responsible for the purely UI elements
ViewModel (SuperEmployeesViewModel.cs) – Specialization of the Model that the View can use for data-binding.
See more ViewModel Pattern in Silverlight at Nikhil’s blog.
There are some other interesting bits in the “PatternsFramework” folder. They contain some useful helper classes that may be applicable to other ViewModel+RIA Services applications.
ViewModelBase – From Nikhil’s ViewModel example
PagedCollectionView– Adds paging support (IPagedCollectionView) in a fairly standard way.
EntityCollectionView – From Jeff Handley blog post handles most of the interfaces needed for binding, and of course works super well with RIA Services.
PagedEntityCollectionView – Added paging support.. This gives us most of what DomainDataSource provides, but is very ViewModel friendly.
Let’s start with just getting some basic data into the application. I am going to do all by databinding against my SuperEmployeesViewModel, so I am going to set it up as the DataContext of the page.
public class SuperEmployeesViewModel : PagedViewModelBase {}
and then from home.xaml
<navigation:Page.DataContext>
<AppViewModel:SuperEmployeesViewModel />
</navigation:Page.DataContext>
Now, we are ready to start.. As you recall from previous posts, the app is very simple master-details setup.
Let’s start by getting the DataGrid and DataForm bindings wired up…
1: <data:DataGrid x:Name="dataGrid1" Height="380" Width="380"
2: IsReadOnly="True" AutoGenerateColumns="False"
3: HorizontalAlignment="Left"
4: SelectedItem="{Binding SelectedSuperEmployee, Mode=TwoWay}"
5: HorizontalScrollBarVisibility="Disabled"
6: ItemsSource="{Binding SuperEmployees}"
7: >
1: <dataControls:DataForm x:Name="dataForm1" Height="393" Width="331"
2: VerticalAlignment="Top"
3: Header="Product Details"
4: CurrentItem="{Binding SelectedSuperEmployee}"
5:
Notice in DataGrid, line 6 we are binding to the SuperEmployees property on the ViewModel. we will look at how that is defined next. Then in line 4, we twoway bind the SelectedSuperEmployee property. This means that the DataGrid will set that property when the user selects an item. Finally in line 3 on DataForm, we bind to that same property.
From the SuperEmployeesViewModel.cs, we see the SuperEmployees and SelectedSuperEmployee properties… Notice we raise the property change notifications such that the UI can update when these values change.
1: PagedEntityCollectionView<SuperEmployee> _employees;
2: public PagedEntityCollectionView<SuperEmployee> SuperEmployees
3: {
4: get { return _employees; }
5: set
6: {
7: if (_employees != value)
8: {
9: _employees = value;
10: RaisePropertyChanged(SuperEmployeesChangedEventArgs);
11: }
12: }
13: }
14:
15: private SuperEmployee _selectedSuperEmployee;
16: public SuperEmployee SelectedSuperEmployee
17: {
18: get { return _selectedSuperEmployee; }
19: set
20: {
21: if (SelectedSuperEmployee != value)
22: {
23: SuperEmployees.MoveCurrentTo(value);
24: _selectedSuperEmployee = value;
25: RaisePropertyChanged(SelectedSuperEmployeeChangedEventArgs);
26: }
27: }
28: }
29:
Ok, that is the wiring, but how did _employees get its value set in the first place? How is the data actually loaded?
Well, check out the SuperEmployeesViewModel constructor.
1: public SuperEmployeesViewModel()
2: {
3: _superEmployeeContext = new SuperEmployeeDomainContext();
4: SuperEmployees = new PagedEntityCollectionView<SuperEmployee>(
5: _superEmployeeContext.SuperEmployees, this);
6:
7: }
We see the SuperEmployees is actually a PagedEntityCollectionView.. We pass this as the IPagedCollectionView, so we get called back on that when data loading is needed (for example, when I move to page 1). The base PageViewModelhandles hands all the plumbing there, but we still need to handling loading the data via our implementation of the LoadData() method.: _superEmployeeContext.Load(q, OnSuperEmployeesLoaded, null);
14: }
You can see this is fairly simple, we just clear the list of what we may have already downloaded, then loads more data. Notice we are not actually handling paging here yet, we will get to that is a bit.
Filtering
Now, we have that nice Origins filter… let’s see how we wire this up such that we only return the entities that have a certain origin. Now it is important that we don’t want to return all the entities and do this filtering on the client.. that would waste way to much bandwidth. We also don’t want to do the filtering on the middle tier (web server) as that could still flood the database.. instead we want to do this filtering all the way down in the database. We can do that via the magic of Linq query composition. We are going to form a Linq query on the client, send it to the web server, who will simply pass it along (via Entity Framework in this example) to the database.
First, in the Home.xaml view, we wireup the databinding:
1: <StackPanel Orientation="Horizontal" Margin="0,0,0,10">
2: <TextBlock Text="Origin: " />
3: <TextBox x:Name="originFilterBox" Width="338" Height="30"
4:</TextBox>
5: </StackPanel>
Notice in line 4, we are doing the binding to the OriginFilterText property on our ViewModel. Let’s take a look at what that looks like.
1: string _originFilterText;
2: public string OriginFilterText
3: {
4: get { return _originFilterText; }
5: set
6: {
7: if (_originFilterText != value)
8: {
9: _originFilterText = value;
10: RaisePropertyChanged(OriginFilterTextChangedEventArgs);
11:
12: PageIndex = 0;
13: LoadData();
14: }
15: }
16: }
Notice whenever the filter text is changed, we need to read load the data… But as you recall from the LoadData method above, it simply loaded all the data.. how do we wire it up such that it loads just the data we matching this filter?(OriginFilterText))
14: {
15: q = q.Where(emp => emp.Origin.StartsWith(OriginFilterText));
16: }
17:
18: _superEmployeeContext.Load(q, OnSuperEmployeesLoaded, null);
19: }
20:
Notice, we added lines 13-16.. we are simply adding a clause to the query… This clause is serialized, sent to the server where it is interpreted by the DAL (EF in this case) and executed on the database.
For the deep linking code, we need to filter on employeeID that we get from the URL, can you see how easy it would be to add in a filter by employeeID? Check out lines 13-16.: _superEmployeeContext.Load(q, OnSuperEmployeesLoaded, null);
24: }
You can see, we simply add another where clause that follows the same pattern.
Paging
Paging is pretty much that same as filtering we just looked at. We bind some UI controls to a property on the ViewModel, customize the load query based on that property. In this case the UI is a DataPager and the property is the CurrentPage.
First, we need to give a PageSize (number of entities to load at one time). I wanted this customizable by a designer, so a made it a property in the View.
1: <navigation:Page.DataContext>
2: <AppViewModel:SuperEmployeesViewModel
3: </navigation:Page.DataContext>
Then we bind the DataPager to this value and to our SuperEmployees list.
1: <data:DataPager x:Name ="pager1" PageSize="{Binding PageSize}" Width="379"
2: HorizontalAlignment="Left"
3: Source="{Binding SuperEmployees}"
4:
I defined the PageSize property in the PagedViewModelBase because it is generic to any data… But it is pretty much as you’d expect.
1: int pageSize;
2: public int PageSize
3: {
4: get { return pageSize; }
5: set
6: {
7: if (pageSize != value)
8: {
9: pageSize = value;
10: RaisePropertyChanged(PageSizeChangedEventArgs);
11: }
12: }
13: }
DataPager works through the IPagedCollection interface that is defined on the PagedViewModelBase. So this base class deals with all the FirstPage, NextPage, MoveTo(page) type of functionality and simply exposes a PageIndex property.
We can use that in our LoadData() method to do the appropriate paging code that should look familiar to anyone that has done data paging in the last 20 years. 😉:
24: if (PageSize > 0)
25: {
26: q = q.Skip(PageSize * PageIndex);
27: q = q.Take(PageSize);
28: }
29:
30: _superEmployeeContext.Load(q, OnSuperEmployeesLoaded, null);
31: }
In lines 24-28, we are adding to the query a Skip() and a Take(). First we skip over the number of entities on a page times the page we are currently on. Then we take the next number of entities on a page. Again, all those this eventually gets turned into TSQL code and executed on the SQL Server.
Sorting
As you might guess, sorting follows the exact same pattern. Some UI element in the view is bind to some property on the ViewModel which we access in the LoadData() method to customize our Linq query that is sent to the server.
In this case DataGrid is bound to the EntityCollectionView which implements ICollectionView.SortDescriptions. So when the DataGrid sorts it changes the SortDescription there.
So in our DataLoad() method we just need to access the SortDescription and add to the Linq query.: if (SuperEmployees.SortDescriptions.Any())
24: {
25: bool isFirst = true;
26: foreach (SortDescription sd in SuperEmployees.SortDescriptions)
27: {
28: q = OrderBy(q, isFirst, sd.PropertyName, sd.Direction == ListSortDirection.Descending);
29: isFirst = false;
30: }
31: }
32: else
33: {
34: q = q.OrderBy(emp => emp.EmployeeID);
35: }
36:
37: if (PageSize > 0)
38: {
39: q = q.Skip(PageSize * PageIndex);
40: q = q.Take(PageSize);
41: }
42:
43: _superEmployeeContext.Load(q, OnSuperEmployeesLoaded, null);
44: }
Check out lines 23-35. Here we are adding OrderBy to the linq query via a little helper method.
1: private EntityQuery<SuperEmployee> OrderBy(EntityQuery<SuperEmployee> q, bool isFirst, string propertyName, bool descending)
2: {
3: Expression<Func<SuperEmployee, object>> sortExpression;
4:
5: switch (propertyName)
6: {
7: case "Name":
8: sortExpression = emp => emp.Name;
9: break;
10: case "EmployeeID":
11: sortExpression = emp => emp.EmployeeID;
12: break;
13: case "Origin":
14: sortExpression = emp => emp.Origin;
15: break;
16: default:
17: sortExpression = emp => emp.EmployeeID;
18: break;
19: }
20:
21: if (isFirst)
22: {
23: if (descending)
24: return q.OrderByDescending(sortExpression);
25:
26: return q.OrderBy(sortExpression);
27: }
28: else
29: {
30: if (!descending)
31: return q.ThenByDescending(sortExpression);
32:
33: return q.ThenBy(sortExpression);
34: }
35: }
This helper method forms the correct sorting expression based on the a propertyname and a order..
Interacting with the View
One of the interesting aspects of how the ViewModel pattern comes together is how the ViewModel can interact with the View. So far we have looked at several examples of the View setting properties on the ViewModel and the view databinding to values on the ViewModel, but we have not yet seen how the ViewModel can do things like raise UI.
A good example of that is how I refactored the ExportToExcel functionality.
Let’s start at the view.. As you maybe seen, there is an Export to Excel button..
1: <Button Content="Export to Excel"
2: Width="105" Height="28"
3: Margin="5,0,0,0" HorizontalAlignment="Left"
4:</Button>
Notice the click is handled by code behind, rather than the ViewModel. This is because the logic is very View specific (raising a FileOpenDialog).
1: private void ExportToExcel_Click(object sender, RoutedEventArgs e)
2: {
3: var dialog = new SaveFileDialog();
4:
5: dialog.DefaultExt = "*.xml";
6: dialog.Filter = "Excel Xml (*.xml)|*.xml|All files (*.*)|*.*";
7:
8: if (dialog.ShowDialog() == false) return;
9:
10: using (var fileStream = dialog.OpenFile())
11: {
12: ViewModel.ExportToExcel(fileStream);
13: }
14: }
Then, in line 12, there is some actual logic that we might want to reuse or test separate, so we put that in the ViewModel.
1: public void ExportToExcel(Stream fileStream)
2: {
3: var s = Application.GetResourceStream(new Uri("excelTemplate.txt", UriKind.Relative));
4: var sr = new StreamReader(s.Stream);
5:
6: var sw = new StreamWriter(fileStream);
7: while (!sr.EndOfStream)
8: {
9: var line = sr.ReadLine();
10: if (line == "***") break;
11: sw.WriteLine(line);
12: }
13:
14: foreach (SuperEmployee emp in SuperEmployees)
15: {
16: sw.WriteLine("<Row>");
17: sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Name);
18: sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Origin);
19: sw.WriteLine("<Cell><Data ss:Type=\"String\">{0}</Data></Cell>", emp.Publishers);
20: sw.WriteLine("<Cell><Data ss:Type=\"Number\">{0}</Data></Cell>", emp.Issues);
21: sw.WriteLine("</Row>");
22: }
23: while (!sr.EndOfStream)
24: {
25: sw.WriteLine(sr.ReadLine());
26: }
27: }
Notice it does not interact with the view at all. The way the View gets the Stream to write the excel data to is totally up to the view. This makes unit testing easier and is a more clean separation of concerns.
AddSuperEmployee and the ErrorWindow work in very similar ways.
Unit Testing
No ViewModel post would be complete without at least some mention of unit testing. One of the key motivators for the ViewModel pattern is the ability to test the UI-logic of your application without having to worry about UI automation. The most important thing to do when you are unit testing is to focus on testing YOUR CODE. I happen to know that Microsoft employs lots of great testers and developers to to test our code (in the framework)… You should focus on isolating just your code and testing that. So effectively what we want to do is create another view for our ViewModel (in this case test code) and mock out the networking\data access layer.
First, let’s create a unit test project for our Silverlight client. I am going to use If you got the Silverlight Unit Test Framework installed correctly, you should see a project template.. (check out Jeff Wilco’s excellent post on getting this installed).
Then you need to add references to Microsoft.VisualStudio.QualityTools.UnitTesting.Silverlight.dll and Microsoft.Silverlight.Testing.dll as Jeff says in his post. You will also need to add a Project Reference to the MyApp project, this contains the code we want to test.
You will see we have our first test in place already.
1: [TestClass]
2: public class SuperEmployeesViewModelTest : SilverlightTest
3: {
4: [TestMethod]
5: public void TestMethod()
6: {
7: Assert.IsTrue(true);
8: }
To run it, simply set the new MyApp.Tests project as the startup
and hit F5.
We pass… but that test was clearly not very interesting… let’s look at adding a more interesting test.
But first, let’s recall the most important part of unit testing – only test the code you wrote. So for example, I don’t want to test the code that talks to the server, or the code that talks to the database on the server.. all of those are someone else’s code. So, I want to mock out the connection to the server. Luckily, DomainContext has a built in way to do this level of mocking. DomainContext has a DomainService that is responsible for all communication with the server. We just need to jump in there and provide our own, MockDomainService that doesn’t hit the server to get data, but rather just uses it’s own locally provided data.
1: public class MockDomainClient : LocalDomainClient {
2:
3: private IEnumerable<Entity> _mockEntities;
4:
5: public MockDomainClient(IEnumerable<Entity> mockEntities) {
6: _mockEntities = mockEntities;
7: }
8:
9: protected override IQueryable<Entity> Query(QueryDetails details,
10: IDictionary<string, object> parameters) {
11: var q = _mockEntities.AsQueryable();
12:
13: return q;
14: }
15: }
Here is my starter MockDomainClient.. notice I am deriving from the LocalDomainClient (via Nikhil’s excellent ViewModel post) and later we will look at the QueryDetails (via Jason Allor’s LinqService code).
Now, let’s add our first real test… Let’s verify our logic for dealing with the EmployeeIDFilter is correct. There are three steps to each unit test: (1) setup (2) test (3) verify.
Let’s look at the initialize first.
1: [TestMethod]
2: [Asynchronous]
3: public void TestLoadData_EmployeeIDFilter()
4: {
5: //initialize
6: var entityList = new List<Entity>()
7: {
8: new SuperEmployee () {
9: Name = "One",
10: EmployeeID = 1,
11: },
12: new SuperEmployee () {
13: Name = "Two",
14: EmployeeID = 2
15: }
16: };
17:
18:
19: var client = new MockDomainClient(entityList);
20: var context = new SuperEmployeeDomainContext(client);
21: var vm = new SuperEmployeesViewModel(context);
22:
23: vm.ErrorRaising += (s, arg) =>
24: {
25: throw new Exception("VM throw an exceptions", arg.Exception);
26: };
27:
28:
29:
30: //run test
31: //TODO
32:
33:
34: //check results
35: EnqueueDelay(1000);
36:
37: EnqueueCallback(() =>
38: {
39: //TODO asserts
40: });
41: EnqueueTestComplete();
42: }
Notice in line 2, we are making this an async test, because our ViewModel return results back in an async way, we need our test to do the same. In line 6-16, we are initializing the data. I really like having all the data right here in the test so it is easy to see what it going on and it is isolated. In line 19-21, we are creating a MockDomainClient and initializing it with this test data, then we are creating a SuperEmployeeDomainContext based on this mock DomainClient and finally, we create the ViewModel. In line 23-36, we are handling any errors that may be thrown, useful for debugging test.
Now, let’s flush out the run and verify steps…
1: //run test
2: vm.EmployeeIDFilter = "1";
3:
4:
5: //check results
6: EnqueueDelay(1000);
7:
8:
9: EnqueueCallback(() =>
10: {
11: Assert.IsTrue(vm.SuperEmployees.Count() == 1);
12: var res = vm.SuperEmployees.FirstOrDefault();
13: Assert.IsNotNull(res.EmployeeID == 1);
14: });
15: EnqueueTestComplete();
16: }
To run the tests, we simply set the EmployeeIDFilter to 1.. as a side effect we will load data… Then in lines 11-13 we do some basic asserts to make sure exactly one item is returned and that it has the right EmployeeID.
Now, we just run it and…we pass!
Testing the OriginFilter looks pretty much the same..
1: [TestMethod]
2: [Asynchronous]
3: public void TestLoadData_EmployeeOriginFilter()
4: {
5: //Setup
6: var entityList = new List<Entity>()
7: {
8: new SuperEmployee () {
9: Name = "One",
10: EmployeeID = 1,
11: Origin = "Earth",
12: },
13: new SuperEmployee () {
14: Name = "Two",
15: EmployeeID = 2,
16: Origin = "Earth",
17: },
18: new SuperEmployee () {
19: Name = "Three",
20: EmployeeID = 3,
21: Origin = "Raleigh",
22: }
23: };
24:
25:
26: var client = new MockDomainClient(entityList);
27: var context = new SuperEmployeeDomainContext(client);
28: var vm = new SuperEmployeesViewModel(context);
29:
30:
31: //run test
32: vm.OriginFilterText = "Earth";
33:
34:
35:
36: //check results
37: EnqueueDelay(1000);
38:
39:
40: EnqueueCallback(() =>
41: {
42: Assert.IsTrue(vm.SuperEmployees.Count() == 2);
43: foreach (var emp in vm.SuperEmployees)
44: {
45: Assert.IsTrue(emp.Origin == "Earth");
46: }
47:
48: });
49: EnqueueTestComplete();
50: }
And we run it and pass!
I leave it as an exercise to the reader to finish the other tests ;-)
Closing
I hope you enjoyed this overview of ViewModel and RIA Services. You can download the full demo files.
Thanks to Jeff Handley for going above and beyond to help me with this, and to Nikhil, John Papa and Pete Brown for their very useful feedback.
Update: Vijay Upadya helped me a bit with the unit testing side with LinqUtils…
Just link to full series from Part 25 ( this one )
shows "Modular development" as not released. Is it just list not up to date ( it doesn’t have "ViewModel" link ) , or it was really not released yet.
Yup – still working on Modular Development… that is comming.
I will add ViewModel to it.
This is a great series please keep it going!
I’d like to see these scenarios if possible:
1) authorisation
2) Invoice-Orders-Order lines i.e. N-level add new/save/cancel
3) restrict keyboard and clipboard input
And on a side note, which probably doesn’t belong here but I’ve started typing…. is there any way to check that a page/usercontrol has been cleared from memory like some equivalent to the old Forms collection?
Hi Brad, these series have been a huge help.
I’m wondering what about handling sorting and filtering on the client side? I already have all my entities pulled down via RIA (it’s not very many). I’d much rather have the client handle sorting / filtering w/o having to launch another request to the server.
I’ve tried using a CollectionViewSource, setting the Source property to a PagedEntityCollectionView<T> but it throws a "Unsupported type of source for a CollectionView".
Any thoughts?
Further to my previous comment just want to confirm the error message is:
Unsopported (sic) type of source for a collection view.
Complete with typo. That should probably be fixed!
Instead of commanding you use a thin code-behind layer calling the vm methods. Is this your recommendation or will it be replaced by commands in a future blog entry?
> Instead of commanding you use a thin code-behind > layer calling the vm methods. Is this your >recommendation or will it be replaced by >commands in a future blog entry?
Rolf – Yea, i thought about commanding but decided to start simple… The blog post is already very long (too long?) and i don’t want to add another concept. Nikhil has a great example of commanding on his blog..
Is it possible to host a .Net Ria Service in a WinForm/WPF/Consol or Application or service too? If yes a a Blog (Part 26) would be nice.
I want to have more than one control on the page showing the data. For example, two SuperEnployee lists, each have its filtering, sorting, ordering and paging. But currently action made in one list will affect the view on both lists. Because they are bound to the entity table itself. Anyway we can get around that?
The idea is I want to view up-to-date data ie, if value is changed in one control, all the controls is sync because they all bound to the same EF at the back.
Hi Brad,
I’d like to thank you for this great examples because I’ve learned a lot from it. I think this series should be posted in the oficial RIA Services site, because the samples that are available there aren’t this good.
In this sample you have a few collection implementations designed to be Bound to any control who listen to ICollectionView and other interfaces.
That is great because you can have all your logic in the viewmodel and bind your collections to any Selector control or even datagrids and it should work.
I’d like to know if you coded those classes or if you’ve got it from somewhere else, because the implementation feels incomplete (GroupDescriptors return null, etc), and it would be great to have a full viewmodel ready ICollectionView/IEditableCollectionView implementation.
Maybe we could expect that to be released in a future version of RIA Services? Since the implementation you provided in your sample works *only* with RIA entities.
Best regards,
MF.
Select any item in the grid.
In the DataForm, clear the name.
Now, select another item in the grid. Boom.
How can this be handled?
/ingo
Hi Brad , how can i bind and get selected value in MVVM model ,because in combobox we are not able to get selected value. and how can i provide relative binding in silver light?
Hi,
Unfortunately the file with full demo files is unavailable:”>
It looks like the site is down:
Could you please upload the file to another site?
I downloaded the files and tried to run the app but they don’t work because seo has been eliminated. Do you have an updated set that will run with the latest Silverlight release?
Sorry about the Alan, I have not updated this for the latest RIA Services bits…
Hello Brad,
In this example you implemented an EntityCollectionView class and a PagedEntityCollectionView.
Is there any reason why you didn’t use the System.Windows.Data.PagedCollectionView class?
Thanks,
MF.
I think the only change required to get this to run is to change EntityList to EntitySet. It looks like EntityList was removed in the Beta. | https://blogs.msdn.microsoft.com/brada/2009/09/07/business-apps-example-for-silverlight-3-rtm-and-net-ria-services-july-update-part-25-viewmodel/ | CC-MAIN-2019-35 | refinedweb | 4,060 | 56.25 |
Python is one of the most widely used languages out there. Be it web development, machine learning and AI, or even micro-controller programming, Python has found its place just about everywhere.
This article provides a brief introduction to Python for beginners to the language. The article is aimed at absolute beginners with no previous Python experience, although some previous programming knowledge will help, but is not necessarily required.
I've found that the best way to learn is to try to understand the theory and then implement the example on your own. Remember, you will not get better at programming unless you practice it!
The article is divided into following sections:
- Why Learn Python
- Installation and Setup
- Running your First Program
- Python Variables
- Operators in Python
- Conditional Statements
- Loops
- Lists, Tuples, and Dictionaries
- Example Application
- What's Next
Why Learn Python
The question arises here that why you should learn Python. There are lots of other programming languages; you might have even learned some of them. Then why Python, what is so special about it? There are various reasons to learn Python, most important of which have been enlisted below.
Easy to Learn
Python is considered one of the most beginner-friendly languages. Its syntax is among the simplest of any programming language: you don't have to declare complex variable types or use brackets to group blocks of code. Python is built upon the fundamental principle of beginner-friendliness.
Highly In-Demand
According to a recent survey by indeed.com, Python developers are the second-highest-paid developers in the USA. The huge job potential of Python can be estimated by the fact that in 2014 the average hiring rate for programmers decreased by 5%, yet demand for Python developers still grew by 8.7%.
Ideal for Web development
Python is lightning fast to develop with when compared to other web development languages such as PHP and ASP.NET. Also, Python has a myriad of amazing frameworks such as Django, Flask, and Pylons, which make web development even simpler. Websites like Instagram, Pinterest, and The Guardian are all based on the popular Django framework.
Used Heavily for Machine learning and AI
Python is the most widely used language for machine learning and artificial intelligence work. Python libraries such as TensorFlow and scikit-learn make AI tasks much simpler when compared to MATLAB or R, which previously were the most widely used environments for data science and AI tasks.
Works with Raspberry Pi
Python is the most popular programming language for the Raspberry Pi, a pocket-sized microcomputer used in a wide range of applications such as robots, gaming consoles, and toys. In short, learn Python if you want to build things with the Raspberry Pi.
Corporate Darling
It would not be an exaggeration to say that Python is the darling of big corporations such as Google, Yahoo, NASA, Disney, and IBM. These companies have incorporated Python at the core of many of their applications.
Large Community
Python has one of the largest programming communities online, and it continues to grow. Python has the fifth-largest Stack Overflow community and the third-largest meetup community. And most importantly, it is the fourth most used language on GitHub, which means there is tons of existing code to learn from.
Installation and Setup
Though there are several ways to install Python for Windows, for the sake of this article we will use Anaconda. It is undoubtedly the most widely used Python environment at the moment. To download Anaconda, go to this link:
Scroll down a bit and you should see the download options. Select, Python 3.6 as shown in the following screenshot:
This will download an Anaconda installer to your computer. Open the installer and you will see the following options:
Follow these steps for installation
- Click the "Next" button. Terms and Condition will appear, you can read if you have enough time but you can click "I Agree" anyways.
- In the next window select the type of installation you want. If you are absolute beginner to Python I would recommend selecting, "Just me" option.
- Next, select the installation folder (Default is best).
- Advance options dialogue box will appear, keep the first option unchecked and the second checked and click "Install". This is shown in the following screenshot.
Now sit back and have some coffee, the installation might take some time.
Once the installation is complete, you will see the message:
Click "Next" and then "Finish" button on the subsequent dialogue box to complete the installation.
Running your First Program
Although you can run Python programs via the command line as well, it is typically better for beginners to use a text editor. Luckily, with the installation of Anaconda, you get Jupyter Notebook installed as well. Jupyter Notebook is a web-based application that allows users to create, share, and manage their documents. We will use Jupyter to write our Python code in this article.
To open Jupyter, you can go to the Start Menu and find the "Jupyter Notebook" application. You can also search for it in Applications. This is shown in the following:
Open the "Jupyter Notebook" application. It will then be opened in your default browser. For compatibility, I would recommend that you use Google Chrome as your default browser, but other browser types like Firefox would work as well.
When the application opens in your browser, you will see the following page:
On the right-hand side of the page you will see an option labeled "New". Click that button and a dropdown list will appear. Select "Python 3" from the dropdown list. This will open a brand new notebook for you, which looks like this:
Here you can easily write, save, and share your Python code.
Let's test and make sure everything is working fine. To do this, we'll create a simple program that prints a string to the screen.
Enter the following code in the text field in your Jupyter notebook (shown in the screenshot above):
print("Welcome to Python!")
The print() function simply displays whatever is passed to it on the screen.
To run code in "Jupyter Notebook" just press "Ctrl + Enter". The output of the above code should look like the following:
And there you have it, we have successfully executed our first Python program! In the following sections, we'll continue to use Jupyter to teach and discuss some core Python features, starting with variables.
Python Variables
Simply put, variables are memory locations that store some data. You can use variables to store a value, whether it be a number, text, or a boolean (true/false) value. When you need to use that value again later in your code, you can simply use the variable that holds that value. You can almost think of them as simple containers that store things for you for later use.
It is important to mention here that, unlike Java, C++, and C#, Python is not a statically typed language. This means that you do not need to specify the type of a variable when you declare it. Python implicitly infers the variable type at runtime depending upon the type of data stored in it. For instance, you don't need to specify
int n = 10 to define an integer variable named "n". In Python we simply write
n = 10 and the type of variable "n" will be implicitly understood at runtime.
There are five core data types in Python:
- Numbers
- Strings
- Lists
- Tuples
- Dictionaries
In this section we will only take a look at numbers and strings. Lists, tuples, and dictionaries will be explained further in their own respective section later in this article.
Numbers
The number type of variables store numeric data. Take a look at the following simple example:
num1 = 2
num2 = 4
result = num1 + num2
print(result)
Here in the above example we have two numeric variables,
num1 and
num2, with both containing some numeric data. There is a third number type variable,
result, which contains the result of the addition of the values stored in
num1 and
num2 variables. Finally, on the last line the
result variable is printed to the screen.
The output will be as follows:

6
There are four different number data types in Python:
- Integers, such as real whole-valued numbers: 10
- Long integers, which in Python 2 were written with an "L" suffix (e.g. 1024658L); in Python 3 plain integers can hold arbitrarily large values, so there is no separate long type
- Integers can also be written in hexadecimal and octal form
- Floating point data, which are numbers expressed in decimals: 3.14159
- Complex data, which is used to represent complex number types: 2 + 3j
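As a quick sketch of these types (the exact values here are our own examples), the built-in type() function reports which of them a given value belongs to:

```python
# type() tells us which data type a value has
print(type(10))       # <class 'int'>
print(type(3.14159))  # <class 'float'>
print(type(2 + 3j))   # <class 'complex'>
```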
Strings
Strings are used to store text data in Python. Take a look at the following example:
fname = "Adam" sname = " Grey" fullname = fname + sname print(fullname)
In the above example we have two string variables:
fname and
sname. These store the first name and surname of some person. To combine these two strings we can use "+" operator in Python. Here we are joining the
fname and
sname variables and store the resultant string in the
fullname variable. Then we print the
fullname variable to the screen.
The output is as follows:
There are hundreds of string operations in Python; we will have a dedicated article on these functions in the future.
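As a small preview of those operations (the method names below are standard Python string methods; the example values are our own), here are a few in action:

```python
name = "python"
print(name.upper())       # PYTHON  -> uppercase copy
print(name.capitalize())  # Python  -> first letter capitalized
print(len(name))          # 6       -> length of the string
print("th" in name)       # True    -> substring test
```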
Operators in Python
Operators in programming are constructs that allow you to manipulate operands to perform a specific function. They are very similar to operators in real life, e.g. arithmetic operators such as addition and subtraction, comparison operators such as greater than and less than, and logical AND/OR operators.
There are seven types of operators in Python:
- Arithmetic Operators
- Logical Operators
- Assignment Operators
- Comparison Operators
- Bitwise Operators
- Identity Operators
- Membership Operators
In this article we'll keep it simple and study only the first four operators. The other operators are beyond the scope of this article.
Arithmetic Operators
Arithmetic operators perform mathematical operations such as addition, subtraction, multiplication, division, and exponentiation on the operands. The details of the arithmetic operators are given in the following table:
Suppose the variables
n1 and
n2 have values of 4 and 2, respectively.
You may recall seeing an example of the arithmetic addition operator earlier in the number data type section. In Python, the addition operator can be applied to any kind of number, and even to strings.
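In case the operator table above did not survive formatting, here is a sketch applying each arithmetic operator to the same n1 and n2 values:

```python
n1 = 4
n2 = 2
print(n1 + n2)   # 6    addition
print(n1 - n2)   # 2    subtraction
print(n1 * n2)   # 8    multiplication
print(n1 / n2)   # 2.0  division (always a float in Python 3)
print(n1 // n2)  # 2    floor division
print(n1 % n2)   # 0    modulus (remainder)
print(n1 ** n2)  # 16   exponent
```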
Logical Operators
The logical operators, which help you perform simple Boolean algebra, supported by Python are as follows:
Suppose
o1 and
o2 have values
True and
False, respectively.
The following code helps explain the above operators with an example:
o1 = True
o2 = False

r1 = (o1 and o2)
print(r1)

r2 = (o1 or o2)
print(r2)

r3 = not(o1)
print(r3)
The output of the above code is:
False
True
False
Assignment Operators
Assignment operators allow you to "give" a value to variables, which may be the result of an operation. The following table contains some of the most widely used assignment operators in Python:
Take a look at the following example to see some of the assignment operators in action:
n1 = 4
n2 = 2

n1 += n2
print(n1)

n1 = 4
n1 -= n2
print(n1)

n1 = 4
n1 *= n2
print(n1)

n1 = 4
n1 /= n2
print(n1)
The output of the above code will be:
6
2
8
2.0
Notice how in the last operation we get a floating point number as our result, whereas we get integer numbers in all of the previous operations. This is because division is the only mathematical operation in our example that can turn two integer numbers into a floating point number (in Python 3, the / operator always returns a float).
Comparison Operators
Comparison operators are used to compare two or more operands. Python supports the following comparison operators:
Suppose
n1 is 10 and
n2 is 5 in the following table.
Consider the following simple example of comparison operator:
n1 = 10
n2 = 5

print(n1 == n2)
print(n1 != n2)
print(n1 > n2)
print(n1 < n2)
print(n1 >= n2)
print(n1 <= n2)
The output of the above code is:
False
True
True
False
True
False
Conditional Statements
Conditional statements are used to select the code block that you want to execute based upon a certain condition. Suppose in a hospital management system you want to implement a check so that a patient aged over 65 receives priority treatment while the others do not; you can do so with conditional statements.
There are four types of conditional statements:
- "if" statements
- "if/else" statements
- "if/elif" statement
- Nested "if/else" statements
Basically, the second and third types are just extensions of the first statement type.
If Statement
The "if statement" is the simplest of all the statements. If the given condition resolves to true (like
1 < 10), then the code block that follows the "if statement" is executed. If the condition returns false (like
1 > 10), then the code is not executed.
Take a look at the following example.
age = 67
if age >= 65:
    print("You are eligible for priority treatment.")
print("Thank you for your visit")
Pay close attention to the syntax of conditional statements. In most of the other programming languages, the code block that is to be executed if the "if" condition returns true is enclosed inside brackets. Here in Python you have to use colon after the "if" condition and then you have to indent the code that you want to execute if the condition returns true.
Python is widely considered to be a much cleaner language than many others because of the absence of brackets. Indentation is used instead to specify scope, which has its own pros and cons.
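To make the role of indentation concrete, here is a minimal sketch (our own example): the indented line belongs to the "if" block, while the unindented line does not:

```python
age = 70
if age >= 65:
    print("inside the if block")   # indented: runs only when the condition is true
print("outside the if block")      # not indented: always runs
```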
In the above example we have an
age variable with value 67. We check if
age is greater than 65, and if this condition returns true then we print a message telling the user that he/she is eligible for priority treatment. Notice that this message is indented, which tells us it is the code to be executed following a true condition. Finally, we simply print the thank you message on the screen. The output of this code will be:
You are eligible for priority treatment.
Thank you for your visit
Now let's set the value of the
age variable to 55 and see the difference.
age = 55
if age >= 65:
    print("You are eligible for priority treatment.")
print("Thank you for your visit")
The output of the above looks like this:
Thank you for your visit
Notice that this time the condition did not return true, hence the statement telling the patient that he is eligible for priority treatment is not printed to the screen. Only the greeting appeared, since it was outside (not indented under) the body of the "if" statement.
If/Else Statement
The "if/else" statement is used to specify the alternative path of execution in case the "if" statement returns false. Take a look at the following example:
age = 55
if age >= 65:
    print("You are eligible for priority treatment.")
else:
    print("You are eligible for normal treatment")
print("Thank you for your visit")
Here the code block followed by the "else" statement will be executed since the
age variable is 55 and the "if" condition will return false. Hence, the "else" statement will be executed instead. The output will be as follows:
You are eligible for normal treatment
Thank you for your visit
If/Elif Statement
The "if/elif" statement is used to implement multiple conditions. Take a look at the following example:
age = 10
if age >= 65:
    print("You are eligible for priority treatment.")
elif age > 18 and age < 65:
    print("You are eligible for normal treatment")
elif age < 18:
    print("You are eligible for juvenile treatment")
print("Thank you for your visit")
In the above code we have implemented three conditions. If
age is greater than 65, if
age is between 65 and 18, and if the
age is less than 18. Based on the value of the
age, a different print statement will be executed. Here, since the
age is 10, the third condition (age < 18) returns true and you will see the following output:
You are eligible for juvenile treatment
Thank you for your visit
If none of the conditions were to return true then none of the conditional
print() statements would have executed (only the final thank-you message would). This differs from the "if/else" example, where either the "if" block or the "else" block is always executed. In the case of "if/elif" this isn't necessarily true. However, you can add a normal "else" statement at the end that gets executed if none of the conditions above it return true.
Using this method I just described, we could re-write the previous example to look like this:
age = 10
if age >= 65:
    print("You are eligible for priority treatment.")
elif age > 18 and age < 65:
    print("You are eligible for normal treatment")
else:
    print("You are eligible for juvenile treatment")
print("Thank you for your visit")
This code would result in the same output as the previous example.
Nested If Else Statement
Nested "if/else" statements are used to implement nested conditions (i.e. conditions within another condition). Consider the following example:
age = 67 insurance = "yes" if age >= 65: print("You are eligible for priority treatment.") if insurance == "yes": print("The insurance company will pay for you.") else: print("You have to pay in advance.") else: print("You are eligble for normal treatment") print("Thank you for your visit")
Here we have an outer condition that if
age is greater than or equal to 65, then we check whether the patient has insurance. If the patient has insurance, the insurance company will pay the bill later; otherwise the patient has to pay in advance.
Loops
Iteration statements, more commonly known as loops, are used to repeatedly execute a piece of code. Suppose you have to print the names of 100 people on the screen. You would either have to write 100 print statements or use hundreds of escape characters in one print statement. If you have to perform this task often, you end up writing tedious, repetitive code. A better way is to make use of loops.
There are two main types of loops in Python:
- For loop
- While Loop
Keep in mind that you can nest loops just like we did with the conditional statements, but we won't go in to that here.
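As a brief sketch of what nesting loops looks like (this example is our own), the inner loop runs to completion for every single iteration of the outer loop:

```python
for i in [1, 2, 3]:
    for j in [1, 2]:
        # prints every (i, j) combination: 1 1, 1 2, 2 1, 2 2, 3 1, 3 2
        print(i, j)
```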
The For Loop
The "for loop" is used to iterate over a collection of elements. The loop keeps executing until all the elements in the collection have been traversed. Take a look at the simple example of for loop:
nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

for n in nums:
    print(5 * n)
The above example simply prints the product of each item in
nums and 5. Here we have a list
nums which contains integers from 1 to 10. Don't worry, we will study lists in detail in a later section. For now, just consider it as a collection of elements, which in this case are numbers.
Pay close attention to the code above. It follows this syntax:
for [temp_var] in [collection]:
    [statements]
In the first iteration of the "for loop" the 1 is stored in the temporary variable
n. This 1 is multiplied by 5 and the result is printed on the screen. In the second iteration the second element from the
nums collection (i.e. 2) is stored in the
n variable and 2 is multiplied by 5. These iterations continue until all the elements in the
nums collection have been traversed. After the last element (10) is encountered, the loop stops and code execution moves past the "for loop".
The output of the above code is:
5
10
15
20
25
30
35
40
45
50
The While Loop
The "while loop" is different from the "for loop" in that it keeps executing while a certain condition keeps returning true. After each iteration of the while loop, the condition is re-evaluated. When the condition finally returns false, the while loop stops executing and exits.
Take a look at the following example:
x = 50
while x > 0:
    print(x)
    x = x - 5
Here the loop will keep executing until the value of
x is no longer greater than 0. The
x variable initially has a value of 50 and during each iteration we decrement it by 5. So, after 10 iterations the value reaches 0, the condition becomes false, and the loop stops executing.
The output will look like this:
50
45
40
35
30
25
20
15
10
5
While loops are good for times when you don't already know how many iterations you need. For loops iterate a set number of times, whereas while loops can iterate an unknown number of times, or even an infinite number of times.
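As a sketch of that idea (the numbers here are our own), the loop below runs until a value shrinks below a threshold; how many iterations that takes is not written anywhere in the code:

```python
x = 100
count = 0
while x >= 1:
    x = x / 2        # keep halving the value
    count += 1       # track how many iterations it took
print(count)         # 7
```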
Functions in Python
Functions in programming are constructs that perform specific tasks. Functions come in handy in scenarios where you have to perform a task multiple times throughout your code. Instead of re-writing the same functionality again and again, you can create a function that performs that task and then call that function wherever and whenever you want.
Notice that there is a difference between doing a task repeatedly and doing a task multiple times. Loops are used where you have to perform a task repeatedly in sequence. Functions, on the other hand, are used when you have to perform the same task at different places throughout your code.
Consider a scenario where you have to print a long statement to the screen at different times. Instead of writing the statement out each time, you can write a function that prints it and then call the function wherever you want to print the statement.
Take a look at the following example:
def displayWelcome():
    print("Welcome to Python. This article explains the basics of Python for absolute beginners!")
    return

displayWelcome()
print("Do something here")
displayWelcome()
print("Do some other stuff here")
There are two things I'd like to point out in this code: the function definition and the function calls.
Function definition refers to defining the task performed by the function. To define a function you have to use keyword
def followed by the name of the function, which is
displayWelcome in the above example. You can use any function name, but it is best to choose a semantic, descriptive one. The function name is followed by opening and closing parentheses. The parentheses are used to define parameters or any default input values, which we will see in the next example. After the parentheses you have to use a colon, and on the next line the body of the function is defined. A function usually ends with a
return statement, but it is not required if a value is not being returned.
In the second part of our example code you'll see the function call. To call a function you simply write the function name followed by a pair of parentheses. If a function accepts parameters, you pass them inside the parentheses.
The output of the above code will be:
Welcome to Python. This article explains the basics of Python for absolute beginners!
Do something here
Welcome to Python. This article explains the basics of Python for absolute beginners!
Do some other stuff here
You can see that our long string was printed twice. Once before the "Do something here" statement, and once after it, which matches the order of our function calls within the code.
You can imagine how important this is to programming. What if we needed to perform a more complex task, like downloading a file or performing a complex calculation? It would be wasteful to write out the full code multiple times, which is where functions come into play.
Functions with Parameters
Now let's see how to pass parameters to a function. A parameter is just a variable that is given to the function from the caller.
Let's write a function that adds two numbers passed to it as parameters in the parenthesis:
def addNumbers(n1, n2):
    r = n1 + n2
    return r

result = addNumbers(10, 20)
print(result)

result = addNumbers(40, 60)
print(result)

result = addNumbers(15, 25)
print(result)
In the above code we have the
addNumbers function, which accepts two values from the function call. The values are stored in the
n1 and
n2 variables. Inside the function these values are added and stored in the
r variable. The value in the
r variable is then returned to the caller of the function.
In the first call to
addNumbers we pass two values, 10 and 20. Note that the order of parameters matters. The first value in the function call is stored in the first parameter of the function, and the second value is stored in the second parameter. Therefore 10 will be stored in
n1 and 20 will be stored in
n2. We then display the result of the function via the print() function.
The result of the above code will be:
30
100
40
You can see that every time the function is called, our
result variable contains the addition of the two numbers passed.
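Earlier we mentioned that the parentheses can also define default input values. As a sketch of that (this greet function is our own example, not from the text above), a parameter with a default is used whenever the caller omits that argument:

```python
def greet(name, greeting="Hello"):
    # greeting falls back to "Hello" if the caller does not supply one
    return greeting + ", " + name + "!"

print(greet("Adam"))             # Hello, Adam!
print(greet("Adam", "Welcome"))  # Welcome, Adam!
```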
Lists, Tuples, and Dictionaries
Lists, tuples, and dictionaries are three of the most commonly used data structures in programming. Though all of them store a collection of data, the main difference lies in the following:
- How you place data in the data structure
- How the data is stored within the structure
- How data is accessed from the data structure
In the next few sections you'll see some of these properties for each data structure.
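As a side-by-side sketch (the example values are our own), the three structures look like this when holding comparable data:

```python
mylist  = [1, 2, 3]                  # ordered, mutable, square brackets
mytuple = (1, 2, 3)                  # ordered, immutable, parentheses
mydict  = {'a': 1, 'b': 2, 'c': 3}   # key-value pairs, curly brackets
print(mylist[0], mytuple[0], mydict['a'])  # 1 1 1
```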
Lists
Lists are used to store a collection of items of varying data types. The elements are stored inside square brackets where each element is separated from each other with a comma.
Let's see how to create a simple list:
randomlist = ['apple', 'banana', True, 10, 'Mango']
You can see we have stored strings, a number, and a Boolean in this list. In Python (unlike statically typed languages), a single list can store any mix of data types, as shown above. More commonly, however, lists tend to store many values of the same data type.
Accessing List Elements
To access an element in a list, simply write the name of the list variable followed by a pair of square brackets. Inside the brackets, specify the index number of the element you want to access. It is important to note that in Python, as in many other programming languages, list indexes start at 0. This means the first element in every list is at position 0, and the last element is at position n-1, where n is the length of the list. This is called zero-based indexing.
Take a look at this code:
print(randomlist[0])
print(randomlist[4])
Here we are accessing the first and fifth element of the
randomlist list. The output will be:
apple Mango
You may have also noticed that the elements in the list remain in the order in which they are stored. They will remain in the same order unless they are explicitly moved or they are removed.
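A couple of related conveniences are worth sketching here (both are standard Python features): len() returns the number of elements in a list, and negative indexes count from the end:

```python
randomlist = ['apple', 'banana', True, 10, 'Mango']
print(len(randomlist))   # 5 -> number of elements
print(randomlist[-1])    # Mango -> last element
print(randomlist[1:3])   # ['banana', True] -> slice of positions 1 and 2
```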
Assigning New List Elements
To assign a value to an existing list position, you must specify the index of the position you want to assign the value to and then use the assignment operator (
=) to actually assign the value.
See the code below:
# Define the list
randomlist = ['apple', 'banana', True, '10', 'Mango']

# Print the current value at index 0
print(randomlist[0])

# Assign a new value at index 0
randomlist[0] = 'Peach'

# Print the updated value
print(randomlist[0])
Here we have updated the first element of the list. We displayed the value of the element before and after the update to show the change.
Adding List Elements
In the last sub-section we showed how to assign a value to a list, but this only applies if an item already exists at that position. What if we want to expand the size of the list and add a new item without getting rid of any of our previous items? We do this by using the
append() function.
randomlist = ['apple', 'banana', True, '10', 'Mango']
print(randomlist)

# Add a new element
randomlist.append(0)
print(randomlist)
When running this code you will notice that the value 0 is shown at the end of the list after calling the
append function. Our list now has a total of 6 elements in it, including our new value.
Deleting List Elements
To remove an element, we simply use the
del keyword. Take a look at the following example to see how it is used:
randomlist = ['apple', 'banana', True, '10', 'Mango']
print(randomlist)

# Remove the second element
del randomlist[1]
print(randomlist)
Here we deleted the second element of the
randomlist list. We use the del keyword followed by the list name and the index of the element we want removed. The output is:
['apple', 'banana', True, '10', 'Mango']
['apple', True, '10', 'Mango']
Tuples
Tuples are similar to lists in that they store elements of varying data types. The main distinction between tuples and lists is that tuples are immutable. This means that once you have created a tuple you cannot update the value of any element in it, nor can you delete an element.
In terms of syntax, tuples differ from lists in that they use parentheses, whereas lists use square brackets. Even with these differences, tuples are still very similar to lists: elements are accessed the same way, and element order is preserved, just like in lists.
Here is how you can create a tuple:
randomtuple = ('apple', 'banana', True, '10', 'Mango')
Accessing Tuple Elements
Tuple elements can be accessed in same way as lists:
randomtuple = ('apple', 'banana', True, '10', 'Mango')

print(randomtuple[1])
print(randomtuple[4])
In the above script we are accessing the second and fifth element of the tuple. As expected, this would result in the following output:
banana
Mango
Assigning Values to Tuple Elements
As discussed earlier, it is not possible to assign new values to already declared tuple elements. So you cannot do something like this:
randomtuple[1] = 10 # This operation is not allowed
Attempting an assignment like this results in the following error being raised:
TypeError: 'tuple' object does not support item assignment
Deleting a Tuple Element
You cannot delete an individual tuple element. Attempting to do so would result in a raised error, just like we showed when you try to re-assign an element:
TypeError: 'tuple' object doesn't support item deletion
However you can delete a tuple itself using "del" function as shown in the following example:
randomtuple = ('apple', 'banana', True, '10', 'Mango')
print(randomtuple)

del randomtuple
print(randomtuple)
If you try to access a deleted tuple, as in the second print statement above, you will receive an error like the following:
NameError: name 'randomtuple' is not defined
Dictionaries
Like lists and tuples, dictionary data structures store a collection of elements. However, they differ quite a bit from tuples and lists because they are key-value stores. This means that you give each value a key (most commonly a string or integer) that can be used to access the element at a later time. When you have a large amount of data, this is more efficient for accessing data than traversing an entire list to find your element.
When you create a dictionary, each key-value pair is separated from the other by a comma, and all of the elements are stored inside curly brackets. See the following code:
randomdict = {'Make': 'Honda', 'Model': 'Civic', 'Year': 2010, 'Color': 'Black'}
Dictionaries are very useful when you have a lot of information about a particular thing, like the car example we showed above. They're also useful when you need to access random elements in the collection and don't want to traverse a huge list to access them.
Accessing Dictionary Elements
Dictionary elements are accessed using their keys. For instance if you want to access the first element, you will have to use its key, which in this case is 'Make'. Take a look at the following example to see the syntax:
randomdict = {'Make': 'Honda', 'Model': 'Civic', 'Year': 2010, 'Color': 'Black'}

print(randomdict['Make'])
print(randomdict['Model'])
Here we are accessing the first and second elements of the randomdict dictionary via their keys. The output will look like this:
Honda
Civic
Because dictionary elements are accessed using their keys, the elements are not stored in a guaranteed order in the data structure, and iterating over them is not as straightforward as it is for lists.
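That said, dictionaries can still be iterated; as a small sketch (this is a standard Python idiom), the items() method yields each key together with its value:

```python
randomdict = {'Make': 'Honda', 'Model': 'Civic'}
for key, value in randomdict.items():
    # each iteration gives one key-value pair
    print(key + ": " + str(value))
```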
Assigning Values to Dictionary Elements
To assign value to already existing dictionary element you first have to access the element and then assign a new value to it. The following example shows this:
randomdict = {'Make': 'Honda', 'Model': 'Civic', 'Year': 2010, 'Color': 'Black'}

print(randomdict['Make'])
randomdict['Make'] = 'Audi'
print(randomdict['Make'])
The output will be:
Honda
Audi
Deleting Dictionary Elements
There are three different ways to delete elements in dictionaries: You can delete individual elements, you can delete all the elements, or you can delete the entire dictionary itself. The following example shows all of these three ways:
randomdict = {'Make': 'Honda', 'Model': 'Civic', 'Year': 2010, 'Color': 'Black'}

# Displaying complete dictionary
print(randomdict)

# Deleting one element
del randomdict['Make']
print(randomdict)

# Clearing whole dictionary
randomdict.clear()
print(randomdict)

# Deleting dictionary itself
del randomdict
print(randomdict)
Here we are displaying the dictionary after performing each of the three delete operations. Don't worry about the "#" characters and the text following them in the code: these are comments. Comments are not executed; they just provide information about the code, and are purely optional.
The output of the above code will be:
{'Color': 'Black', 'Make': 'Honda', 'Model': 'Civic', 'Year': 2010}
{'Color': 'Black', 'Model': 'Civic', 'Year': 2010}
{}
Traceback (most recent call last):
  File "dict_test.py", line 16, in <module>
    print(randomdict)
NameError: name 'randomdict' is not defined
Notice that since we deleted the dictionary at the end, an error is thrown indicating that
randomdict is not defined.
Example Application
Now that we've gone through many of the most basic concepts in Python, let's put them to good use and create a simple application using what we've learned.
Let's say you have so many cars that you just can't keep track of them all, so we'll create an application to do it for you. It'll work by continually asking you if you want to add cars to your inventory, and if you do, then it will ask for the details of the car. If you don't, the application will print out the details of all of your cars and exit.
Here is the full code, which we'll explain in detail in the rest of this section:
cars = []

add_inventory = input('Add inventory? [y/n] ')

while add_inventory == 'y':
    # Get car data from user
    make = input('Make: ')
    model = input('Model: ')
    year = input('Year: ')
    miles = input('Miles: ')

    # Create car dictionary object and save it to list
    car = {'Make': make, 'Model': model, 'Year': year, 'Miles': miles}
    cars.append(car)

    # Ask user if we should keep going
    add_inventory = input('Add inventory? [y/n] ')

print('')
print('Here are your cars:')

# Display all of our cars
for c in cars:
    print('Make: ' + c['Make'])
    print('Model: ' + c['Model'])
    print('Year: ' + c['Year'])
    print('Miles: ' + c['Miles'])
    print('')
In the first line of our code we create a list that will hold the details of all of our cars. Each element in the list will be a dictionary item, which will contain details like "Make", "Model", etc.
On the second line of code we use a built-in Python function called
input(), which displays the given text to the user via the command line and then waits for the response. (In Python 2 this function was named raw_input().) Any text that is entered by the user is then saved in the
add_inventory variable.
We then check if the user wanted to add inventory by checking for a "y" character. If the user does want to add inventory, then we use the
input() function again to gather information about the car. Once we have everything we need, we create a
car variable that stores a dictionary with all of our car data. This dictionary object is then saved in our
car list using the
append() method, which you may recall adds our element to the end of the list.
Using a "while-loop", we continually check to see if the user wants to add more cars to their inventory. This could go on for as long as the user keeps entering "y" in the "Add inventory?" prompt, which is exactly what "while-loops" are good for.
When the user finally enters "n" (or any character that isn't "y"), we will print out a full list of their inventory for them. This is done using a "for-loop". For each item in the list, we store the current item in the temporary
c variable and retrieve all of the relevant car data using its keys, which we then print out to the screen using string concatenation (or "addition"). This adds the two strings together to become one before getting printed to the screen.
Running this code via the command line may look something like this:
$ python cars.py
Add inventory? [y/n] y
Make: Porsche
Model: 911 Turbo
Year: 2017
Miles: 2000
Add inventory? [y/n] y
Make: Ferrari
Model: 488 GTB
Year: 2016
Miles: 12000
Add inventory? [y/n] y
Make: Lamborghini
Model: Aventador
Year: 2017
Miles: 8000
Add inventory? [y/n] n

Here are your cars:
Make: Porsche
Model: 911 Turbo
Year: 2017
Miles: 2000

Make: Ferrari
Model: 488 GTB
Year: 2016
Miles: 12000

Make: Lamborghini
Model: Aventador
Year: 2017
Miles: 8000
What's next?
This article provides a very basic introduction to the Python programming language. We have touched on only the most fundamental concepts, including variables, operators, conditional statements, loops, and more.
An entire article could be dedicated to each of these topics, so I'd suggest finding more resources on each. To learn more, personally I'd recommend taking a course like Complete Python Bootcamp: Go from zero to hero in Python, which will guide you through all of the most important concepts in greater detail.
Another great one is the Complete Python Masterclass, which goes even further in to things like object-oriented programming and even databases.
Once you find your feet with the simple Python concepts, move on to more advanced topics like object-oriented Python. Most advanced programming applications nowadays are based on object-oriented principles. As explained in the beginning, Python is widely used for web development, machine learning, data science, and micro-controllers as well, so try out a little of everything and see which niche is most interesting to you.
What do you think of Python so far? What do you plan on using it for? Let us know in the comments! | http://stackabuse.com/python-tutorial-for-absolute-beginners/ | CC-MAIN-2018-26 | refinedweb | 6,434 | 59.13 |
Ranter
- .
- ste0911763y@norman70688 it's c#
Those two static methods are either called within the same class they're defined or there's a "using static ClassName" (c# 6 feature)
- Along with 13 and 11 references to those functions.
You ever get so sick of working with shit code, you decide to go full Stockholm syndrome and start writing shit code of your own? I've done that. My reasoning is "You must like shit code. Here, have some more shit code".
- 1024 upvotes, you're welcome!
- nummer312613yAbstraction, son!
-
- @g-m-f bool cannot return null ( atleast you cannot set it to it, it would probably return null in something like this: "bool a;")
- Unit testing must be Epic on that project
- @No-one in C# I believe that would actually throw a compiler error if you tried to read before assigning it :)
- @Jamoyjamie possibly, that was just my guess.
once i tryed to assign null value to bool - no luck
- Make those nonstatic. And extract an interface. Like what if the implementation of returnFalse() changes or needs to be mocked, you know?
- cdrice40483y@QoolQuy2000 This comment made my day.
"You must like shit code. Here, have some more shit code" is TOTALLY going into a commit message. And soon.
- @norman70688 in C# u can import statics, for example: we have a Console class with a WriteLine static method, if I add at top of the file: Import System.Console, the I can use WriteLine directly without the need to write: Console.WriteLine
- livevanzz373yHacker!
- It's silly but not incorrect. The question is of course why. Two possible options. One : stupidity. Two: take advantage of code completion, so you type retCTRL+SPACE and it gets filled in. Is it worth the 3 extra seconds? Absolutely not.
I vote the reason as #1
- Seriously... No doubt, people think that we are stupid...
I hope that the person who did this, realised his/her own stupidity and corrected it...
Otherwise, God save him from his team devs... Considering, they stop rolling on the floor...
This was seriously hilarious, in a weird sort of way...
- Grumpy31103yUnholy demons of darkness, that is source code made from the antimatter of brain substance. And a very weird way of saying
return (maxClients <= numClients);
Whoever wrote that code is probably busy right now writing 4294967296 separate functions to return 32-bit integers.
- haters will say its fake
- beriba9313yAt first I thought wtf, then some more wtf, then i loled and then another wtf came through my mind.
But this may be actually useful. What if someone wants to log every use of "false". In this case it's just one additional line of code instead of much more in any other case. Imagine how much time you can save. I have no fucking idea why anyone would like to log every use of "false" bool but that could happen... in a galaxy far far away. Who am I kidding, that's totally ridiculous
- @beriba if we wouldn't take into account methods names, it could be used to reverse everything with one little change
But anyways, i think its fake
- God I wish I could say this is fake.
- jonjo10323yNext a function will be made with void * and typeof passed in..
- ste0911763yWorking with many different companies and teams, i stumbled upon some of the most absurd things one can imagine.
My conclusion is that often "developers" are not really what the word imply but just people with some vague knowledge on the subject, doing the minimum required to please the managers.
The big problem with that code, IMHO, is that probably there's no particular reason behind it other than "because why not" or "I used to do things like this in another language so ..."
- mundo0347753yBut it is super readable. Does that matter?
- @mundo03 It matters because there was no need for the first two functions... It is just a sheer wastage of both time and will only lead to sheer amusement or frustration for the people involved in the project..
Nothing could be more readable than
if (maxClients > numClients)
return false
else
return true
- @SuyashD95 I think the example put by @Grumpy is also very readable.
return maxClients > numClients
- This guy took abstraction way toooo.... serious
- Sauruz18223yAdvocate of the devil here. I can seriously think of reasons why he did this. Lets say you use those boolean methods in other functions and you want to write to a log when false happens, this would be the way to do it.
- No sane human can do this, so either there is a reason for this insanity or its a joke.
- "13 references"
...
- Im sorry, how the fuck this retarded post has almost 500 ++ ???? I call it bullshit!!! 😂
- yarwest28423y@gitpush this is useful. Do you by chance know what happens if you import two classes (assuming this is possible) and call a static method they both have? Compiler error I suppose?
- Congrats on the 500!
- That is what I call the OOP syndrome.
- vrpg19985983yThere are two types of people...
- Just swap the return values of functions before resigning. :')
- Tell us the story of this code
- mclayoz361yThis reminds me some code that went outsourced to junior programmers in india and charged 10K back to us...
- mathias23421yIt's very flexible, what if you one day want to change what false is
- I hope they wrote unit tests for those functions. That would be amazing!
- @wasp your first comment has given me a great revenge plan. also, i fixed it. Now all we need is the ultimate question
- Berkmann1831744dSounds like the person was paid by LOCs or something that lead to that.
Related Rants
- DRSDavidSoft26
Found this in our codebase, apparently one of my co-workers had written this
- linuxxx34*client calls in* Me: good morning, how can I help you? Client: my ip is blocked, could you unblock it for m...
- sam966931Difference between C# and Javascript Me: Hold my cup of tea. C#: That's not a cup of tea. Me: Hold my cup o...
I wouldn't believe it hadn't I seen this with my own eyes.
undefined
c#
wtf
return
wtfonceagain
true
false | https://devrant.com/rants/482216/i-wouldnt-believe-it-hadnt-i-seen-this-with-my-own-eyes | CC-MAIN-2020-16 | refinedweb | 1,026 | 74.29 |
As we did back in Lecture 16, we are now pausing to see how the C language handles arrays. And as before, you will see that many differences are superficial and what you've learned for C++ in this class nicely applies to C.
As a reminder of the differences we've already covered:
Today we will look at arrays. The short description is that C requires you to be more explicit with how much memory you need for an array, but after that, pointers and array usage is the same.
NOTE: you need to now
#include <stdlib.h> for malloc and calloc.
In C++, we've learned that memory is allocated with the keyword
new. For example, to create an array of 5 ints:
int* numbers = new int[5]; // C++ code
In C, we instead use the keywords
malloc or
calloc. They're almost, but not exactly, the same. The prototype for
malloc is:
void* malloc(int);
The parameter of type
int needs to be the number of bytes that you want for your array from the operating system. So, when requesting space for an array of 5 ints, for example, to arrive at the correct number of bytes to request, you need to know not only the 5, but also the number of bytes required by a single int. Most systems represent an int with 4 bytes, but not always, and embedded systems can be different from platforms. To know how many bytes your type needs, use the
sizeof() function. For example:
malloc( 5*sizeof(int) );
calls
sizeof(int), which returns the number of bytes needed to store a single int. This is then multiplied by 5 (because you need room for 5 of them), the result of which is finally the argument to
malloc.
Because
malloc just accepts a number, it has no knowledge of the type you're going to store within that space, so it doesn't know whether to return a
int* or a
char* or what. So, it just returns a
void*, which can be cast to any of those types. So, for our array of 5 ints, we finally end up at:
int* intArr = malloc( 5*sizeof(int) );
Say we instead want 12 doubles, then it looks like this:
double* doubles = malloc( 12*sizeof(double) );
You can view this as a simple syntax change, but there is a little more going on. In C++, the
new keyword figures out how many bytes you need based on the primitive type. In C, you have to compute that yourself in the code (with the handy sizeof() function). Here is the side-by-side difference:
Our other option for memory allocation is
calloc, which makes this "I need to know two things, the size of the type and the number of elements I'm story" thing explicit. In
calloc, our "array of 5 ints" and "array of 12 doubles" looks like this:
int* intArr = calloc( 5, sizeof(int) ); double* doubles = calloc( 12, sizeof(double) );
You'll notice that it takes two arguments, while malloc just requires you to multiply them together yourself. The other significant difference is that unlike
new or
malloc,
calloc not only allocates the memory, it also zeroes it out(!!)! No random garbage in your arrays! But, it's somewhat slower, since it has to iterate over all the memory and zero it out.
The choice between
malloc and
calloc is largely a personal preference between the two, as long as you're knowledgeable about these differences.
When using
malloc or
calloc, be sure to
#include <stdlib.h>, or you'll get a warning about the return type of your allocation function.
Remember that scanf requires a pointer to know which memory address will be filled. For example, reading an int::
int N; scanf("%d", &N);
int* nums = malloc( 10*sizeof(int) ); for( int i = 0; i < 10; i++ ) scanf("%d", &nums[i]); // we just need the & to get the address of the array cell
Remember to delete your memory allocation when you're finished with your arrays!
The C equivalent of
delete is the function
free:
free(intArr); free(doubles);
Remember how we lost our great
string type from C++?
It's not so bad. Strings in C are instead explicit arrays of chars. There is a small catch, though. The final char of your array must be a null character (0 on your ASCII table, first in your heart). So, if you were to encode the word "hi" as a C string, you would need an array of size three:
char* hiStr = malloc( 3*sizeof(char) ); hiStr[0] = 'h'; hiStr[1] = 'i'; hiStr[2] = 0; //alternatively, the character '\0'
Alternatively and wonderfully, you can also do this:
char* hiStr = "hi";
The null char is still there, just unseen in the code. This second case builds the array in the stack, rather than the heap - this means that this second example does not need to be
freed, while the first does. In general, we're avoiding this in our class (it's called "static allocation"), but for C strings it's too common to step around. You'll also see:
char aString[20];
which builds an array of 20 chars in the stack, pointed to by a
char* aString. Again, you can technically build arrays in the stack like this for any type but it's exceedingly common with strings, which tend to be short, and so people put them in the stack, so they don't have to
free them.
Once you've allocated space for some chars, your pointer can be used with
scanf to read in strings from the user:
char aString[20]; scanf("%s",aString);
or
char* aString = malloc( 20*sizeof(char) ); scanf("%s",aString);
As with
cin, the string will be read in until whitespace - here, you have to be careful to have allocated enough space for the string you'll be reading in!
Finally, the above is all about creating and reading strings.
But what about helpful functions? Make sure you
#include <string.h>. This library provides a wide variety of useful string functions.
Again, very superficial differences. First, you make a pointer to a FILE using
fopen, which accepts two C strings as arguments, the first of which is a filename, and the second is a mode (detailed here). After doing this, you can use
fscanf or
fprintf just like you would
scanf and
printf. So, to open a file and read an integer:
FILE* fin = fopen("someFile.txt","r"); int theInt; fscanf(fin, "%d", &theInt);
$ ./avg How many? 5 5 3 9 2 3 3 2 9 3 5
$ ./fileread Average is 34.850000 Max is 94 | https://www.usna.edu/Users/cs/nchamber/courses/si204/s18/lec/l23/lec.html | CC-MAIN-2018-22 | refinedweb | 1,114 | 68.6 |
Struggling to grasp the concepts of Model-View-View-Model? Keep it simple!
Have you found that you can understand the basics of data binding in WPF and Silverlight, but when you start to read about Model-View-View-Model .
Many times the simplest approach is best.
In this article, I will present MVVM step-by-step but not drop you off a cliff. I won’t take the purist approach to MVVM, but I will use the familiar event model in code-behind that you are used to. Yes, you will be using MVVM, but you will do it using a programming model that you are very familiar with.
Why MVVM?
The whole point behind MVVM is to separate UI logic from your business and data logic. You want to be able to test your business and data logic without having to run your user interface. You can use MVVM to do this and still code your user interface layer just like you are used to.
Before diving into MVVM, it will be helpful to understand data binding in XAML so that you have the foundation you need to apply a MVVM architecture to your WPF or Silverlight applications. Let me start this discussion by showing you a sample WPF window that I’ll use to illustrate the various concepts in this article.
The Sample
Figure 1 shows a sample screen used to display product data and also allow the user to add and modify that product data. This is a WPF application but the basic data-binding concepts apply equally to Silverlight applications. I’ll use this screen throughout this article to illustrate the various concepts of data binding and ultimately an MVVM model.
The sample application uses an XML file for the product data, but could just as easily go against a database. This sample uses a class called ProductManager for all data access routines, and it uses a Product entity class that contains one property for each data field in your data store.
The Product Class
The Product entity class implements the INotifyPropertyChanged interface so each property can raise the PropertyChanged event. The data binding in XAML relies on this interface to stay informed of changes to properties in your code. Listing 1 shows the Product entity class.
The ProductManager Class
The ProductManager class has methods that work with the Product entity class. For example, there is a GetProducts method that returns an ObservableCollection of Product objects. There are also stubs for Insert, Update and Delete. I did not write any code in these methods as I was trying to just illustrate MVVM and not data access.
The Brute Force Approach
To start this discussion of data binding, let me show how you can use the brute force approach to populating the ListView and the various text boxes on this screen. First off, you create a ListView with XAML shown in Listing 2.
You will use the Loaded event procedure of the WPF window to populate the list view using the code shown in Listing 3. When you set the DataContext property with the ObservableCollection of Product objects, the {Binding} in each GridViewColumn shown in Listing 2 grabs the data from the Product object’s property to display in the appropriate column in the ListView.
When you click on a row in the ListView control, the SelectionChanged event fires. In this event you will retrieve the Product object from the selected row and use that to populate the text boxes and other controls with the data from that Product object as shown in Listing 4.
The code in Listing 4 should be fairly standard code that you are accustomed to writing if you have ever coded Windows Forms or ASP.NET. You also know that you need to write code to take the data from the forms and move it back into the Product object prior to passing this object to the Insert or Update methods on your data class. You end up with a lot of code just to move data back and forth between your UI and your entity class. Use XAML data binding to eliminate all that code you used to write just to move data in and out of controls. Let’s take a look at how to do that.
Use XAML data binding to eliminate all that code you used to write just to move data in and out of controls.
Data Binding Basics in XAML
Just as you use the Binding syntax in the ListView you may also use this same approach for your text boxes, check boxes and other controls. Each GridViewColumn control has as its parent the ListView itself. The concept of a container or parent control is very important in XAML and you use this to your advantage in WPF and Silverlight. Consider the following XAML:
<Grid Name="grdDetail"> <TextBox Text="{Binding Path=ProductName}" /> </Grid>
In the above XAML, the Grid named grdDetail contains one text box that has a {Binding} in its Text property. If you set the DataContext of the Grid control to an instance of a Product object, then the Text property of the TextBox will display the ProductName property within that Product object as shown in the following code:
C# Product entity = new Product(); entity.ProductName = "A New Product"; grdDetail.DataContext = entity; VB Dim entity As New Product() entity.ProductName = "A New Product" grdDetail.DataContext = entity
You can take advantage of this technique to eliminate the code shown in Listing 4 that moves the code from the Product object into the various controls. Replace the code in the SelectionChanged event with the code shown in Listing 5.
Take advantage of control to control data binding in XAML to eliminate code.
Eliminate More Code
While you eliminated a lot of code in the last example, you still had to write code in the SelectionChanged event. You can eliminate even more code by taking advantage of the control-to-control data binding features in XAML. Consider the following XAML:
<Grid DataContext="{Binding ElementName=lstData, Path=SelectedItem}"> <TextBox Text="{Binding Path=ProductId}" /> <TextBox Text="{Binding Path=ProductName}" /> <TextBox Text="{Binding Path=Price}" /> <CheckBox IsChecked="{Binding Path=IsActive}" /> </Grid>
You set the DataContext property of the Grid to a Binding that uses the ElementName attribute set to the name of the ListView. You set the Path property to SelectedItem as that property of the ListView contains the currently selected Product object. This syntax is all that is needed to automatically bind the ListView control to the Grid where all your text box controls are located. You can now remove the SelectionChanged event procedure completely!
Binding Other Properties
All of the data binding you have done so far relates to data you retrieve from your database. However, you can bind almost any property on any control to any property on any class. To illustrate this concept consider the Add, Save and Cancel buttons on the form shown in Figure 1. Notice how the Add button is enabled, but the Save and Cancel buttons are disabled. The IsEnabled property on these buttons can be controlled from properties on a class.
Let’s assume you have a XAML window named winMoreDataBinding. You can add the appropriate properties to this window class and bind the IsEnabled properties of your buttons to these properties. To do this, you need to implement the INotifyPropertyChanged interface on your window. Listing 6 shows the code that you would add to your Window class to implement properties that you bind to the respective IsEnabled properties of your buttons.
To bind each of these properties to your buttons’ IsEnabled properties, you write the XAML shown in below:
<Button Content="Add" IsEnabled="{Binding Path=IsAddEnabled}" /> <Button Content="Save" IsEnabled="{Binding Path=IsSaveEnabled}" /> <Button Content="Cancel" IsEnabled="{Binding Path=IsCancelEnabled}" />
Now in the code-behind for your window if you set the IsAddEnabled property to False, then the Add button will become disabled automatically. It is important that you set the property IsAddEnabled, not the private variable that this property uses as its backing data store. The Set procedure for the IsAddEnabled will raise the property changed event which is how XAML is informed of the change in value and then knows to refresh its control’s UI state.
A Simple View Model Class
Now that you are familiar with data binding both in terms of data objects and UI objects, you can now expand on your knowledge to create a simple ViewModel and eliminate even more code-behind from your windows. If you download and look at each of the samples illustrated in this article, you will find that each window has about 200 lines of code-behind. When you start using a ViewModel you will cut this amount by more than half!
Remember that a view model is just a class with code, there is no magic. In the sample application that goes along with this article, there is a class called ProductViewModel. You will create an instance of this class by creating it as a resource in your XAML. First you will create an XML namespace and give that namespace a name. Consider the following XAML:
<Window x: <Window.Resources> <local:ProductViewModel x: </Window.Resources> ... </Window>
The fourth line down creates a namespace called local and assigns that namespace to the name of the assembly where the ProductViewModel class is located. Next, in the <Window.Resources> section of the XAML is where you create an instance of the ProductViewModel and assign it a key called viewModel.
DataCollection and DetailData Properties
In the ProductViewModel class there are two properties that you will use for data binding to your window (see Listing 7). The DataCollection property is an ObservableCollection of Product objects and DetailData is a single instance of a Product object. The DataCollection property will be filled in with the data in the constructor of the ProductViewModel class. The DetailData property will be filled in when you click on a row in the ListView.
You use these two properties as the source of data for the ListView control.
<ListView x: ... </ListView>
You also use the properties as the source of data for the Grid that contains all of the detail text boxes and the check box.
<Grid DataContext="{Binding Path=DetailData}"> ... </Grid>
Populating the Data in the View Model Class
The method InitializeComponent is called in the constructor of every WPF Window to create all controls of the window. This includes the ProductViewModel class you declared in the XAML. The constructor in your ProductViewModel will be called when the InitializeComponent method is called. In Listing 8 you see the constructor code in the ProductViewModel class. Notice that it calls the GetProducts method in the ProductManager class, which as you remember, returns a collection of Product objects.
Since you are doing data access in the constructor, it is very critical that you have excellent exception handling in place when retrieving your data. The GetProducts method contains all of the exception handling in this example.
Notice that the ProductViewModel class implements the INotifyPropertyChanged interface. In the prior example, this interface was added to the Window class because that is where the UI properties such as IsAddEnabled were created. Now all those UI properties are in the view model class.
Hooking Into the View Model Class in the Code-Behind
Remember that you created an instance of the view model class in the XAML. You will need to interact with this view model class in the code-behind of your window. In the constructor of your Window class (Listing 9) you can grab the instance of the view model class that was created by using the FindResource method on the Window class. You pass in the key that you assigned to the view model class in the XAML, in this case viewModel, you cast it to a ProductViewModel and assign it into a private field variable in the Window class. You can now use this private variable to call any method or get/set any properties in the view model.
Event Procedures in the Window Class
In the product screen, there are still event procedures left in the Window class. These event procedures are for when you click on the Add, Save and/or Cancel buttons. There is very little code in each event procedure as all that is needed is to make calls to the view model class, or maybe perform a little bit of UI logic. Listing 10 shows all the code that is left in the code-behind of this window. Notice that you use the private view model variable to call methods in the view model.
If you want, you can get rid of some of these events using the Commanding model available in WPF. However, you end up writing a lot more code to support the command model, and to me there is very little benefit. You have accomplished the goal of moving all logic into the view model class so that you can unit test the view model. There is no UI code at all in the view model class and thus can be tested very easily using automated tools. Another problem with the command model is that not all events can be hooked up to commands. So at some point, you then have to write some very complicated code to hook to all these events. I find the simpler approach that I have laid out in this article is a good compromise between having everything in the code-behind and a “pure” view model approach. I have accomplished the goals of MVVM but I have kept my programming model simple and easy to understand.
Summary
Once you understand the basics of data binding in XAML, you can eliminate a lot code that is otherwise needed to move data into controls and out of controls back into an object. This forms the basis of a MVVM approach. You are used to writing classes; you just need to get used to the idea of using properties on classes to affect UI properties. In this article, you saw a very simple and easy-to-use pattern for MVVM. While the purists would disagree with this approach, those folks that like things simple and easy to understand should be satisfied. | http://www.codemag.com/article/1011091 | CC-MAIN-2017-26 | refinedweb | 2,383 | 59.84 |
26 CFR § 301.6227-1 - Administrative adjustment request by partnership.
(a) In general. A partnership may file a request for an administrative adjustment with respect to any partnership-related item (as defined in § 301.6241-1(a)(6)(ii)) for any partnership taxable year. When filing an administrative adjustment request (AAR), the partnership must determine whether the adjustments requested in the AAR result in an imputed underpayment in accordance with § 301.6227-2(a) for the reviewed year (as defined in § 301.6241-1(a)(8)). If the adjustments requested in the AAR result in an imputed underpayment, the partnership must take the adjustments into account under the rules described in § 301.6227-2(b) unless the partnership makes an election under § 301.6227-2(c), in which case each reviewed year partner (as defined in § 301.6241-1(a)(9)) must take the adjustments into account in accordance with § 301.6227-3. If the adjustments requested in the AAR are adjustments described in § 301.6225-1(f)(1) that do not result in an imputed underpayment (as determined under § 301.6227-2(a)), such adjustments must be taken into account by the reviewed year partners in accordance with § 301.6227-3. A partner may not make a request for an administrative adjustment of a partnership-related item except in accordance with § 301.6222-1 or if the partner is doing so on behalf of the partnership in the partner's capacity as the partnership representative designated under section 6223. In addition, a partnership may not file an AAR solely for the purpose of changing the designation of a partnership representative or changing the appointment of a designated individual. See § 301.6223-1 (regarding designation of the partnership representative). When the partnership changes the designation of the partnership representative (or appointment of the designated individual) in conjunction with the filing of an AAR in accordance with § 301.6223-1(e), the change in designation (or appointment) is treated as occurring prior to the filing of the AAR. 
For rules regarding a notice of change to the amount of creditable foreign tax expenditures see paragraph (g) of this section.
(b) Time for filing an AAR. An AAR may only be filed by a partnership with respect to a partnership taxable year after a partnership return for that taxable year has been filed with the Internal Revenue Service (IRS). A partnership may not file an AAR with respect to a partnership taxable year more than three years after the later of the date the partnership return for such partnership taxable year was filed or the last day for filing such partnership return (determined without regard to extensions). Except as provided in § 301.6231-1(f), an AAR (including a request filed by a partner in accordance with § 301.6222-1) may not be filed for a partnership taxable year after a notice of administrative proceeding with respect to such taxable year has been mailed by the IRS under section 6231.
(c) Form and manner for filing an AAR -
(1) In general. An AAR by a partnership, including any required statements, forms, and schedules as described in this section, must be filed with the IRS in accordance with the forms, instructions, and other guidance prescribed by the IRS, and must be signed under penalties of perjury by the partnership representative (as described in §§ 301.6223-1 and 301.6223-2).
(2) Contents of AAR filed with the IRS. A partnership must include the information described in this paragraph (c)(2) when filing an AAR with the IRS. In the case of a failure by the partnership to provide the information described in this paragraph (c)(2), the IRS may, but is not required to, invalidate an AAR or readjust any items that were adjusted on the AAR. An AAR filed with the IRS must include -
(i) The adjustments requested;
(ii) If a reviewed year partner is required to take into account the adjustments requested under § 301.6227-3, statements described in paragraph (e) of this section, including any transmittal with respect to such statements required by forms, instructions, and other guidance prescribed by the IRS; and
(iii) Other information prescribed by the IRS in forms, instructions, or other guidance.
(d) Copy of statement furnished to reviewed year partners in certain cases. If a reviewed year partner is required to take into account adjustments requested in an AAR under § 301.6227-3, the partnership must furnish a copy of the statement described in paragraph (e) of this section to the reviewed year partner to whom the statement relates in accordance with the forms, instructions and other guidance prescribed by the IRS. If the partnership mails the statement, it must mail the statement to the current or last address of the reviewed year partner that is known to the partnership. The statement must be furnished to the reviewed year partner on the date the AAR is filed with the IRS.
(e) Statements -
(1) Contents. Each statement described in this paragraph (e) must include the following correct information:
(i) The name and TIN of the reviewed year partner to whom the statement is being furnished;
(ii) The current or last address of the partner that is known to the partnership;
(iii) The reviewed year partner's share of items as originally reported on statements furnished to the partner under section 6031(b) and, if applicable, section 6227;
(iv) The reviewed year partner's share of the adjustments as described under paragraph (e)(2) of this section;
(v) The date the statement is furnished to the partner;
(vi) The partnership taxable year to which the adjustments relate; and
(vii) Any other information required by forms, instructions, and other guidance prescribed by the IRS.
(2) Determination of each partner's share of adjustments -
(i) In general. Except as provided in paragraphs (e)(2)(ii) and (iii) of this section, each reviewed year partner's share of the adjustments requested in the AAR is determined in the same manner as each adjusted partnership-related item was originally allocated to the reviewed year partner on the partnership return for the reviewed year. If the partnership pays an imputed underpayment under § 301.6227-2(b) with respect to the adjustments requested in the AAR, the reviewed year partner's share of the adjustments requested in the AAR only includes any adjustments that did not result in the imputed underpayment, as determined under § 301.6227-2(a).
(ii) Adjusted partnership-related item not reported on the partnership's return for the reviewed year. Except as provided in paragraph (e)(2)(iii) of this section, if the adjusted partnership-related item was not reported on the partnership return for the reviewed year, each reviewed year partner's share of the adjustments must be determined in accordance with how such items would have been allocated under rules that apply with respect to partnership allocations, including under the partnership agreement.
(iii) Allocation adjustments. If an adjustment involves allocation of a partnership-related item to a specific partner or in a specific manner, including a reallocation of an item, the reviewed year partner's share of the adjustment requested in the AAR is determined in accordance with the AAR.
(f) Administrative proceeding for a taxable year for which an AAR is filed. Within the period described in section 6235, the IRS may initiate an administrative proceeding with respect to the partnership for any partnership taxable year regardless of whether the partnership filed an AAR with respect to such taxable year and may adjust any partnership-related item, including any partnership-related item adjusted in an AAR filed by the partnership. The amount of an imputed underpayment determined by the partnership under § 301.6227-2(a)(1), including any modifications determined by the partnership under § 301.6227-2(a)(2), may be re-determined by the IRS.
(g) Notice requirement and partnership adjustments required as a result of a foreign tax redetermination. For special rules applicable when an adjustment to a partnership-related item (as defined in section 6241(2)) is required as part of a redetermination of U.S. tax liability under section 905(c) and § 1.905-3(b) of this chapter as a result of a foreign tax redetermination (as defined in § 1.905-3(a) of this chapter), see § 1.905-4(b)(2)(ii) of this chapter.
(h) Applicability date -
(1) In general. Except as provided in paragraph . | https://www.law.cornell.edu/cfr/text/26/301.6227-1 | CC-MAIN-2021-10 | refinedweb | 1,406 | 50.57 |