text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
AngularJS is a relatively new JavaScript framework by Google, designed to make your front-end development as easy as possible. There are plenty of frameworks and plugins available. As such, it can sometimes prove difficult to sift through all of the noise to find useful tools. Here are three reasons why you might choose AngularJS for your next project. 1 - It Was Developed by Google Angular is built and maintained by dedicated Google engineers. This one may seem obvious, but it's important to remember that many (not all) frameworks are made by hobbyists in the open source community. While passion and drive have forged frameworks, like Cappucino and Knockout, Angular is built and maintained by dedicated (and highly talented) Google engineers. This means you not only have a large open community to learn from, but you also have skilled, highly-available engineers tasked to help you get your Angular questions answered. This isn't Google's first attempt at a JavaScript framework; they first developed their comprehensive Web Toolkit, which compiles Java down to JavaScript, and was used by the Google Wave team extensively. With the rise of HTML5, CSS3, and JavaScript, as both a front-end and back-end language, Google realized that the web was not meant to be written purely in Java. AngularJS came about to standardize web application structure and provide a future template for how client-side apps should be developed. Version 1.0 was released just under 6 months ago (as of December, 2012) and is being used by a host of applications, ranging from hobby to commercial products. Adoption of AngularJS as a viable framework for client-side development is quickly becoming known to the entire web development community. Because AngularJS is built by Google, you can be sure that you're dealing with efficient and reliable code that will scale with your project. If you're looking for a framework with a solid foundation, Angular is your choice! 2 - It's Comprehensive If you're familiar with projects, like QUnit, Mocha or Jasmine, then you'll have no trouble learning Angular's unit-testing API. Angular, similar to Backbone or JavaScriptMVC, is a complete solution for rapid front-end development. No other plugins or frameworks are necessary to build a data-driven web application. Here's an overview of Angular's stand-out features: -. - MVVM to the Rescue! Models talk to ViewModel objects (through something called the $scope object), which listen for changes to the Models. These can then be delivered and rendered by the Views, which is the HTML that expresses your code. Views can be routed using the $routeProvider object, so you can deep-link and organize your Views and Controllers, turning them into navigable URLs. AngularJS also provides stateless controllers, which initialize and control the $scopeobject. - Data Binding and Dependency Injection. Everything in the MVVM pattern is communicated automatically across the UI whenever anything changes. This eliminates the need for wrappers, getters/setters or class declarations. AngularJS handles all of this, so you can express your data as simply as with JavaScript primitives, like arrays, or as complex as you wish, through custom types. Since everything happens automatically, you can ask for your dependencies as parameters in AngularJS service functions, rather than one giant main()call to execute your code. - Extends HTML. Most websites built today are a giant series of <div>tags with little semantic clarity. You need to create extensive and exhaustive CSS classes to express the intention of each object in the DOM. With Angular, you can operate your HTML like XML, giving you endless possibilities for tags and attributes. Angular accomplishes this, via its HTML compiler and the use of directives to trigger behaviors based on the newly-created syntax you write. - Makes HTML your Template. If you're used to Mustache or Hogan.js, then you can quckly grasp the bracket syntax of Angular's templating engine, because it's just HTML. Angular traverses the DOM for these templates, which house the directives mentioned above. The templates are then passed to the AngularJS compiler as DOM elements, which can be extended, executed or reused. This is key, because, now, you have raw DOM components, rather than strings, allowing for direct manipulation and extension of the DOM tree. - Enterprise-level Testing. As stated above, AngularJS requires no additional frameworks or plugins, including testing.. As long as you have a source for storing data, AngularJS can do all of the heavy lifting on the client, while providing a rich, fast experience for the end user. 3 - Get Started in Minutes Getting started with AngularJS is incredibly easy. With a few attributes added to your HTML, you can have a simple Angular app up in under 5 minutes! - Add the ng-appdirective to the <html>tag so Angular knows to run on the page: <html lang="en" ng-app> - Add the Angular <script>tag to the end of your <head>tag: <head> ...meta and stylesheet tags... <script src="lib/angular/angular.js"></script> - Add regular HTML. AngularJS directives are accessed through HTML attributes, while expressions are evaluated with double-bracket notation: <body ng- <h1>Today's activities</h1> <ul> <li ng- {{activity.name}} </li> </ul> </body> </html> The ng-controller directive sets up a namespace, where we can place our Angular JavaScript to manipulate the data and evaluate the expressions in our HTML. ng-repeat is an Angular repeater object, which instructs Angular to keep creating list elements as long as we have activities to display, and use this <li> element as a template for how we want all of them to look. - When you want to grab something from Angular, fetch your data with a JavaScript file containing a function whose name corresponds to the controller you've outlined in your HTML: function ActivitiesListCtrl($scope) { $scope.activities = [ { "name": "Wake up" }, { "name": "Brush teeth" }, { "name": "Shower" }, { "name": "Have breakfast" }, { "name": "Go to work" }, { "name": "Write a Nettuts article" }, { "name": "Go to the gym" }, { "name": "Meet friends" }, { "name": "Go to bed" } ]; } As mentioned previously, we're creating a function with the same name as the ng-controller directive, so our page can find the corresponding Angular function to initialize and execute our code with the data we wish to grab. We pass in the $scope parameter in order to access the template's activities list that we created in our HTML view. We then provide a basic set of activities with the key, name, corresponding to the activity's property name that we listed in our double-bracket notation, and a value, which is a string representing the activities that we want to accomplish today. - While this application is complete, it's not overly practical. Most web applications house lots of data stored on remote servers. If you've got your data stored on a server somewhere, we can easily replace the array with a call from Angular's AJAX API: function ActivitiesListCtrl($scope) { $http.get('activities/list.json').success(function (data) { $scope.activities = data; } } We've simply replaced the native JavaScript array object of hashes with a specialized HTTP GET function, provided by the Angular API. We pass in the name of the file that we watch to fetch from the server (in this case, a JSON file of activities) and we are returned a promise from Angular, much in the same way that the promise pattern works in jQuery. Learn more about promises in jQuery here on Nettuts+. This promise can then execute our success function when the data has been returned, and all we have to do is bind the return data to our activities, which as previously stated, was provided by dependency injection, via the $scope parameter. A static to-do list is nice, but the real power stems from how easily we can manipulate the page without having to set up a bunch of JavaScript functions to listen and respond to user interactions. Imagine that we need to sort our activities list alphabetically. Well, we simply add a drop down selector to allow the user to sort the list: <h3>Sort:</h3> <select> <option value="name">Alphabetically</option> </select> Add the model directive to the drop down: <select ng- Finally, we modify the <li> tag to recognize sortModel as a way to order our list: <li ng- All of the heavy lifting is intelligently done by AngularJS. And that's it! The secret is the filter we've added to the ng-repeat directive in the list item. The orderBy filter takes an input array (our list of activities), copies it, and reorders that copy by the property outlined in the select tag. It's no coincidence that the value attribute of the option tag is name, the same value that is provided by our JSON file as the property of an activity. This is what allows AngularJS to automagically turn our HTML option value into a powerful keyword for sorting our activities template. Notice how we aren't listening for user interaction events. Our code isn't riddled with callbacks and event handlers for dealing with objects we've clicked, selected, touched or enabled. All of the heavy lifting is intelligently done by AngularJS to find the controller function, create the dependency between the template and the controller, and fetch the data to render it on the screen. AngularJS provides a full and robust tutorial, which creates a very similar web app and expands it even more - all in a matter of minutes! Conclusion AngularJS is quickly becoming the dominant JavaScript framework for professional web development. You can find plenty of AngularJS scripts and apps on Envato Market to help you achieve more with Angular JS, such as cropping tools, online store templates, rating apps, and more. In this tutorial: - We've reviewed how Google came to develop this framework as a way to make the most of HTML. - We've examined the basic core features and functionality that make Angular such a pleasure to work with. - Finally, we've developed a dynamic, fully-functional demo in a matter of minutes to demonstrate the true power of how easy it is to go from nothing, to a full application. If you're looking for a robust, well-maintained framework for any sized project, I strongly recommend that you take a look at AngularJS. It can be downloaded for free at AngularJS.org, which also contains a wealth of information, including the full API documentation, as well as numerous examples and tutorials that cover every facet of front-end web development. Good luck! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/3-reasons-to-choose-angularjs-for-your-next-project--net-28457
CC-MAIN-2019-22
refinedweb
1,781
50.87
We purchased a company earlier this year and are in the process of doing a full network refresh and have been having issues with GPO's running at their remote sites. We have been trying to figure out why some GPO's would run but not others, it hasn't been a big enough issue to assign dedicated resources do since there were so many other issues that needed to be resolved first ({trust me when I say that this network was a mess, basic rip and replace of all network hardware due to age and issues). So as I was working on it over the weekend, i found that: - Their are no issues when running DCDIAG on the server except for some of the GPO's - The SYSVOL folders are located on the Domain Controllers - No DFS issues in event viewer - When looking at the GPO's themselves most are fine, but a few show computer version "24 (AD), Not available (SYSVOL)" Digging into it further, it appears that at some point in the past, someone deleted the AD namespace from DFS....aka, SYSVOL, no longer replicates to other domain controllers. Now I am looking to see if there is a method of either doing a automated or manual wayto rebuild the namespace without having to wipe out the domain. help Obi-Wan Spiceworkies, you are my only hope! 5 Replies Dec 21, 2016 at 3:18 UTC In the Replication area in the DFS Management Console, if you hit "Add Replication Groups to Display", does Domain System Volume show up at all? If it does, are there specific memberships that are missing? This may help. it talks about if SYSVOL is missing entirely, but also talks about broken replication. Dec 21, 2016 at 3:22 UTC Also - If the domain was upgraded from 2003, there's a chance that SYSVOL might still be using FRS for replication rather than DFSR. A lot of times you'll unknowingly start to troubleshoot DFSR SYSVOL only to find out that it's still using FRS and FRS is broken. Dec 22, 2016 at 4:06 UTC Definitely been upgraded from 2003 or earlier (just shutdown the last non DC recently) It is still using FRS, but I am afraid to move it to DFSR without knowing what other things could be broken. I am going to work on trying a to do a non authoritative restore in my test environment (have to spin up a couple of clean DC's and then break them. If that works I will schedule doing it during my next scheduled downtime. Anyone else have any thoughts? Would the upgrade to DFSR resolve the issue in an of itself? Dec 22, 2016 at 2:50 UTC I think that's a good idea. If FRS is broken, the procedure to convert it to DFSR is prone to error.
https://community.spiceworks.com/topic/1950905-interesting-issue-with-ad-dfs-replication-that-has-me-stuck
CC-MAIN-2017-43
refinedweb
482
63.53
>>] BRAZIL PM BRIEFS 110818 Released on 2012-10-17 17:00 GMT * * Dilma launched, today, the Brazil Without Misery plan for the south-east region of the country. The plan hopes to elevate 2.7 million people from poverty in the states of Espirito Santo, Minas Gerais, Rio de Janeiro and Sao Paulo. ENERGY/MINING * A senior executive at Petrobras blamed rising costs at the company on higher imports of refined oil products and maintenance shutdowns at some of its oil facilities. * Petrobras is gradually restarting operations at a refinery in Argentina after a fatal explosion forced the plant closure last week, the company said on Wednesday. * Brazil contracted 1,544 megawatts of new hydroelectric, wind, biomass and thermoelectric energy on Wednesday during an auction for electricity to come online in 2014. Petrobras and MPX Energia SA (MPXE3.BR), an energy company controlled by Brazilian billionaire Eike Batista, both agreed to sell energy from natural-gas-fired plants that will be built within the next three years. * Petrobras has awarded Pacific Drilling a three-year contract for one of its ultra-deepwater drillships, Pacific said Wednesday. The Pacific Mistral, built by Samsung Heavy Industries, is a dynamically positioned rig equipped to drill at depths of up to 10,000 feet. Work starts in the last quarter of 2011.. Brazil ramps up naphtha purchases in North Africa Wed Aug 17, 2011 3:43pm GMT LONDON (Reuters) - this month. Traders reported the Brazilian oil company had won a tender to purchase a 40,000 tonne cargo from Morocco's port of Mohammedia in September. It will follow the export of a cargo from Algeria's Skikda refinery, due to load later this week. Regional supply tightness is expected through September as traders report volumes for export from the Algerian plant are also set to cross the Atlantic to South America next month. "There is a lot of action in the Mediterranean and a lot is changing hands," said a naphtha broker. Dilma lanc,a Brasil Sem Miseria para a Regiao Sudeste AUG 18 A presidente Dilma Rousseff anunciou hoje (18) as ac,oes do Plano Brasil Sem Miseria para a Regiao Sudeste. As medidas visam a retirar da situac,ao de extrema pobreza 2,7 milhoes de habitantes do Espirito Santo, de Minas Gerais, do Rio de Janeiro e de Sao Paulo. Em cerimonia realizada na capital paulista, a presidenta assinou convenios com os governadores dos quatro estados: Antonio Anastasia (MG), Geraldo Alckmin (SP), Renato Casagrande (ES) e Sergio Cabral (RJ). Eles se comprometeram a colaborar com a iniciativa federal, lanc,ada em julho e voltada a atender cidadaos que tem renda mensal inferior a R$ 70. "Homens, e agora mulheres, estao determinados a combater a pobreza extrema no nosso pais", disse a presidenta, em discurso. "O pais precisa ter a coragem de combater o que e preciso. Miseria ainda e o nosso principal problema e o nosso principal desafio." Em Sao Paulo, o governo estadual e o governo federal vao integrar seus programa de renda minima. Com um unico cartao, familias que se enquadrem nos criterios poderao receber beneficios do Bolsa Familia e do Renda Cidada (programa paulista). O Renda Cidada paga beneficios de ate R$ 80 por mes a 161 mil familias do estado de Sao Paulo. Segundo o governo paulista, com a parceria, 300 mil familias passarao a receber algum tipo de auxilio. Ao todo, cerca de 1 milhao de pessoas serao beneficiadas. "Ultrapassamos um periodo de disputas para unir esforc,o em prol dos que mais precisam", disse o governador Geraldo Alckmin, ressaltando que a parceria entre os governos e um "marco" para o pais. Esse tipo de complementac,ao ja existe no Espirito Santo e no Rio de Janeiro. O governador fluminense, Sergio Cabral, informou que o programa de renda minima do estado vai beneficiar ate o fim deste ano cerca de 300 mil familias. "Os estados devem assumir o protagonismo que lhes cabe no combate `a extrema pobreza", disse ele. O Plano Brasil Sem Miseria, no Sudeste, preve ainda a expansao da rede de ensino tecnico, alem da instalac,ao de unidades basicas de saude e centros de assistencia social. Preve tambem medidas de incentivo `a agricultura familiar na regiao. Nesta quinta-feira, a ministra do Desenvolvimento Social e Combate `a Fome, Tereza Campello, assinou acordo com representantes da Associac,ao Brasileira dos Supermercados (Abras) dos quatro estados do Sudeste para a compra de produtos de agricultores familiares. "O Brasil Sem Miseria e compromisso do governo, mas e tambem da sociedade", afirmou a ministra, em discurso. "Os empresarios estao atendendo ao nosso chamamento" Campello tambem assinou acordo com a Associac,ao Brasileira dos Distribuidores de Energia Eletrica (Abradee) para que auxiliem no cadastro de familias no Brasil Sem Miseria. "Das 800 mil familias que ainda devem ser incluidas no Bolsa Familia, metade esta no Sudeste", explicou. "Vamos juntar forc,as e integrar todas essas familias." ------------------------------------------------------- The President today announced Rousseff (18) the actions of Brazil Without Poverty Plan for Southeast. The measures are aimed at removing the extreme poverty of 2.7 million inhabitants of the Holy Spirit, Minas Gerais, Rio de Janeiro and Sao Paulo. In a ceremony held in Sao Paulo, the president has signed agreements with the governors of four states: Antonio Anastasia (MG), Geraldo Alckmin (SP), Renato Casagrande (ES) and Cabral (RJ). They pledged to cooperate with the federal initiative, launched in July and aimed to meet citizens who have a monthly income less than $ 70. "Men and women now, are determined to fight extreme poverty in our country," the president said in a speech. "The country must have the courage to fight what it takes. Poverty is still our main problem and our main challenge. " In Sao Paulo, the state government and the federal government will integrate their minimum income program. With a single card, families that meet the criteria can receive benefits from the Family Allowance and Income Citizen Program (Sao Paulo). The Citizen Income pays benefits up to $ 80 per month to 161 000 families in the state of Sao Paulo. According to the Sao Paulo government, with the partnership, 300 000 families will receive some type of aid. In all, about 1 million people will benefit. "We exceeded a period of disputes to unite efforts in favor of those who need it most," said the governor Geraldo Alckmin, noting that the partnership between governments is a "milestone" for the country. This type of complementation exists in the Holy Spirit and in Rio de Janeiro. Rio de Janeiro Governor Sergio Cabral said the minimum income program will benefit the state later this year about 300 000 families. "States should take the leading role they play in fighting extreme poverty," he said. The Brazil Without Poverty Plan, the Southeast, also provides for the expansion of technical education, plus the installation of basic health units and centers for social assistance. It also provides incentives for family farming in the region. On Thursday, Minister of Social Development and Hunger Alleviation, Tereza Campello, signed an agreement with representatives of the Brazilian Association of Supermarkets (Abras) of four Southeastern states for the purchase of family farmers. "Brazil Without Poverty is a commitment from the government, but also of society," the minister said in a speech. "Businesses are responding to our call" Campello also signed an agreement with the Brazilian Association of Electricity Distributors (Abradee) to assist in the registration of families in Brazil without Poverty. "Of the 800 thousand families still should be included in the Bolsa Familia, half are in the Southeast," he said. "Let's join forces and integrate all these families." Rising Costs Curb Petrobas Income AUGUST 18, 2011 A senior executive at Brazil's Petroleo Brasileiro SA, or Petrobras, blamed rising costs at the company on higher imports of refined oil products and maintenance shutdowns at some of its oil facilities. In an interview, Petrobras' Chief Financial Officer Almir Barbassa highlighted the problem of high depletion rates at some of the state-owned energy company's mature oil fields in the Campos Basin, but said it was working on new technologies to stabilize output there. [PETROBAS] Bloomberg News Petrobas is working on technology that could boost output at mature oil fields, says CFO Almir Barbassa, shown above in Sao Paulo in March. Petrobras earlier this week reported that its second-quarter profit rose 32% from year-ago levels to 10.94 billion reais ($6.88 billion). But an increase in costs meant it was unable to fully benefit from a sharp rise in global oil prices, which reached a high of $127 a barrel in April. Part of that was because Petrobras was forced to import more oil products such as jet fuel, diesel, gasoline and naphtha, which it doesn't produce enough of to satisfy domestic demand, Mr. Barbassa said. Also, some of the company's offshore production platforms had to be shut during the quarter for maintenance. Another factor was the rising cost of arresting production declines in some of Petrobras' older fields. Depletion in the mature offshore fields of the Campos Basin, which account for more than half of Petrobras' domestic production, has been accelerating since early 2009, and in some places is as high as 20% a year, according to analysts at Credit Suisse. For years, Petrobras has been injecting water into the Campos fields to keep flagging oil output stable. But that can sometimes lead to a higher water content in the oil that is produced. Mr. Barbassa said Petrobras is working on a new kind of device which will separate the water from the oil before it reaches the production platform. "If it works well, then it will revolutionize production and output from the Campos Basin will increase," he said. Petrobras is spending $224.7 billion over five years to develop a series of massive offshore oil fields that lie beneath a thick layer of salt under the ocean floor-the biggest investment program in the oil industry. One field, Lula, the largest discovery in Brazil's history, is estimated to hold recoverable reserves of between 5 billion and 8 billion barrels of oil. But output from these fields has been slow to ramp up and Petrobras is still heavily reliant on its older fields. Weighing on Petrobras' second-quarter earnings was a 2.28 billion reais loss in its refining and transport division. State policy in Brazil dictates that Petrobras has to keep its gasoline and diesel prices fixed, which prevented it from passing on rising global oil prices to consumers. But Mr. Barbassa says that the policy has, more often than not, worked in Petrobras' favor. "Up to the end of last year we had better prices in Brazil than abroad," he said. He defended the pricing policy, saying it means "a more stable cash flow for the company and consumers don't suffer the consequences of volatile prices." Petrobras reopens Argentina refinery after fatal blast Aug 17, 2011 4:19pm GMT * Restarts operations after permit from local authorities * Bahia Blanca refinery has a capacity of 31,000 bpd BUENOS AIRES, Aug 17 (Reuters) - Brazil's state-run oil company Petrobras (PETR4.SA: Quote) is gradually restarting operations at a refinery in Argentina after a fatal explosion forced the plant closure last week, the company said on Wednesday. Local officials ordered Petrobras to shut down the 31,000 barrel-per-day refinery pending an investigation into an Aug. 10 blast in a resting area that killed one worker and injured another. [ID:nN1E77906H] "The plant has permission to operate and is slowly retaking its normal operations," a source at Petrobras told Reuters, adding that Petrobras received a temporary permit from local environmental authorities following the probe. The plant, in the port city of Bahia Blanca in Buenos Aires province, accounts for about five percent of Argentina's total refining capacity of 627,000 BPD. For more see, [ID:nN1296899] Energy reserves have fallen in recent years in Latin America's third-biggest economy, and the country has had to import more fuel to meet its needs. Critics blame government intervention in the market and political uncertainty for discouraging investment. UPDATE: Brazil Contracts 1,544MW Of New Electric Energy For 2014 AUGUST 17, 2011, 7:14 P.M. ET (Adds details of winning bidders starting in second paragraph) SAO PAULO (Dow Jones)--Brazil contracted 1,544 megawatts of new hydroelectric, wind, biomass and thermoelectric energy on Wednesday during an auction for electricity to come online in 2014. State-controlled oil company Petroleo Brasileiro SA (PBR, PETR4.BR) and MPX Energia SA (MPXE3.BR), an energy company controlled by Brazilian billionaire Eike Batista, both agreed to sell energy from natural-gas-fired plants that will be built within the next three years. In addition to the two gas-fired plants, companies successfully sold energy from wind and biomass plants. The auction is part of Brazil's plans to expand generating capacity by at least 5,000 MW a year to match demand growth. In addition to counting on controversial megadams such as Belo Monte, slated to be the world's third largest when it comes online in 2015, the country has invested heavily in wind power, adding close to 2,000 MW of capacity a year in recent auctions. "There was an equal distribution between sources," said Mauricio Tolmasquim, head of EPE, the government's energy planning corporation. "There was concern that there would be a source that would dominate everything, and that didn't happen." About 40% of energy will come from natural gas-fired plants, and about 40% from wind, Tolmasquim said in Sao Paulo. MPX said in a statement after the auction that its 499 MW plant, to be built in the state of Maranhao--where Batista's oil- and gas-exploration company OGX Petroleo & Gas Participacoes (OGXP3.BR) recently discovered a natural-gas field--will cost 6.5 billion Brazilian reais ($4.1 billion). Petrobras, as the government energy company is known, will build its 530 MW plant in Rio de Janeiro state, close to its massive offshore oil and gas fields. Rivals had complained that Petrobras, a dominant player in the supply of natural gas in Brazil, had an unfair advantage over rivals. The company requires generators to pay for a minimum amount of natural gas even if they don't use it, whereas Petrobras didn't impose the same requirement on its own natural-gas project. Fourty-four wind-powered plants agreed to sell energy at the auction, as well as four plants that burn sugarcane bagasse for fuel. Hydroelectric energy was also sold at the auction from the Jirau plant being built in Brazil's Amazon region. The dam expanded its generating capacity and sold the additional 450 MW at Wednesday's auction. The price of energy from the projects averaged a discount of 26% from the maximum price of BRL139 a megawatt hour. The ceiling was set by the government, and the winners determined by whoever offered the biggest discount to that price. The winning bidders will have to begin generating electricity in three years. Petrobras contracts Pacific Drilling's ultra-deepwater drillship for three-year campaign offshore Brazil August 18, 2011 Brazilian major Petrobras (NYSE:PBR) has awarded Pacific Drilling a three-year contract offshore Brazil for the Pacific Mistral ultra-deepwater drillship. The contract is expected to commence in the fourth quarter of 2011, and estimated maximum contract revenues, including mobilization and client requested modifications, are expected to be approximately $536 million. The Pacific Mistral was delivered by Samsung Heavy Industries in June 2011. The rig is equipped for and capable of operating in water depths of up to 12,000 feet and drilling wells 37,500 feet deep. "We are pleased to announce the beginning of a core, strategic relationship with Petrobras," said Chris Beckett, CEO of Pacific Drilling. "This contract for the Pacific Mistral underscores our commitment to work with the best operators in the industry and also expands our operating presence in the Atlantic Basin, a region of focus for our activities.". Boeing Gives Guarantees to Brazil in Fighters Bid.
http://www.wikileaks.org/gifiles/docs/33/3376722_fwd-latam-brazil-pm-briefs-110818-.html
CC-MAIN-2014-41
refinedweb
2,689
50.46
See also: IRC log <prolix> like label on command <prolix> contentinfo is broader than <addres> <prolix> there isn't a 1-to-1 mapping between <address> in HTML5 and contentinfo in ARIA <MikeSmith> prolix: should comment on #aapi instead <wendy> fo: 3 types of canvas accessibility: decorative, informative, interactive <wendy> (frank olivier) <wendy> prolix--yes, i can do that <wendy> i'm wendychisholm at skype. <wendy> ...if we created dom underneath, overload "fallback text." <wendy> ...that works for decorative and informative (in case of table with data), but there are cases where you want fallback text, but also want better text. <wendy> ...want to see the table as well as the canvas. a way to know there is accessibility info within canvas. <wendy> ?? is it nested like object? <wendy> cs if you can render the content, it won't render the fallback. <wendy> ... if you need the accessibility information but your browser supports the primary content, you can't get to it. <wendy> ... part of the solution is better browsers. <wendy> fo: could show both. or let the user choose. <prolix> that's what the noframes content in the web IRC interface says - "you need a browser that supports frames and javascipt. full stop' fo: dom underneath allows you to park it under the element and not create another dom. cs said something bout aria... scribe: labelledby cs: informative canvas--have something like aria-describedby that has some sort of text that is a sufficient equivlalent. <prolix> fallback could be an ARIA-enabled widget or tree graph fo: 2 layers of informative canvas. 1 is a simple text equivalent/fallback works. 2 more complex (a graph of data) with data table as fallback. <prolix> the "family tree" example fo: informative contains data that is read-only. interactive can contain data the user can modify. ?? aria could be processes separately. fo: there are accessibility tools that will read that information. sf: are there any at that access it? fo: fangs reads everything from the canvas. ... I think nvda does it. (unsure) ... not a big problem to update at software to look underneath the canvas. ... how do you know it's fallback and not alternative representation. sf: question about the spec (missed) fo: couple weeks ago a comment that elements under canvas should be tabbable. talks about color picker. ... could tab into form elements behind to change color values. sf: what about non-at user who wants to use the keyboard? fo: if someone wants to recreate the ui controls using canvas (more stylish looking controls), i cant use os theming on them. ... would have to handle focus highlights. cs: could a browser fix that? ... flash accesses that. fo: if you have a high contrast mode on your computer, there are ways to do it (to go into that mode). <prolix> cs: ie messes with css. fo: overwrite the stylesheet. cs: high contrast mode turns off a lot of css. fo: could give canvas access to that. a color mode in canvas--a background ui control color you can specifiy. cs: available to you as a windows app. fo: solved in the css color module. (should be solved....) <JF> anything on that page in particular greg that you want me to highlight? <prolix> high contrast equals what - the opposite of the rgb values used or user client-side preferences? cs: access to the operating system focus. ... in windows when you tab into a text box it's the same focus that you get when tab into a windows dialog. ?? description of how apple does this? cs: it's not based on the operating system. ... there are ATs that will change how things look. done at the OS level. would be nice to support. sf: to be considred ccessible, need to provide that. ?? want access to the themeing api. scribe: describes the set of intrinsics. ... everyone has them. lowering those to the browser is a huge undertaking. cs: as an author, not asking to have access to that. ... if i put focus on an element in canvas, i want the same behavior that happens in html. <prolix> some users will want the canvas, but need to be guided through it (users of supplemental speech, for example, for those with cognative processing issues) cs: so if user sets, then the system variable are honored. <prolix> or those with a limited point-of-regard <wendy> scribe: wendy, scribe <wendy> sf: proiding programaitc focus <wendy> eep. <wendy> sf: providing programmatic focus <wendy> cs: works on most elements, but not on canvas. <wendy> ?? canvas is just a bitmap. not providing a form element but something that looks like a form element. <wendy> cs: they have made it accessible by using msaa. <wendy> ?? msaa or theming apis? <wendy> cs: not sure <wendy> sf: programmatically focusable areas. <wendy> fo: you could have a canvas context method, draw focus rectangle tha tuses the os theming to draw a rectangle on the canvas. <prolix> CSS user preferences need to be supported <wendy> ... canvas with 10 ui elements and you want to tab between then, author would have to make the keyboard tabbing. <wendy> cs: that combined with aria markup should be most of what is necessary. <wendy> fo: creating hidden ui controls underneath, need a system focus rectangle and using aria to expose properties and behaviors to ATs. <wendy> ... only using canvas to style from scratch your ui controls. <wendy> ... not limited to only one ui control per canvas, but can have several. <wendy> ?? seems like it's a hole tha tyou keep digging--how many apis do you add to the context? <wendy> ... if just drawing focus rings, but if there are other aspects of ui decisions that theme will change, then you will need to expose a huge set of things. <wendy> ... in some sense, css is trying to solve. <wendy> fo: i don't think we can get to the point, where you use the system ui colors. <wendy> sw css color module defines a few hard coded names that map to windows concept, that will change as you change the themes. <wendy> ... may be deprecated. <wendy> cs: that was the design goal of those. <wendy> (sw from apple) <wendy> fo: creating a mini dom makes sense. <wendy> ...the only new method is a way to draw focus rectangle. <wendy> sf: it's a common use case to draw a focus ring, but also want to define an interactive area. <wendy> ... if it comes with focus--that's default. <wendy> ... then they dont have to worry about addng that. <wendy> cs: that ui works like the other uis the user is used to. <wendy> fo: also discussion of something like a click map. <wendy> ... if i have 10 user controls, i can set up 10 clickable areas and can get an event and handle it. <wendy> cs: support ismap? use the same image map? <wendy> fo: think that goes a little too far. <wendy> ... think the author can handle the events. <wendy> cs: can an image source point to a canvas? <wendy> ... create an overlay? <wendy> fo: yes, could create an overlay. <wendy> cs: interactive canvas--ui contorl is the tricky one. <wendy> cs: difficult to come up with a paradigm that makes sense--that works as canvas does now but works with how html was/is going. <wendy> ... does dom under canvas seem natural? <wendy> fo: shows demo <prolix> "The CSS2 System Color values have been deprecated in favor of the CSS3 UI 'appearance' property for specifying the complete look of user interface related elements" () <wendy> ... three day weather forecast. <wendy> ... "click here to switch to degrees F" 3 suns each with temp in the middle (in Celsius). <wendy> ... if you just draw pixels into the canvas, you won't get any information for the canvas. <prolix> so it needs live regions and accessible controls... <wendy> ... what i've done is create a tiny dom underneath. <wendy> ... fangs is a screen reader simulator <wendy> ... looking at that to see what it's giving for this page. <wendy> ... s/this page/this canvas <wendy> cs: what's the diff between what you've done there and having fallback html w/in canvas tag? <wendy> fo: could add a text box and interactive elements. <wendy> ... more example of an informative than interactive. <wendy> ... james craig did a demo with checkbox within canvas. <wendy> sw: makes text accessible to AT but invisible to the screen? <wendy> fo: take the fallback context, erase it, add in a new dom under the canvas, that has interactive elements. <wendy> cs: kept the interaction in sync? <wendy> fo: yes, james draws a focus rectangle himself. <wendy> cs: he was toggling the visible check and the hidden. <wendy> sw: what changes needed in html5 to support this? <wendy> fo: mini-dom: nothing really. it's the canvas subtree. <wendy> cs: canvas subtree and interaction between canvas and other elements managed? <wendy> jf: james demo uses javascript. <wendy> sw: any sync would need to be done by the script. <wendy> fo: he was supporting events, drawing focus. <wendy> cs: this approach does meet the minimum bar of accessibility--it's possible to make an accessible product. <wendy> ... however, is not easy, intuitive, such that developers would do it. <wendy> sw: argue that javascript to write checkbox and handle events not intuitive either. <wendy> ?? someone is obviously writing it for the rest of the UI. <wendy> sf: doubling the work? <wendy> cs: seems like keeping it synchronized would be hard. <prolix> could CANVAS support an "order" attribute? order="targetID,targetID2,targetID3" or order="targetRole, targetRole1, targetRole2" <prolix> that's the SVG adition to the XHTML Access Module () <wendy> discussion about how much work are developers going to do to add accessibility to custom controls. <wendy> fo: argues that if it's 8 days work to create their own custom contorls, 2 days work is no big deal. <wendy> accessibility folks say, "we've heard that before and unfortunately, that will take a lot of education to get people to do that." <wendy> fo: counters, most people will not create uis from canvas from scratch, they will use libraries and the libraries will be accessible. <wendy> wc: they better be! <wendy> fo: no matter what we do, people will have to do additional work to make canvas accessible. <wendy> ... currently, it's an additional few lines of code. <wendy> ... perhaps give them an api that will make it 1 line of code instead of 4. <prolix> testify, sister, testify!!! <wendy> discussion about alt and that isn't even used. <wendy> fo: but you are asking for a technological solution. i'm giving you that. <wendy> ... otherwise, you are asking for the impossible. <wendy> cs: designing the api is part of the api and not some tacked-on thing. <wendy> ... not an unpleasant task for devs. <wendy> mm: wish list <prolix> keyboard support, device independent event triggering, for a start <wendy> mm: wish-list++ add accessibility info out-of-band--you have a toolkit that doesn't suport accessibility. be able to add info, map to html controls, etc. <wendy> sw: we have a good solution. <wendy> ... if you inherit an app, you already know which elements you will need to shadow. <wendy> ... if it's well-factored, it can create the button in the subtree. <prolix> can you use XBL on CANVAS? <wendy> ... if you have a routein to focus a button ont he canvas, has it on the subtree. <wendy> cs: not saying not workable, thinking about how to make it more adoptable. <wendy> sw: well-factored, yes. <wendy> ... it's a drawing api. could make it oo but it's drawing. <wendy> fo: there is not perfect solution. lots of solutions that all have sub-optimal outcomes. <wendy> ... what we have currently is a good road. will people drive on that road? don't know. <prolix> converge on and learn from lessons of SVG <wendy> ... made it as easy as possible. but shouldnt put too much of a burden on ppl. <wendy> ... compared to what ppl doing today, should be ok. <wendy> cs: it would be nice in the future, perhaps in html5 timeframe, to have access to accessibilty apis from javascript. <wendy> ... for something like canvas, be able to say "this box is a button." <wendy> .. understand harder. if we go with a dom approach, that the other possibility is not locked out for a future version. <wendy> fo: interesting way to do that is to tag the canvas element with a property that says, "accessibility provided by method x." <wendy> ... think there are lots of ways you can go. <adrianba> +1 on the point about having accessibility apis from javascript <wendy> fo: shows the dom of the demo <prolix> adrianba, this is one reason why i'm going to suggest canvas implementation via script libraries if the Rich Web Application Incubator Group's final report's recommendation that a working group be established to continue to investigate the area is formed () <prolix> could use the SVG order attribute model referenced above <prolix> aria-role="button" <wendy> discussion about how to synchronize the shadow canvas with the html elements so that someone who is sighted and using the keyboard can operate the controls in the canvas. <wendy> cs: describes counter proposal. <wendy> talking about marking coordinates with roles. <wendy> ?? need to be able to do all of the things we do with the dom. <wendy> cs: counter proposal is to 1. direct api access 2. is there a way to make it so that it's possible to use the html elements but have them look as spiffy as theyw ant. <wendy> sw: in webkit you can take a button and make it look however you want. <wendy> ... there are extensions to css. <wendy> .. or you can make a div, draw anything you want into it and add an aria attribute. <wendy> ... canvas is for ppl who say, we want to draw other things differently. <wendy> cs: my concern ppl taking that jump too easily. <wendy> ?? is it likely that we're going to add features to html that will satisfy those same people? <wendy> ?? css will never help someone wh ocan do a better tetx editor. <wendy> cs: right. <prolix> amen <wendy> ?? we need a solution lets someone who wants to do that much work nd also wants to make their work accessible, we should let them do that. <wendy> cs: yes, make it possible and as pleasant as possible. <wendy> ?? as implementable <wendy> cs: making it so that there are fewer cases where ppl need to do that to get their artistic expression. <wendy> mm: the technology will be used in ways no one in this room has thought of. <prolix> something EXTENSIBLE <wendy> ... my big concern is that the solution we have will account for as many of those eventualities as possible. <wendy> fo: anything you can do with dom, you can do within canvas. <wendy> sc: let's mke sure that we've covered everything. <wendy> cs: hgh contrast, yes. <wendy> sc: options w/in browser--zoom, etc. <wendy> cs: pixel zooming should work. <wendy> font size an issue? <prolix> "The CSS2 System Color values have been deprecated in favor of the CSS3 UI 'appearance' property for specifying the complete look of user interface related elements" () <wendy> sc: so we've covered keyboard access, screen readers, etc. <wendy> cs: does canvas spec need to talk about browser and os settings. <wendy> fo: could be at least one paragraph in the spec. <wendy> js: we suggest to ua wg that these are on the horizon. <wendy> fo: we've covered the major use cases of canvas. <wendy> ... everyone seems to be ok to use a dom. <wendy> clarification that the shadow dom is not like iframe, but is child of canvas element. <wendy> fo: need good documentation both for devs and at dev. <wendy> ... think only have one new method: draw focus system rectangle. <wendy> sw: the alternate is you could put in an invisible element and focus on it. <wendy> fo: don't think we need to redesign the api. <wendy> ... and a way to clear it. <wendy> cs: if using a screen mag, need to follow focus. <wendy> fo: os should handle the case. in windows, the bitmap will have direct coordinates. <wendy> will a rectangle be sufficient or also need other shapes? <wendy> discussion of focus rings/rectangles. <wendy> have a layer above the canvas. <wendy> fo: people are handling clicks on canvas and taking an action. <wendy> cs: if the focus moves, the viewport needs to move with focus. <wendy> fo: we can do that. <wendy> sf: the magnifier has to understand that is a focusable rectangle. <prolix> good idea, cyns <wendy> sw: since the focus is programmatic. <wendy> ACTION: cs and fo talk about how porgrammatic focus works under the covers in microsoft. [recorded in] <wendy> ACTION: sw talk about how programmatic focus works under the covers in apple. [recorded in] <wendy> sf: how to get access to text...cursor (missed it) <wendy> cs: needs to be a way for input focus to be synched (like keyboard focus) <prolix> thank you all <wendy> all done! <prolix> as for theCSS control of UI colors and such, check <tantek> FYI - anyone in room 1243 - the HTML media types session has been canceled due to being covered during the TAG joint meeting <SCain> We were wondering...thanks for letting us know <weinig> <annevk> you guys are done with the <canvas> discussion? <cardona507> no <annevk> ok :) <annevk> I missed the first bit discussing the link header and nothing much was minuted <annevk> what happened so far? <weinig> tantek: <MikeSmith> scribe: timeless <scribe> ScribeNick: timeless Tantek: ... ... html has been developed at the w3c ... to describe meta data ... meanwhile HTML5 has been developed at the w3c ... which also ... ... and for html5, use cases were developed to ... ... and IanH developed a way to express meta data <Julian> Tantek: There was a draft of HTML5 with RDFa developed and ... ... pub status for HTML5+RDFa: First Public Working Draft Glenn Goldstein, MTV networks: ... scribe: I want to get a sense for the scope of microdata ... Microdata could be used to hint to search engines for discoverability ... To inject data into the document that will be rendered out by e.g. JavaScript Tantek: ... ... one more point that's relevant for the discussion ... there was a joint meeting held with the TAG earlier this week ... my understanding was that there were plans to factor the Microdata syntax out into another document? MikeSmith: there has not been a decision IanH: ... ... this was regarding vocabulary tantek: then this wasn't what I understood from yesterday's tag meeting Julian: ... ... there was something which takes out the syntax from the html5 spec ... there is a complete spec which has the html5 microdata removed ... and the working group asked for a counterproposal for keeping it in ... last week ... and there's been no such counterproposal MikeSmith: Manu offered to write a counterproposal MJS: I think Ian is planning to do one ... and I don't think it makes sense for Manu to argue against himself Julian: do we have a timeline for when the counterproposal can be expected IanH: what was the deadline? MikeSmith: do we have one? IanH: it's probably about a month ... if there is one, then it will probably happen tantek: My understanding that Tim made was that he wanted microdata separate ... his reasoning was that the html5 spec was relatively big ... and it would be nice to separate things out that could be kinda big MikeSmith: the idea was to allow microdata and RDFa to be able to stand on their own and compete... ... so if RDFa is separate, then microdata should be separate [ scribe: "compete" was not what MikeSmith said, but scribe couldn't catch what was said ] MikeSmith: it's worth mentioning that both could be used ... there's no name conflicts ... when hixie did the original draft, there were one or two name conflicts ... but that was resolved <annevk> (the deadline mentioned earlier is December 2) MikeSmith: today, they don't really conflict with one another ... they can both be used Hixie: we're already recommending a third one ... class attributes and microformats MJS: I think microdata and RDFa have someone different design centers <annevk> (see ) MJS: and I think it might be nice to let them play out in the market ... microdata is relatively good for representing simple data ... RDFa is good for expressing the complete RDF ... ... different people see values in both of those approaches ... i don't think we need to bet on one or the other ... a helpful decision would be to publish HTML5+RDFa ... and publish HTML5+microdata ... and let them be used for a bit ... and then at some day we can pick (canonicalize) the winner <Kai> +1 to mjs MJS: The HTML5 spec has a provision for binding external modules ... The current HTML5+RDFa draft takes use of that provision ... so it's kind of a question of whether validator profiles will take into account such specifications ... will they put them into tantek: does html5 define a validation model for extensions? Hixie: I don't understand what that means MJS: HTML5 defines the concrete syntax ... converting source into DOM ... and defining values for certain attributes ... other documents would have to define how their things integrate tantek: extensions work at the DOM or tree model ... instead of the parsing model? MJS: correct plh: ... ... maybe it would be worth writing a document somewhere ... saying "these are the use cases for ..." Hixie: We actually have the use cases for microdata ... but I've never seen that for RDFa <tantek> Zakim q? plh: Who is the community who sees them as unrelated ... if we don't tell them "this is what you should use" Hixie: we have an open bug on making the use of data-* more clear plh: take those use cases and put them into one document Hixie: I think the spec does this rather well ... we have an introduction that explains how to use this and (when?) ... In my view, we shouldn't split this out ... there is one thing which needs to be integrated in and that'll help make it clearer plh: do we have a list of where all the extensions are? Hixie: this isn't really practical ... there are too many ... and i don't think people will come to the spec looking for this tantek: I heard three requests from plh ... 1. use cases ... 2. a tutorial, of how to use the different pieces <Hixie> plh, has the list of use cases used for developing microdata tantek: 3. some sort of documentation iterating all of the extension mechanisms in htm5 plh: ... ... I think we could put some of this in ... a wiki (perhaps) Hixie: do we have something like this in html4? plh: no, we don't <Julian> canvas? Hixie: i don't think this has been a problem <annevk> Julian, not an author extension tantek: when I looked at html4, i had a lot of trouble figuring out what was extensible, and what wasn't ... I ended up writing this up in XMDP <annevk> Julian, also a bad example, Apple is still regretting doing it unilaterally tantek: before that, there were very few extensions in html ... it was only after someone took the time to do that, that people started doing that Hixie: I think it's healthy not to encourage this plh: If they're going to do it, we want to give them some guidance [ scribe pauses to roll back to elsewhere ] Julian: I don't buy the argument that extensibility should not be well documented to make it harder to use it ... people will do it ... see <canvas> Hixie: well <canvas> happened *way* later ... and I told apple to go to the working group <Kai> HTML 4 was not extended, because most people didn't really understand it. tantek: I'm going to stay neutral on this ... it would have helped to have the document i ended up writing ... on the other hand, the process of doing the research ... led me to a far deeper understanding, than had i just read a tutorial ... I'm going to propose to wrap this up ... for people who want such tutorials ... you're welcome to write tutorials, or file change requests Hixie: ... or file bugs tantek: ... or just leave material on the side that search engines can find ... I'm hearing no requests for a change request <Zakim> tross, you wanted to state that guidance helps avoid conflicts with future HTML features <MikeSmith> scribe: MikeSmith <tantek> tross: we have had trouble figuring out how to not step on parts of HTML5 with extensions. tross: ... while also enabling people to produce conformant documents <scribe> scribe: timeless_mbp RRSAgent: make minutes <scribe> scribenick: timeless_mbp [ proposal for coffee break ] tantek: I heard plh 's request ... i'm also going to raise this up ... this has been a source of perma threads on the mailing list ... i think it would be useful to put the use cases somewhere else other than the email archives ... it would be useful to put them, e.g. in a wiki page ... just in the interest of reducing the amount of duplicate traffic on the mailing list Hixie: I think the use cases are on a wiki page tantek: I heard that plh is willing to help <Hixie> plh, is the wiki page <Hixie> plh, and is the e-mail Hixie: oh you wanted it for extensions, not just microdata <annevk> annevk: We did have something like this <myakura> that one? <Hixie> plh, is the extension mechanism list tantek: I'm going to ask plh to review that page ... and otherwise, i think this issue is closed Hixie: plh if you look at all of the urls that were just pasted ... they're all related/derivatives ... the wiki is probably out of date ... feel free to edit the what wg wiki to your heart's content tantek: other topics? ... it was noted that these are both extensions ... mjs mentioned that html5 allows both attributes and elements ... do both RDFa and microdata add just attributes? Hixie: that's an incomplete explanation ... RDFa also adds APIs ... one of these relies on something from drag and drop Julian: I think people are working on a javascript spec for RDFa ... it's not in the spec, but people are working on a proposal <annevk> (RDFa adds near-infinite attributes) tantek: any other issues? <Hixie> what i said above is that microdata has a DOM API, not RDFa tantek: are we only talking about moving RDFa into a separate spec? Julian: it's a big time slot BryanSullivan: it would be good ... ... to have a comparison-tutorial for how these stack up together ... without having to go into them in deep detail Hixie: it would be really helpful for someone who has not established a public opinion ... to write this tantek: the source is often accused of bias based on who the source is ... it's a tough problem to compare the three in a way that is objective and seems objective plh: how stable are all three ... ? Hixie: microdata and RDFa ... microdata has issues open as to whether it should exist or not ... RDFa has issues as to whether to use namespaces or not ... for microdata, stability depends on when/how we get implementations ... without an implementation in the next six months, then the spec is very fluid tantek: does anyone want to address stability for RDFa ... technically Julian: i think there are open questions about edge cases ... i don't anything in the generic syntax Hixie: there is an open bug about namespaces in RDFa plh: I don't want someone who might spend time to make a tutorial to waste their time if there's significant potential change Hixie: ... in microdata, i think the desire is for microdata to cease to exist (?) ... from my point of view, i don't think microdata needs to exist ... RDFa has flaws ... especially namespaces/prefixes tantek: open issues can be found in the microformats wiki Julian: i want to mention that there's disagreement as to whether the use of namespaces and prefixes is an actual bug tantek: is there an actual bug for this? Hixie: bug 7670 <Hixie> MJS: the process for resolving this issue ... will be the same as if this were a bug against the main html5 spec ... the html5+RDFa editor will need to give an initial editor's response ... and if someone cares to raise a complaint, it will go to the issue tracker ... and once it's in the issue tracker, we'll solicit change proposals Kai: (deutsche telecom) ... I want to make a plug for RDFa ... RDFa is sort of on the upswing ... I don't think one can really say at this point, what's going to happen ... i bet there will be tons of use cases in ... where there will be lots of files ... using this stuff Hixie: microdata and RDFa are both serializations of RDF ... they fulfill the same use cases ... in exactly the same places where you can have urls in RDFa, you can have urls in microdata ... the exception is per field data types ... and xml-wippls (?) Kai: i find it difficult for such a relatively small group to foresee what will actually happen Hixie: just having one of either type will satisfy users <markbirbeck> @Hixie: I don't like namespaces/prefixes either, but I think they are here for a while to come. However, in the RDFa TF, I am pushing for 'URIs-everywhere' -- i.e., authors don't need to use namespaces if they don't want to. Would that address your issue with RDFa? Or is it really 'all namespaces must go'? tantek: Kai: i'm going to ask you to review those urls from irc ... and describe the Kai: I need approval before i could join ... I have a question, have you talked to Ivan Herman? Julian: I think he's on the RDFa task force tantek: the specific request was for additional use cases Kai: i think we could ask Ivan to look for additional use cases <plh> s/Evan/Ivan/ MikeSmith: was there a dispute as to whether microdata didn't address all the use cases of RDFa ... I think we have massive amounts of use cases mjs: I think if anyone wants to talk to other communities, they're welcome to ... I think chairs have a lot of work to do tantek: Kai, request: either you document use cases, or you talk to Ivan and request that he document the use cases on a web page <Hixie> markbirbeck: imho we should never introduce namespaces to text/html doug: are we expecting an equal amount of use cases from microdata? tantek: we already have this for microdata ... I want to Close this Session <timeless> scribenick: timeless <hober> (this is several months out-of-date with respect to microdata and rdfa-in-html changes, but) I put together some thoughts on how microdata, microformats, rdf, and rdfa relate: tantek: as the queue is empty, and MJS noted time is up SESSION CLOSED RRSAgent: make minutes <dsinger> moving to video in salon B any second <dsinger> but we're moving to #video, not here <mjs> ScribeNick: mjs <BryanSullivan> BS: we're talking about the server-sent events spec BS: (brief overview of how SSE works) ... would like to discuss implications of connection-oriented concept ... I work for AT&T - we are interested in push in the OMA ... interested in introducing mechanisms like SMS-push that could be used transparently <BryanSullivan> annevk, push isn't necessarily tied to SMS BS: here's some event flows ... this is an example of EventSource over http <annevk> (the site from bkaj.net is projected) BS: (more exposition of the diagram) ... let me explain the entities in this picture ... push involves the delivery of events over signaling networks, such as SMS or SIP ... flows involve the client application, and the push client (the UA) ... you need the ability to decode binary push messages ... another entity is the push server, which supports connectionless push, possibly including the push proxy gateway ... the server application runs on a server application, including SSE and possibly Push Access Protocol directly ... does everyone understand the background and objectives? MJS: is the goal to make this transparent to unmodified EventSource client and server? BS: yes ... starting with a non-transparent appraoch that works ... uses push server for events delivered outside the EventSource connection ... client application might open EventSorce in some different way BS the server would be aware of push BS: the client could make a decision to switch to connectionless ... this would work out of the box today, but some UA changes would be needed MJS: it seems like when an EventSource goes connectionless it needs to give a unique ID to the server BS: yes, we'll have to consider these things ... that was the first case ... next case, using a proxy ... this could be mostly transparent ... the UA would have to know how to receive SMS still, but the proxy could hide the push aspects from the server ... (shows SIP example) ... SIP can do MESSAGE for <1300 bytes or INVITE + MSRP for larger messages ... (another flow diagram that involves push service registration) ... in this case you have a negotiation mechanism IH: when the spec was written originally, the idea was to allow explicitly referencing an SM service with an sms: URL ... but I think this is a better way of doing it ... the middle parts are blackbox equivalent to the spec BS: the intent is not to put anything into event source, but to provide possible references as a possible deployment option MJS: need a way to close BS: in the case of IMA we have a way to deregister ... we like to take small steps ... we would probably do an update for this ... things I'm interested in is how to do routing for multiple applications <Bryan_Sullivan> scribe: BryanSullivan <timeless_mbp> ScribeNick: Bryan_Sullivan Rob: overall problem is deadlocks between browsers both accessing the same origin's local storage ... one solution: list all the places that will hold the storage mutex ... it's not clear where all those are, etc ... the approach proposed is to spec what does not release the mutex ... there is some debate whether the declaration is a MAY or MUST ... sent to the mailing list a summary of the whatwg proposed solution <fantasai> HTML Testing Task Force <fantasai> krisk facilitator <inserted> scribe: fantasai kris: Want to hear opinions ... First slide, Why create a test suite? ... Prove that 2 or more impl can be built from the spec ... That's super-important for having an interoperable web ... Example ... SVG ... If we have a set of tests that no one can pass, have to wonder if that section of spec is any good kris projects a really cool graph kris: There are some thin slices of red across all browser vendors, happens the same over time ... scary bc you can't use that feature ... he key part is to make sure at least 2 or more browser vendors pass the test ... can see over time, big chunks missing, then filled in ... improvement over time ... When I think of great html5 test suite ... At ten end of the day, it's a software project ... if we don't ship, we don't have a product ... Simple example: 3 browsers, 3 features ... success isn't everyone passing everything ... key thing is at least two browsers can do the same thing ... everyone passing will happen eventually ... after time, browser C is under pressure to fix Test XYZ b/c others have implemented it Kai: Passing a test means they all adhere to the same criteria, right? kris: HTML5 fails if we are missing tests for feature XYZ ... spec could be bad ... interop may never exist for XYZ ... Prioritization ... browser test suite space is a complicated space ... prioritization should come from coverage across spec ... some are much more comprehensive than others ... having comprehension is important ... I've looked at some test suites on W3C site, and they aren't comprehensive ... better to have comprehensive than automated ... Don't have much, but I'm interested in building momentum. Interested, ping me ... I need more people, we can do more features, more comprehension, please participate ... We have a mailing list [email protected] ... I'll be working on infrastructure stuff ... conference calls if appropriate, etc. ... initially will get consensus on organization of the test suite ... Those are my initial thoughts ... Some things are tought to test from self-description testpoint are pretty doable ... SVG text on a curve moving and then you have to select it shepazu: In addition to having real conformance tests, I don't think we should underestimate the value of the acid approach ... i think in addition to do implementability tests, I think people should consider some functional subsets of combined atomic tests ... that let authors and browsers really see how things are interacting and make sure corner cases are filled Adrian: The notion of having tests that result in some kind of score is interesting ... It's clear that where developers value something like that for evaluating some level of conformane ... But choosing an arbitrary subset of features that end up testing corners of thing sdon't actually help interoperability much Adiran: I don't think we have a problem with tests that show a level of conformance, but it should be clear that's what they're doing Adrian: Racing to some test turns out not to be very helpful shepazu: I absolutely agree ... e.g. for SVG I was trying to make some demos, and was having a frustrating time going between rather good implementations ... Sometimes it's a problem with specs, sometimes impl bugs ... What I'm talking about with combinatorial things, e.g. Acid test didn't help with real uses kris: Acid 1 included a lot of things, didn't have a score, all the pieces might be able to do but can you put them together? Adrian: Other thing I think is also fair to say, although kris will prolly kill me ... I actually also don't believe that tests designed to highlight bugs that are common across browsers are necessarily a bad thing ... They become a bad thing when people equate them with conformance tests ... It's clear that some of these Acid tests have pushed vendors to fix bugs, but then people use them to evaluate conformance Kai: I see potential to take the spec to conformance criteria, if you pass tests it eliminates variance across browsers. ... I think it's a nice opportunity to drive consistency shepazu: There are reasons why W3C hasn't done conformance testing ... To do conf testing, you need far more exhaustive set of tests than for implementability testing ... If we do crowd-sourceing and cooperation among vendors, we may have the resources <adrianba> fantasai: this is the direction that the css group is moving in <adrianba> ... hopefully this will produce tools/tests that are useful both to implementers and web developers shepazu: Kai, it's a great goal to have, Kai: Who determines what the conformance criteria are shepazu: This is possibibly a revenue stream for W3C, to help and shepherd the testing activity ... And then have fees for testing to get a gold star or something Kai: Who determines whether rendering at 0,0 or 10,10 is correct? fantasai: the spec kris: ... ... I hear sometimes that the spec is really big, but the test suite is just starting .... I wonder at that Arron: The tests aren't just for testing implementations to see that they're following the rules. ... They're also testing the spec, to make sure that they aren't saying something insane that can't be implemented Kai: I think Doug's idea is great, what is the expected result, bring that into conformance criteria (?) shepazu: What I'd like to see in UI is, "oh, here's a test. Take me to the part of the spec being testsed. Take me from there to all tests for this section." ... If there's a bug in the test, I can talk about it in somme kind of forum. ... make it easy to file a bug on the spec from there ... That's the right way of looking at it ... The idea of ... I'm tired kris: some suites start out with organization then write tests ... other ones the toc is pulled in afterwards, and the find out they're missing stuff <Kai> s/,what is the expected result/for any given feature, determine what the expected result is and then Adrian: We have to be really clear that there's a very big difference in complexity between creating a test suite that's designed to make sure every normative requireement of a spec it's possible to test ... vs test suite designe dto test a UA for compliance to a spec ... Given that we're talking about HTML5, the spec is so huge and hard to test, getting to the point of having the former is going to be really really hard ... Going the huge difference to getting a conformance test, we shouldn't underestimate that ... There's a risk of not ending up with a comprehensive test suite at all if we aim for that Michael Cooper: I like the idea of getting toward this perfect future, but also be aware that the best is the enemy of the good Michael: We need spec testing, but design for conformance testing after we exit CR shepazu: Yes, one thing we don't have w3c, is post-recommendation ... Another step in the process, but have some measure of interop Arron: Certified recommendation shepazu: Ok, now, we have fair confidence that this is really implemented widely <adrianba> fantasai: i don' <adrianba> ..t think we should ahve a separate step in the process <adrianba> ... we should have a good way of reporting on implementations <adrianba> ...like one of those charts - this is where we are at shepazu: From a product point of view, from a business point of view, having that arbitrary stamp is useful <MichaelC> Expansion on my comments: Have a perfect future in sight, knowing you won't get all the way there, but at least you know you're on the road towards it (rather than some other road) kris points to SVG charts. Maybe this shouldn't be a must, because none of the browsers implement it Adiran: Maybe that's a case for profiling specs shepazu: ... funds .. certification ... I think oneof the valuable things would be that you would ahave I was just talking to Media Annotations about testing ... They had no idea. They had never seen a w3c test before ... It would be really nice if we had funding for staff and all they did was shepherd groups towards better testability fantasai: Didn't the QA working group do this? shepazu: no, they made guidelines, but didn't have a hands-on approach fantasai: So be more like the i18n wg? shepazu: Yeah. Promoting testing activity and promote testing culture kris: All of the w3c specs, there's a lot of consistency. If you see must, it really means must ... But from testing it's all different ... they got to that point and did whatever they needed to do ... You know what, you really need to have a toc ... you need a test for every must Adrian: There's also a difference between creating guidance which is intended to be prescriptive, e.g. you must do this minimum level of thing to qualify as a test suite ... vs documenting a best practice, "if you follow this process you're more likely to have a more testable spec" ... Guidance that is specific enough to be useful isn't general enough to be applicable broadly shepazu: That's why having the QA people being hands on shpazu: coming in and presenting about this document you didn't read :) kris: I found recently the W3C/QA part with definitions.. how was this hidden? shepazu: I'd love to promote interop between not just impl but also specifications. shepauz: e.g. right now CSS, SVG, and XSLFO are all talking about text wrapped to shapes shepazu: That's great, we need more coordination like that ... One thing that would help with that is having people that wander group to group ... But also extracting definitions. DO I need to define this myself, or extract a definition that someone else wrote Adrian: Also having someone say "you do realize those guys over there are doing this as well" ... My experience as a consultant is noticing that e.g. you have 3 teams doing this and they don't all know about each other shepazu: Musts, only things that are a must, once you reach REC status, are royalty-free ... the shoulds aren't ... it's serving this weird dual purpose of interop and IP ... I didn't know this until several months into being a team member ... Nobody knows this (most of the room didn't know this) shepazu: One thing atha profile allows, it can enable .. You could have a spec that does the IP problem, and then put shoulds and mays on top of the musts in the spec using a profile Adrian: Not even about that. For large specs, you can't do all of it in one go. THe advantage of a profile is that it encourages people to do the same things first ... You get interop on that, then move to the next step shepazu: That's a coordination that encourages cooperation ... People say profiles encourage schizms. In some cases it can. But it also allows coordination kris: They don't know which things they can go use ... even stuff that is in a draft, no guarantees here, authors want to know that ... This is a must, go use it, and I think that's ok shepazu: i would love for the validator to say "your code is valid, but these features are not supported by these UAs" ... We could do that if we had a comprehensive testing strategy kris: It's not that ppl ask if there's a bug or not, but more "is there a reasonable chance of the <video> tag turning into a video element" Adrian: We've spent a lot of time congratulating ourselves for identifying that testing is important, but how are we actually going to get some tests? ... We now have a testing task force, and apparently, at least the start of a group of people that care about testing ... what's the next step Mike: get mailing list going, call for participation ... Don't want to really get things going until you get critical mass on the mailing list ... Then once we have discussions going, have some telecons Adrian: What does momentum look like? <adrianba> fantasai: here's a goal <adrianba> ...have 5 tests well documented in a common repository Arron: And the standard format is important, becaus eotherwise you'll get a lot of tests in a lot of different formats that you can't usefully process Have 5 tests in a standard format that's well documented in a common publicly-accessible versioning repository Mike: I dont' remember when it was, I thik 1.5 years ago we tried to start the testing effort. We started a mailing list ... But we had nobody agree to lead the effort. The one biggest thing is having somebody to lead the effort ... Every time I've been involved in somethng like this ... there are some things that should be consensus decisions, and in some cases want executive decisions ... e.g. version control system choice ... The thing I worrya bout most is that it gets bogged down in decisions about what tools to use and even to some degree you need to have decisions on things like, associating metadata with tests ... That allows the system consumign to use that data ... We need a unique id for the test ... so that we can identify it for reporting etc. ... Most people don't care on the format, but somebody needs to take initiative and decide shepazu: So we got together, with plh, .. talking about omocah hakathon ... We anted to solve some real problems, like set up a repository, set up a way to test across borwsers ... set up common ways to view tests, review tests, Mike: I just want someone in charge. We have a lot of people wanting to submit test cases kris: I worry about spending time on infrastructure, but we need the tests ... that's the higher bit than omocha thing. ... If you focus on toolset too much, you have a great tool set but no project shipped Arron: I think if we have a large set of tests already that are held up by this TF trying to figure out what we need ... just send them ... Then we can look at them and try to blend their formats together to create the standard format ... And then we can in parallel set up the infrastructure <scribe> ACTION: Arron to start ball rolling on this [recorded in] <scribe> ACTION: doug to send out instructions on how to get CVS access for dev.w3.org's htmlwg test suite folder [recorded in] shepazu: Doesn't matter if the CVS repo is temporary and we move to something else later Adrian: Let's make it a scratch folder, we collect things here for now shepazu: Give each implementor their own folder, and they organize however they want Arron: THis is just to collect stuff so that we can review it quickly and decide on what formats we need ... Once we have those formats decided, we can push back on people to update to the common formats ArroN; and then we can start approving and reviewing those tests and start moving them into a differetn section off the repository that's approved Adrian: Once we've reviewed and arrived at a format for individual test cases, knowing that will help us figure out what the big picture organization should be ... for the overall test suite ... And determining what that overall structure looks like is a candidate for the executive decision Arron: ... ... We have this first piece, tackle that, move on to the next piece, move on to the next piece, etc. kris: Anyone else anything else? shepazu: Do you know Sylvain Pasche? ... He built on a very similar infrastructure, he's executing tests on all browsers ... it's very elegant ... this is one of the key thigns the test format should eneable ... We like reftests because it enables automation ... The SVG wg spent and additional 2 years of writing tests after writing the spec ... People in wgs are not thinking ahead, thinking that a significant part of their charter time ... will be spent on test Adrian: Also because the people good at writing specs often arent' the ones writing tests Arron: That happened with 2.1, we came to the spec and stated writing tests and wound up bringing lots fo issues people tell stories about why tests are important minuter takes a break <Kai> +1 <tantek> concluding session starting in room A This is scribe.perl Revision: 1.135 of Date: 2009/03/02 03:52:20 Check for newer version at Guessing input format: RRSAgent_Text_Format (score 1.00) Succeeded: s/??/sw/ Succeeded: s/riht/right/ Succeeded: s/html/RDFa/ Succeeded: s/XX/First Public Working Draft/ Succeeded: s/John/Glenn/ Succeeded: s/somethingXX/the Microdata syntax/ Succeeded: s/they want/we want/ Succeeded: s/spot/slot/ Succeeded: s/use cases/open issues/ Succeeded: s/Evan/Ivan/ Succeeded: s/Evan/Ivan/ FAILED: s/Evan/Ivan/ Succeeded: s/push/SMS-push/ Succeeded: s/interested in SMS-push/interested in push/ Succeeded: s/BS/BS:/ Succeeded: s/the/the whatwg/ Succeeded: i/Want to hear/scribe: fantasai Succeeded: s/ahve/have/ Succeeded: s/borwsers/browsers/ Succeeded: s/featuer/feature/ Succeeded: s/shoudl/should/ Succeeded: s/ocvreage/coverage/ Succeeded: s/moementum/momentum/ FAILED: s/,what is the expected result/for any given feature, determine what the expected result is and then/ Succeeded: s/Adiran/Adrian/ Found Scribe: wendy, scribe WARNING: "Scribe: wendy, scribe" command found, but no lines found matching "<wendy, scribe> . . . " Use "ScribeNick: dbooth" (for example) to specify the scribe's IRC nickname. Found Scribe: timeless Inferring ScribeNick: timeless Found ScribeNick: timeless Found Scribe: MikeSmith Inferring ScribeNick: MikeSmith Found Scribe: timeless_mbp Inferring ScribeNick: timeless_mbp Found ScribeNick: timeless_mbp Found ScribeNick: timeless Found ScribeNick: mjs Found Scribe: BryanSullivan Found ScribeNick: Bryan_Sullivan Found Scribe: fantasai Inferring ScribeNick: fantasai Scribes: wendy, scribe, timeless, MikeSmith, timeless_mbp, BryanSullivan, fantasai ScribeNicks: timeless, MikeSmith, timeless_mbp, mjs, Bryan_Sullivan, fantasai WARNING: No "Present: ... " found! Possibly Present: Adiran Adrian Arron BS Bert BryanSullivan Bryan_Sullivan Dashiva Eliot_Graff Gondo Hixie IH IanH Julian Kai Kangchan Lachy Laura MJS Michael MichaelC Mike MikeSmith Rob SCain TabAtkins adrianba annevk cardona507 cardona507_ cs cyns doug dsinger eric_carlson fantasai fo frankolivier glenn glenng grosmai hober html-wg2 inserted jallan jf joined js kohei kris left manu markbirbeck mm mnot myakura plh prolix richt rubys satoshi sc scribenick sf shelleyp shepauz shepazu shpazu silvia sw tantek timeless timeless_mbp tross wc weinig wendy yof: arron cs doug sw[End of scribe.perl diagnostic output]
http://www.w3.org/2009/11/05-html-wg2-minutes.html
CC-MAIN-2014-42
refinedweb
8,919
73.07
Character Controller progress Some good progress this morning. The player capsule (red) now correctly glides and slides around the environment, no longer penetrating either the static or dynamic shapes, and we have the floor ray cast set up and working to detect the distance and normal of the current floor, so can start work on gravity for the player next. I'm doing a discrete check for the character controller rather than a swept test. You accept the risk of tunnelling here (which one can work around) but get a load of stuff simplified in return. For example, sliding along surfaces is handled automatically when doing a discrete check - you GJK the player capsule and the object together, and then correct the player position based on the minimum separating vector and this automatically slides you along a surface, at the correct proportion based on the angle. It also means that moving, dynamic objects automatically push the player around out of the box, since the check is discrete and run every frame. Later, we can look at examining the mass of the colliding object and using this to ratio off the MSV between the object and the player in some way, so that the player has the ability to push dynamic objects around too, but this is not a great concern for me. I'm not trying to write a heavy physics sim game here. I'm more interested in being able to do a subset of things with the physics engine (rag-dolls, doors, trapdoors etc) but keep tight control of the things I want to (player, NPCs, etc). This is the current internals of the Kcc::move() method: void Kcc::move(Physics &physics, const Vec3 &step){ Vec3 result = pos + step; Matrix to = translationMatrix(result); BroadphaseResult br = physics.broadphaseAabb(shape.aabb(to).expanded(2)); int it = 0; bool loop = true; while(it < 5 && loop) { loop = false; for(auto &b: br.bodies) { ConvexResult r = physics.convexIntersection(shape, to, *b); if(r.valid()) { result += r.separatingVector(); to = translationMatrix(result); loop = true; } } ++it; } pos = result; Vec3 base = pos; base.y -= (ht / 2); RayResult rr = physics.rayCast(Ray(base, Vec3(0, -1, 0)), 100); if(rr.valid()) { debugAddLine(base, rr.worldPoint(), makeColor(255, 255, 255)); debugAddLine(rr.worldPoint(), rr.worldPoint() + rr.normal(), makeColor(0, 255, 0)); }}You basically start with the Kcc at the player position, then call move() with the step then read back the final position from the Kcc when updating the character. You can see we use the broadphase to get the potentially colliding objects, then loop over these, checking for penetration and correcting the final position by the relevant amount. [EDIT: Seems I'm spinning needlessly around if no collisions as well, which needs sorting. Duh! Handy to post code in a journal then read it back sometimes...] Needs a bit of work to handle sharp, trapping corner shapes as it can wiggle about a bit in there at the moment, but this should be easy enough to sort out if required. Whether it is needed will depend on how the final level geometry possibilities work out and whether I end up with large dynamic environment objects. Not sure if I want this yet. I'm also then doing a ray cast to find the floor and just handing some of this information off to the debug system. I'm pretty lazy with my debug system since it will ultimately be removed so it is all just global state in an anonymous namespace inside DebugRender.cpp with an interface of free functions, so it is easy to include it anywhere without having to put in proper dependency flows like I do with the real stuff. So you can just add debug lines anywhere you include DebugRender.h, and they are drawn as part of the GameMode::render() method called by Application. There's also a setDebugScreenMessage() method that uses varadic templates to allow you to pass any parameters that can be passed to std::ostream, which is then drawn each frame in the top left corner of the screen - handy for debug output that would grow enormous if pushing out to QtCreator's Application Output window via OutputDebugString(). There's then another method, addPhysicsToDebugLines(), that runs through and adds lines representing all the physics shapes to the debugLines list, so you can see where the actual physics shapes are and ensure they are correctly synchronised with the Scene nodes representing the solid objects. I might take a small detour and add in some kind of shadowing system soon, since it makes it a lot easier to be able to tell at a glance where things are in relation to each other. This will mainly consist of just copying code from previous projects, like a lot of things seem to be these days. I have amassed a vast library of failed projects on this HDD now and have quite a large library of my own code to draw on, which is nice Thanks, as usual, for stopping by and I hope this is of interest to someone, somewhere. 2 Recommended Comments There are no comments to display. You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy!Register a new account Already have an account? Sign in here.Sign In Now
https://www.gamedev.net/blogs/entry/2261500-character-controller-progress/
CC-MAIN-2018-47
refinedweb
894
59.03
Suppose I have two sites - A and B - both with 5/5 Mbps WAN links that are connected via L2L VPN. Each Site is defined in AD Sites 'n Services with its own domain controllers, etc. The problem is that for folder redirection to work "well" I would ideally like the folder redirection to occur based on which site the user hails from - for example, if user in site A, folder redirection would work with a share on A. Or, if the user happens to move to B, the folders would redirect to a location on B. Finally, the storage servers responsible for A and B's redirection (the locations they are pointed to) would replicate. Is this type of functionality possible in Group Policy/AD? Or do I have to use a single "abstracted" share that can route to the appropriate, replicated store based on subnet? You can do this one of two ways. The way that I would recommend is as follows: Create a DFS namespace for your shares, something like \\domain\users should do. \\domain\users Add both servers to this DFS root. Check the box so that clients prefer (or are required) to use a server located in their AD site. Yes, it's smart enough to determine this using subnets defined in AD Sites & Services Set up replication between the two shares using DFS-R. Make a GPO that redirects to \\domain\users and link it like you would any other OU. DFS is smart enough to refer them to the closest server based on info in AD Sites and Services. The other way would be: Set up DFR-R between the two servers, but don't add them to a namespace. This means the shares will be replicated, but won't share a common domain-based path. You can link GPOs to sites, not just OUs. Link a GPO at each site that points to \\server1\users for site 1 and like one that points to \\server2\users at site 2. \\server1\users \\server2\users I prefer the first choice, because using a DFS namespace makes file server upgrades a breeze and it allows you to only have to maintain one GPO, while still allowing users that might roam to either site to dynamically map whatever server is closest. Either way, DFS (in whether it's replication, namespaces, or both) should be involved. By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 1649 times active
http://serverfault.com/questions/414112/site-specific-folder-redirection-through-group-policy
CC-MAIN-2016-18
refinedweb
421
78.18
flasker 0.1.24. - A simple pattern to organize your project via the current_project proxy (cf. Structuring your project for an example)., Celery, and SQLAlchemy. Flasker handles the setup but intentionally leaves you free to interact with the raw Flask, Celery and SQLAlchemy session objects. Some knowledge of these frameworks is therefore required. Flasker also comes with two optional extensions: Quickstart Installation: $ pip install flasker To create a new project: $ flasker new basic This will create a project configuration file default.cfg in the current directory and a basic Bootstrap themed app (this can be turned off with the -a flag). DB_URL = sqlite:///db/db.sqlite [APP] # any valid Flask configuration option can go here # cf for the full list DEBUG = True TESTING = True [CELERY] # any valid Celery configuration option can go here # cf BROKER_URL = redis:// When it starts, the flasker command line tool imports all the modules declared in the MODULES key of the configuration file (in the PROJECT section). Inside each of these you can use is available on db.session db = current_project.db #. Cf. the Wiki for all the available configuration options. Extensions ReSTful API This extension is meant to very simply expose URL endpoints for your models. There exist other great ReSTful extensions for Flask. Here are the main differences with two popular ones: FlaskRESTful works at a sligthly lower level. It provides great tools but it would still require work to tie them with each model. Here, the extension uses the Flasker model structure to do most of the work. Flask-Restless is similar in that it also intends to bridge the gap between views and SQLAlchemy models. However the Flasker API is built to provide: - Faster queries: the 'json). Here is a very simple sample file: from flasker import current_project from flasker.ext.api import APIManager from flasker.util import Model from sqlalchemy import Column, ForeignKey, Integer, Unicode # Create the APIManager api_manager = APIManager(add_all_models=True) current_project.register_manager(api_manager) # Define the models class House(Model): id = Column(Integer, primary_key=True) address = Column(Unicode(128)) class Cat(Model): name = Column(Unicode(64), primary_key=True) house_id = Column(ForeignKey('houses.id')) house = relationship('House', backref='cats') Which will create the following endpoints: - /api/houses/ (GET, POST) - /api/houses/<id> (GET, PUT, DELETE) - /api/houses/<id>/cats/ (GET, PUT) - /api/houses/<id>/cats/<position> (GET) - /api/cats/ (GET, POST) - /api/cats/<name> (GET, PUT, DELETE) Cf. the Wiki for the complete list of available options. Authentication This extension uses Flask-Login to handle sessions and Google OAuth 2 to handle) Cf. the Wiki for the complete list of available options. Utilities Available utilities include: - Caching - Jsonifying - Logging Cf. the Wiki for a more detailed explanation on some of the available utilities. Sources - Downloads (All Versions): - 190 downloads in the last day - 1015 downloads in the last week - 3121 downloads in the last month - Author: Matthieu Monsch - License: MIT - Categories - Package Index Owner: mtth - DOAP record: flasker-0.1.24.xml
https://pypi.python.org/pypi/flasker/0.1.24
CC-MAIN-2014-10
refinedweb
491
55.95
public int computeArea(int A, int B, int C, int D, int E, int F, int G, int H) { int areaOfSqrA = (C-A) * (D-B); int areaOfSqrB = (G-E) * (H-F); int left = Math.max(A, E); int right = Math.min(G, C); int bottom = Math.max(F, B); int top = Math.min(D, H); //If overlap int overlap = 0; if(right > left && top > bottom) overlap = (right - left) * (top - bottom); return areaOfSqrA + areaOfSqrB - overlap; } Hello! So, the code should be fairly straightforward. I first calculate the area of each rectangle and then calculate the overlapping area between the two rectangles (if there is one!). At the end, we sum up the individual areas and subtract the overlapping area/0 ! Feel free to ask should you have any queries for me OR if my solution can be improved upon! :) hero has the same opinion! Mine is similar with yours :) public int computeArea(int A, int B, int C, int D, int E, int F, int G, int H) { int together; if (C <= E || A >= G || B >= H || D <= F) { together = 0; } else { int x = Math.min(C, G) - Math.max(A, E); int y = Math.min(D, H) - Math.max(B, F); together = x * y; } return (C - A) * (D - B) + (G - E) * (H - F) - together; } Same here, int computeArea(int A, int B, int C, int D, int E, int F, int G, int H) { int total = (C-A)*(D-B) + (G-E)*(H-F); int L = min(D, H), J = max(B, F); int I = max(A, E), K = min(C, G); if (L > J && K > I) total -= ( (L-J)*(K-I) ); return total; } public class Solution{ public int computeArea(int A, int B, int C, int D, int E, int F, int G, int H){ int overlapLength = Math.max(A,E) < Math.min(C,G) ? Math.min(C,G) - Math.max(A,E) : 0 ; int overlapHeight = Math.max(B,F) < Math.min(D,H) ? Math.min(D,H) - Math.max(B,F) : 0; return (C-A)*(D-B)+(G-E)*(H-F) - overlapLength*overlapHeight; } } That's what I thought. But where do you have squares here? All I know is that they're rectangles. Yeah. Should've been areaOfRectA. Totally overlooked the name when I was writing the code. Thanks. In the first line, int areaOfSqrA = (C-A) * (D-B); is it possible this area is negative ? lets say (C,D) = (-2,2) , (A,B ) = ( -1,1), then the area = (-2 -(-1)) * (2-1) = -2, this then makes no sense. Correct me if I am wrong. Hi! I can see why you would think that but remind yourself on how we usually calculate an area of a RECTANGLE. Width * Height right? Ok. Now, how do we get the width/height out of the coordinates that a rectangle occupy? In this problem (on the 2D plane), C represents the RIGHT-side (X-coordinate) of a rectangle whereas A represents the LEFT-side (X-coordinate) of the rectangle. A can NEVER be HIGHER than C. Try sketching the rectangle out and give it VALID coordinates. You'll see that C >= A. Same goes for D & B for the Y-coordinates. Hope that makes sense. :) First you called them squares, now you even say triangle... you really don't like rectangles, don't you? :-P Omg. I don't even know how that happened. Haha. Alright. I've changed it. Hopefully, it's right this time. Thanks for the clarification! :) @StefanPochmann lol........... @waisuan Could you please explain the logic behind finding values of left, right, bottom and top and overlap ? Or if you could point to some resources that will help me understand the logic Hello, Interesting answer. I would like to suggest mine as well. In Java, this task can be simply done as follows: public int solution(int K, int L, int M, int N, int P, int Q, int R, int S) { int areaT_,area1_ ,area2_ ,areaInt_ = 0; Rectangle rect_KLMN = new Rectangle(K, L, M - K, N - L); Rectangle rect_PQRS = new Rectangle(P, Q, R - P, S - Q); area1_ = (int) (rect_KLMN.height * rect_KLMN.width); area2_ = (int) (rect_PQRS.height * rect_PQRS.width); if (rect_KLMN.intersects(rect_PQRS)) { Rectangle rect_int = rect_KLMN.intersection(rect_PQRS); areaInt_ = (int) (rect_int.height * rect_int.width); } areaT_ = area1_ + area2_ - areaInt_; return areaT_; } Looks like your connection to LeetCode Discuss was lost, please wait while we try to reconnect.
https://discuss.leetcode.com/topic/15733/my-java-solution-sum-of-areas-overlapped-area
CC-MAIN-2018-05
refinedweb
728
75.3
Sheets Sheets Sheets Class Definition Defines the Sheets Class. When the object is serialized out as xml, its qualified name is x:sheets. [DocumentFormat.OpenXml.ChildElementInfo(typeof(DocumentFormat.OpenXml.Spreadsheet.Sheet))] public class Sheets : DocumentFormat.OpenXml.OpenXmlCompositeElement type Sheets = class inherit OpenXmlCompositeElement Public Class Sheets Inherits OpenXmlCompositeElement - Inheritance - - Attributes - Remarks [ISO/IEC 29500-1 1st Edition] sheets (Sheets) This element represents the collection of sheets in the workbook. There are different types of sheets you can create in SpreadsheetML. The most common sheet type is a worksheet; also called a spreadsheet. A worksheet is the primary document that you use in SpreadsheetML to store and work with data. A worksheet consists of cells that are organized into columns and rows. Some workbooks might have a modular design where there is one sheet for data and another worksheet for each specific analysis performed on that data. In a complex modular system, you might have dozens of sheets, each dedicated to a specific task. [Example: <sheets> <sheet name="Sheet1" sheetId="1" r: <sheet name="Sheet2" sheetId="2" r: <sheet name="Sheet5" sheetId="3" r: <sheet name="Chart1" sheetId="4" type="chartsheet" r: </sheets> end example] [Note: The W3C XML Schema definition of this element’s content model (CT_Sheets) is located in §A.2. end note] � ISO/IEC29500: 2008.
https://docs.microsoft.com/en-us/dotnet/api/documentformat.openxml.spreadsheet.sheets?view=openxml-2.8.1
CC-MAIN-2019-43
refinedweb
216
55.24
Lab 13: Final Review Due by 11:59pm on Tuesday, August 10. Starter Files Download lab13.zip. Inside the archive, you will find starter files for the questions in this lab, along with a copy of the Ok autograder. The questions in this assignment are not graded, but they are highly recommended to help you prepare for the upcoming final. You will receive credit for this lab even if you do not complete these questions. Suggested Questions Trees Q1: Prune Min Write a function that prunes a Tree t mutatively. t and its branches always have zero or two branches. For the trees with two branches, reduce the number of branches from two to one by keeping the branch that has the smaller label value. Do nothing with trees with zero branches. Prune the tree in a direction of your choosing (top down or bottom up). The result should be a linear tree. def prune_min(t): """Prune the tree mutatively from the bottom up. >>> t1 = Tree(6) >>> prune_min(t1) >>> t1 Tree(6) >>> t2 = Tree(6, [Tree(3), Tree(4)]) >>> prune_min(t2) >>> t2 Tree(6, [Tree(3)]) >>> t3 = Tree(6, [Tree(3, [Tree(1), Tree(2)]), Tree(5, [Tree(3), Tree(4)])]) >>> prune_min(t3) >>> t3 Tree(6, [Tree(3, [Tree(1)])]) """ "*** YOUR CODE HERE ***" Use Ok to test your code: python3 ok -q prune_min Q Scheme Q3: Split Implement split-at, which takes a list lst and a non-negative number n as input and returns a pair new such that (car new) is the first n elements of lst and (cdr new) is the remaining elements of lst. If n is greater than the length of lst, (car new) should be lst and (cdr new) should be nil. scm> (car (split-at '(2 4 6 8 10) 3)) (2 4 6) scm> (cdr (split-at '(2 4 6 8 10) 3)) (8 10) (define (split-at lst n) 'YOUR-CODE-HERE ) Use Ok to test your code: python3 ok -q split-at Q ) Use Ok to test your code: python3 ok -q compose-all Tree Recursion Q5: Align Skeleton Have you wondered how your CS61A exams are graded online? To see how your submission differs from the solution skeleton code, okpy uses an algorithm very similar to the one below which shows us the minimum number of edit operations needed to transform the the skeleton code into your submission. Similar to pawssible_patches in Cats, we consider two different edit operations: - Insert a letter to the skeleton code - Delete a letter from the skeleton code Given two strings, skeleton and code, implement align_skeleton, a function that minimizes the edit distance between the two strings and returns a string of all the edits. Each addition is represented with +[], and each deletion is represented with -[]. For example: >>> align_skeleton(skeleton = "x=5", code = "x=6") 'x=+[6]-[5]' >>> align_skeleton(skeleton = "while x<y", code = "for x<y") '+[f]+[o]+[r]-[w]-[h]-[i]-[l]-[e]x<y' In the first example, the +[6] represents adding a "6" to the skeleton code, while the -[5] represents removing a "5" to the skeleton code. In the second example, we add in the letters "f", "o", and "r" and remove the letters "w", "h", "i", "l", and "e" from the skeleton code to transform it to the submitted code. Note: For simplicity, all whitespaces are stripped from both the skeleton and submitted code, so you don't have to consider whitespaces in your logic. align_skeleton uses a recursive helper function, helper_align, which takes in skeleton_idx and code_idx, the indices of the letters from skeleton and code which we are comparing. It returns two things: match, the sequence of edit corrections, and cost, the numer of edit operations made. First, you should define your three base cases: - If both skeleton_idxand code_idxare at the end of their respective strings, then there are no more operations to be made. - If we have not finished considering all letters in skeletonbut we have considered all letters in code, then we simply need to delete all the remaining letters in skeletonto match it to code. - If we have not finished considering all letters in codebut we have considered all letters in skeleton, then we simply need to add all the remaining letters in codeto skeleton. Next, you should implement the rest of the edit operations for align_skeleton and helper_align. You may not need all the lines provided. def align_skeleton(skeleton, code): """ Aligns the given skeleton with the given code, minimizing the edit distance between the two. Both skeleton and code are assumed to be valid one-line strings of code. >>> align_skeleton(skeleton="", code="") '' >>> align_skeleton(skeleton="", code="i") '+[i]' >>> align_skeleton(skeleton="i", code="") '-[i]' >>> align_skeleton(skeleton="i", code="i") 'i' >>> align_skeleton(skeleton="i", code="j") '+[j]-[i]' >>> align_skeleton(skeleton="x=5", code="x=6") 'x=+[6]-[5]' >>> align_skeleton(skeleton="return x", code="return x+1") 'returnx+[+]+[1]' >>> align_skeleton(skeleton="while x<y", code="for x<y") '+[f]+[o]+[r]-[w]-[h]-[i]-[l]-[e]x<y' >>> align_skeleton(skeleton="def f(x):", code="def g(x):") 'def+[g]-[f](x):' """ skeleton, code = skeleton.replace(" ", ""), code.replace(" ", "") def helper_align(skeleton_idx, code_idx): """ Aligns the given skeletal segment with the code. Returns (match, cost) match: the sequence of corrections as a string cost: the cost of the corrections, in edits """ if skeleton_idx == len(skeleton) and code_idx == len(code): return _________, ______________ if skeleton_idx < len(skeleton) and code_idx == len(code): edits = "".join(["-[" + c + "]" for c in skeleton[skeleton_idx:]]) return _________, ______________ if skeleton_idx == len(skeleton) and code_idx < len(code): edits = "".join(["+[" + c + "]" for c in code[code_idx:]]) return _________, ______________ possibilities = [] skel_char, code_char = skeleton[skeleton_idx], code[code_idx] # Match if skel_char == code_char: _________________________________________ _________________________________________ possibilities.append((_______, ______)) # Insert _________________________________________ _________________________________________ possibilities.append((_______, ______)) # Delete _________________________________________ _________________________________________ possibilities.append((_______, ______)) return min(possibilities, key=lambda x: x[1]) result, cost = ________________________ return result Use Ok to test your code: python3 ok -q align_skeleton OOP Q Button: """ Represents a single button """ def __init__(self, pos, key): """ Creates a button """ self.pos = pos self.key = key self.times_pressed = 0): ________________ for _________ in ________________: ________________ def press(self, info): """Takes in a position of the button pressed, and returns that button's output""" if ____________________: ________________ ________________ ________________ ________________ def typing(self, typing_input): """Takes in a list of positions of buttons pressed, and returns the total output""" ________________ for ________ in ____________________: ________________ ________________ Use Ok to test your code: python3 ok -q Keyboard Iterators and Generators Q7: ***" Use Ok to test your code: python3 ok -q pairs Q8: ***" def __next__(self): "*** YOUR CODE HERE ***" def __iter__(self): "*** YOUR CODE HERE ***" Use Ok to test your code: python3 ok -q PairsIterator Q9: Str Write an iterator that takes a string as input and outputs the letters in order when iterated over. class Str: """ >>> s = Str("hello") >>> for char in s: ... print(char) ... h e l l o >>> for char in s: # a standard iterator does not restart ... print(char) """ "*** YOUR CODE HERE ***" Use Ok to test your code: python3 ok -q Str Linked Lists Q10: Fold Left(______, ______, ______) Use Ok to test your code: python3 ok -q foldl Q11: Fold Right ***" Use Ok to test your code: python3 ok -q foldr Q12: Filter With Fold Write the filterl function, using either foldl or foldr. def filterl(lst, pred): """ Filters LST based on PRED >>> lst = Link(4, Link(3, Link(2, Link(1)))) >>> filterl(lst, lambda x: x % 2 == 0) Link(4, Link(2)) """ "*** YOUR CODE HERE ***" Use Ok to test your code: python3 ok -q filterl Q13: Reverse With Fold! Extra for experience: Write a version of reverse that do not use the Link constructor. You do not have to use foldl or foldr. def reverse(lst): """ Reverses LST with foldl >>> reverse(Link(3, Link(2, Link(1)))) Link(1, Link(2, Link(3))) >>> reverse(Link(1)) Link(1) >>> reversed = reverse(Link.empty) >>> reversed is Link.empty True """ "*** YOUR CODE HERE ***" Use Ok to test your code: python3 ok -q reverse Q14: Fold With Fold foldr(link, step, identity)(z) Use Ok to test your code: python3 ok -q foldl2
https://inst.eecs.berkeley.edu/~cs61a/su21/lab/lab13/
CC-MAIN-2021-49
refinedweb
1,352
64.14
The Dataflow SDK for Python supports batch execution only. Streaming processing is not yet available. Batch programs can be executed locally (mostly used for development and testing purposes), or in the Google Cloud using the Cloud Dataflow service. The support page contains information about the support status of each release of the Dataflow SDK. To install and use the Dataflow SDK, see the Dataflow SDK installation guide. Cloud Dataflow SDK distribution contents The Cloud Dataflow SDK distribution contains a subset of the Apache Beam ecosystem. This subset includes the necessary components to define your pipeline and execute it locally and on the Cloud Dataflow service, such as: - The core SDK - DirectRunner and DataflowRunner - I/O components for other Google Cloud Platform services The Cloud Dataflow SDK distribution does not include other Beam components, such as: - Runners for other distributed processing engines - I/O components for non-Cloud Platform services Release notes This section provides each version's most relevant changes for Cloud Dataflow customers. 2.2.0 (December 8, 2017) Version 2.2.0 is based on a subset of Apache Beam 2.2.0. See the Apache Beam 2.2.0 release notes for additional change information. Added two new Read PTransforms ( textio.ReadAllFromText and avroio.ReadAllFromText) that can be used to read a very large number of files. Added adaptive throttling support to DatastoreIO. DirectRunner improvements: DirectRunner will optionally retry failed bundles, and streaming pipelines can now be cancelled with Ctrl + C. DataflowRunner improvements: Added support for cancel, wait_until_finish(duration), and job labels. Improved pydoc text and formatting. Improved several stability, performance, and documentation issues. 2.1.1 (September 22, 2017) Version 2.1.1 is based on a subset of Apache Beam 2.1.1. Fixed a compatibility issue with the Python six package. 2.1.0 (September 1, 2017) Version 2.1.0 is based on a subset of Apache Beam 2.1.0. See the Apache Beam 2.1.0 release notes for additional change information. Identified issue: This release has a compatibility issue with the Python six 1.11.0 package. Work around this issue by running pip install google-cloud-dataflow==2.1.0 six==1.10.0. Limited streaming support in DirectRunner with PubSub source and sink, and BigQuery sink. Added support for new pipeline options --subnetwork and --dry_run. Added experimental support for new pipeline option --beam_plugins. Fixed an issue in which bzip2 files were being partially read; added support for concatenated bzip2 files. Improved several stability, performance, and documentation issues. 2.0.0 (May 31, 2017) Version 2.0.0 is based on a subset of Apache Beam 2.0.0. See the Apache Beam 2.0.0 release notes for additional change information. This release includes breaking changes. Identified issue: This release has a compatibility issue with the Python six 1.11.0 package. Work around this issue by running pip install google-cloud-dataflow==2.0.0 six==1.10.0. Added support for using the Stackdriver Error Reporting Interface. Added support for Dataflow templates to file based sources and sinks (e.g. TextIO, TfRecordIO, AvroIO). Added support for the region flag. Added an additional Google API requirement. You must now also enable the Cloud Resource Manager API. Moved pipeline options into options modules. finish_bundle now only allows emitting windowed values. Moved testing related code to apache_beam.testing, and moved assert_that, equal_to, and is_empty to apache_beam.testing.util. Changed the following names: - Use apache_beam.io.filebasedsinkinstead of apache_beam.io.file. - Use apache_beam.io.filesysteminstead of apache_beam.io.fileio. - Use TaggedOutputinstead of SideOutputValue. - Use AfterAnyinstead of AfterFirst. - Use apache_beam.optionsinstead of apache_beam.utilfor pipeline_optionsand related imports. Removed the ability to emit values from start_bundle. Removed support for all credentials other than application default credentials. Removed --service_account_name and --service_account_key_file flags. Removed IOChannelFactory and replaced it with BeamFileSystem. Removed deprecated context parameter from DoFn. Removed SingletonPCollectionView, IterablePCollectionView, ListPCollectionView, and DictPCollectionView. Use AsSingleton, AsIter, AsList, and AsDict instead. Improved several stability, performance, and documentation issues. Note: All versions prior to 2.0.0 are DEPRECATED. 0.6.0 (March 23, 2017) This release includes breaking changes. Identified issue: Dataflow pipelines that read GroupByKey results more than once, either by having multiple ParDos consuming the same GroupByKey results or by reiterating over the same data in the pipeline code, may lose some data. To avoid being affected by this issue, upgrade to the Dataflow SDK for Python version 2.0.0 or later. Added Metrics API support to the DataflowRunner. Added support for reading and writing headers to text files. Moved Google Cloud Platform specific IO modules to apache_beam.io.gcp namespace. Removed label as an optional first argument for all PTransforms. Use label >> PTransform(...) instead. Removed BlockingDataflowPipelineRunner. Removed DataflowPipelineRunner and DirectPipelineRunner. Use DataflowRunner and DirectRunner instead. Improved several stability, performance, and documentation issues. 0.5.5 (February 8, 2017) This release includes breaking changes. Added a new metric, Total Execution Time, to the Dataflow monitoring interface. Removed the Aggregators API. Note that the Metrics API offers similar functionality. Removed usage of non-annotation based DoFns. Improved several stability and performance issues. 0.5.1 (January 27, 2017) This release includes breaking changes. Added Metrics API support to the DirectRunner. Added source and sink implementations for TFRecordsIO. Added support for annotation-based DoFns. Autoscaling will be enabled by default for jobs that run using the Dataflow service, unless autoscaling_algorithm argument is explicitly set to NONE. Renamed PTransform.apply to PTransform.expand. Renamed apache_beam.utils.options to apache_beam.utils.pipeline_options. Several changes to pipeline options: job_name optionis now optional and defaults to: beamapp-username-date(mmddhhmmss)-microseconds. temp_locationis now a required option. staging_locationis now optional and defaults to the value of temp_location option. teardown_policy, disk_source_image, no_save_main_session, pipeline_type_checkoptions are removed. machine_typeand disk_typeoption aliases have been removed. Renamed DataflowPipelineRunner to DataflowRunner. Renamed DirectPipelineRunner to DirectRunner. DirectPipelineRunner is no longer blocking. To block until pipeline completion, use the wait_until_finish() method of the PipelineResult object, returned from the run() method of the runner, to block until pipeline completion. BlockingDataflowPipelineRunner is now deprecated and will be removed in a future release. 0.4.4 (December 13, 2016) Added support for optionally using BigQuery standard SQL. Added support for display data. Updated DirectPipelineRunner to support bundle based execution. Renamed the --profile flag to --profile_cpu. Windowed side inputs are now correctly supported. Improved service account-based authentication. (Note: Using the --service_account_key_file command line option requires installation of pyOpenSSL.) Improved several stability and performance issues. 0.4.3 (October 17, 2016) Fixes package requirements. 0.4.2 (September 28, 2016) Installations of this version on or after October 14 2016 are picking up a newer version of oauth2client, which contains breaking changes. Note that Cloud ML SDK is not affected. Workaround: Run pip install google-cloud-dataflow oauth2client==3.0.0. Improved performance and stability of I/O operations. Fixed several minor bugs. 0.4.1 (September 1, 2016) Allow TopCombineFn to take a key argument instead of a comparator. Improved performance and stability issues in gcsio. Improved various performance issues. 0.4.0 (July 27, 2016) This is the first beta release and includes breaking changes. Renamed the google.cloud.dataflow package to apache_beam. Updated the main session to no longer be saved by default. Use the save_main_session pipeline option to save the main session. Announced a test framework for custom sources. Announced filebasedsource, a new module that provides a framework for creating sources for new file types. Announced AvroSource, a new SDK-provided source that reads Avro files. Announced support for zlib and DEFLATE compression. Announced support for the >> operator for labeling PTransforms. Announced support for size-estimation support to Python SDK Coders. Improved various performance issues. 0.2.7 The 0.2.7 release includes the following changes: - Introduces OperationCouters.should_samplefor sampling for size estimation. - Implements fixed sharding in TextFileSink. - Uses multiple file rename threads in finalize_writemethod. - Retries idempotent I/O operations on Cloud Storage timeout. 0.2.6 The 0.2.6 release includes the following changes: - Pipeline objects are now allowed to be used in Python with statements. - Fixed several bugs including module dictionary pickling and buffer overruns in fast OutputStream. 0.2.5 The 0.2.5 release includes the following changes: - Added support for creating custom sources and reading from them in pipelines executed using DirectRunnerand DataflowRunner. - Added DiskCachedPipelineRunneras a disk-backed alternative to DirectRunner. - Changed how undeclared side outputs of DoFns in cloud executor are handled; they are now ignored. - Fixed pickling issue when the Seaborn package is loaded. - Text files output sink can now write gzipcompressed files. 0.2.4 The 0.2.4 release includes the following changes: - Added support for large iterable side inputs. - Enabled support for all supported counter types. - Modified --requirements_filebehavior to cache packages locally. - Added support for non-native TextFileSink. 0.2.3 The 0.2.3 release includes the following fixes: - The google-apitoolsversion is no longer required to be pinned. - The oauth2clientversion is no longer required to be pinned. - Fixed import errors that were raised during installation of the latest gcloudpackage and the Dataflow SDK for the statement import google. - Fixed the code to raise the correct exception for failures in start and finish of DoFnmethods. 0.2.2 The 0.2.2 release includes the following changes: - Improved memory footprint for DirectPipelineRunner. - Fixed multiple bugs: - Fixed BigQuerySinkschema record field type handling. - Added clearer error messages for missing files. - Created a new example using more complex BigQuery schemas. - Improved several performance issues by: - Reducing debug logging - Compiling some files with Cython 0.2.1 The 0.2.1 release includes the following changes: - Optimized performance for the following features: - Logging - Shuffle Writing - Using Coders - Compiling some of the worker modules with Cython - Changed the default behavior for Cloud execution: Instead of downloading the SDK from a Cloud Storage bucket, you now download the SDK as a tarball from GitHub. When you run jobs using the Dataflow service, the SDK version used will match the version you've downloaded (to your local environment). You can use the --sdk_locationpipeline option to override this behavior and provide an explicit tarball location (Cloud Storage path or URL). - Fixed several pickling issues related to how Dataflow serializes user functions and data. - Fixed several worker lease expiration issues experienced when processing large datasets. - Improved validation to detect various common errors, such as access issues and invalid parameter combinations, much earlier in time.
https://cloud.google.com/dataflow/release-notes/release-notes-python?hl=en
CC-MAIN-2017-51
refinedweb
1,729
53.58
In Elixir and many functional languages, functions are first class citizens. We will learn about the types of functions in Elixir, what makes them different, and how to use them. Table of Contents Anonymous Functions Just as the name implies, an anonymous function has no name. As we saw in the Enum lesson, these are frequently passed to other functions. To define an anonymous function in Elixir we need the fn and end keywords. Within these we can define any number of parameters and function bodies separated by ->. Let’s look at a basic example: iex> sum = fn (a, b) -> a + b end iex> sum.(2, 3) 5 The & Shorthand Using anonymous functions is such a common practice in Elixir there is shorthand for doing so: iex> sum = &(&1 + &2) iex> sum.(2, 3) 5 As you probably already guessed, in the shorthand version our parameters are available to us as &1, &2, &3, and so on. Pattern Matching Pattern matching isn’t limited to just variables in Elixir, it can be applied to function signatures as we will see in this section. Elixir uses pattern matching to check through all possible match options and select the first matching option to run: iex> handle_result = fn ...> {:ok, result} -> IO.puts "Handling result..." ...> {:ok, _} -> IO.puts "This would be never run as previous will be matched beforehand." ...> {:error} -> IO.puts "An error has occurred!" ...> end iex> some_result = 1 1 iex> handle_result.({:ok, some_result}) Handling result... :ok iex> handle_result.({:error}) An error has occurred! Named Functions We can define functions with names so we can easily refer to them later. Named functions are defined within a module using the def keyword . We’ll learn more about Modules in the next lessons, for now we’ll focus on the named functions alone. Functions defined within a module are available to other modules for use. This is a particularly useful building block in Elixir: defmodule Greeter do def hello(name) do "Hello, " <> name end end iex> Greeter.hello("Sean") "Hello, Sean" If our function body only spans one line, we can shorten it further with do:: defmodule Greeter do def hello(name), do: "Hello, " <> name end Armed with our knowledge of pattern matching, let’s explore recursion using named functions: defmodule Length do def of([]), do: 0 def of([_ | tail]), do: 1 + of(tail) end iex> Length.of [] 0 iex> Length.of [1, 2, 3] 3 Function Naming and Arity We mentioned earlier that functions are named by the combination of given name and arity (number of arguments). This means you can do things like this: defmodule Greeter2 do def hello(), do: "Hello, anonymous person!" # hello/0 def hello(name), do: "Hello, " <> name # hello/1 def hello(name1, name2), do: "Hello, #{name1} and #{name2}" # hello/2 end iex> Greeter2.hello() "Hello, anonymous person!" iex> Greeter2.hello("Fred") "Hello, Fred" iex> Greeter2.hello("Fred", "Jane") "Hello, Fred and Jane" We’ve listed the function names in comments above. The first implementation takes no arguments, so it is known as hello/0; the second takes one argument so it is known as hello/1, and so on. Unlike function overloads in some other languages, these are thought of as different functions from each other. (Pattern matching, described just a moment ago, applies only when multiple definitions are provided for function definitions with the same number of arguments.) Functions and Pattern Matching: "Taupe" ...> }: "Taupe"}) ** (FunctionClauseError) no function clause matching in Greeter1.hello/1 The following arguments were given to Greeter1.hello/1: # 1 %{age: "95", favorite_color: "Taupe"} iex:12: Greeter: "Taupe" ...> } Greeter1.hello/1 expects an argument like this: %{name: person_name} In Greeter1.hello/1, the map we pass ( fred) is evaluated against our argument ( %{name: person_name}): %{name: person_name} = %{name: "Fred", age: "95", favorite_color: "Taupe"}: "Taupe"} Now, person has been evaluated and bound to the entire fred-map. We move on to the next pattern-match: %{name: person_name} = %{name: "Fred", age: "95", favorite_color: "Taupe"} Now this is the same as our original Greeter1 function where we pattern matched the map and only retained Fred’s name. What we’ve achieved is two variables we can use instead of one: person, referring to %{name: "Fred", age: "95", favorite_color: "Taupe"} person_name, referring to "Fred" So now when we call Greeter2.hello/1, we can use all of Fred’s information: # call with entire person ...> Greeter2.hello(fred) "Hello, Fred" %{age: "95", favorite_color: "Taupe", name: "Fred"} # call with only the name key ...> Greeter2.hello(%{name: "Fred"}) "Hello, Fred" %{name: "Fred"} # call without the name key ...> Greeter2.hello(%{age: "95", favorite_color: "Taupe"}) ** (FunctionClauseError) no function clause matching in Greeter2.hello/1 The following arguments were given to Greeter2.hello/1: # 1 %{age: "95", favorite_color: "Taupe"}: "Taupe",. Private Functions When we don’t want other modules accessing a specific function we can make the function private. Private functions can only be called from within their own Module. We define them in Elixir with defp: defmodule Greeter do def hello(name), do: phrase() <> name defp phrase, do: "Hello, " end iex> Greeter.hello("Sean") "Hello, Sean" iex> Greeter.phrase ** (UndefinedFunctionError) function Greeter.phrase/0 is undefined or private Greeter.phrase() Guards We briefly covered guards in the Control Structures lesson, now we’ll see how we can apply them to named functions. Once Elixir has matched a function any existing guards will be tested. In the following example we have two functions with the same signature, we rely on guards to determine which to use based on the argument’s type: defmodule Greeter do def hello(names) when is_list(names) do names |> Enum.join(", ") |> hello end def hello(name) when is_binary(name) do phrase() <> name end defp phrase, do: "Hello, " end iex> Greeter.hello ["Sean", "Steve"] "Hello, Sean, Steve" Default Arguments If we want a default value for an argument we use the argument \\ value syntax: defmodule Greeter do def hello(name, language_code \\ "en") do phrase(language_code) <> name end defp phrase("en"), do: "Hello, " defp phrase("es"), do: "Hola, " end iex> Greeter.hello("Sean", "en") "Hello, Sean" iex> Greeter.hello("Sean") "Hello, Sean" iex> Greeter.hello("Sean", "es") "Hola, Sean" When we combine our guard example with default arguments, we run into an issue. Let’s see what that might look like: defmodule Greeter do def hello(names, language_code \\ "en") when is_list(names) do names |> Enum.join(", ") |> hello(language_code) end def hello(name, language_code \\ "en") when is_binary(name) do phrase(language_code) <> name end defp phrase("en"), do: "Hello, " defp phrase("es"), do: "Hola, " end ** (CompileError) iex:31: definitions with multiple clauses and default values require a header. Instead of: def foo(:first_clause, b \\ :default) do ... end def foo(:second_clause, b) do ... end one should write: def foo(a, b \\ :default) def foo(:first_clause, b) do ... end def foo(:second_clause, b) do ... end def hello/2 has multiple clauses and defines defaults in one or more clauses iex:31: (module) Elixir doesn’t like default arguments in multiple matching functions, it can be confusing. To handle this we add a function head with our default arguments: defmodule Greeter do def hello(names, language_code \\ "en") def hello(names, language_code) when is_list(names) do names |> Enum.join(", ") |> hello(language_code) end def hello(name, language_code) when is_binary(name) do phrase(language_code) <> name end defp phrase("en"), do: "Hello, " defp phrase("es"), do: "Hola, " end iex> Greeter.hello ["Sean", "Steve"] "Hello, Sean, Steve" iex> Greeter.hello ["Sean", "Steve"], "es" "Hola, Sean, Steve" Caught a mistake or want to contribute to the lesson? Edit this page on GitHub!
https://elixirschool.com/en/lessons/basics/functions/
CC-MAIN-2021-25
refinedweb
1,249
64.81
//Tom Nanke //CIS150-001 //10/30/07 //Program 3 //This program plays the game of pig. This is a game in which two players take turns rolling a six-sided die and the first player //to reach 100 points wins. In this game, however, the two players are a human player and a computer player. The //human player takes the first roll, and if he/she rolls from 2-6, he/she can choose to roll again or hold. If he/she decides to //hold, then the sum of all the rolls from the current turn is stored into his/her total score of the game. If a 1 is rolled, //however, the user's turn ends and no new points are added to his/her total game score. Once the user's turn is over, either //because of a hold or a 1 is rolled, then it becomes the computer's turn. To start, the computer keeps rolling a die until it //either rolls a 1 or gets a total sum of 20 or more, in which case it then holds. Then it would once again become the user's //turn, and after his/her turn, the computer keeps trying to roll until it reaches a 100, going back to last held value if it //rolls a one. The expected input is the user's selection of whether to roll or hold. The expected output is the total scores //at each turn, and then the winner of the game. #include <iostream> #include <ctime> //Here I include both ctime and time.h so that I get generate different random numbers for the human #include <time.h> //roll and for the computer roll. using namespace std; int humanTurn(int &humanTotalScore); //function prototype: This function calculates the human's score for a single turn. //pre-cond: The game has started. //post-cond: Returns a value for the total turn score to be added to the human's total game score. //The input parameter is the total game score for the human. int computerTurn(int &computerTotalScore); //function prototype: This function calculates the computer's score for a single turn. //pre-cond: The human has already rolled and either held or rolled a 1. View Full Document This preview has intentionally blurred sections. - Fall '07 - L.TSUI - Binary numeral system, Roll Click to edit the document details
https://www.coursehero.com/file/5871912/program3/
CC-MAIN-2017-51
refinedweb
393
72.26
Entering edit mode 3 months ago blackadder • 0 Hello all, When I am executing my snakemake pipeline I am trying to pass command line arguments with the --config flag in a python script. rule assembly: input: "trim/{sample}_1.fastq.gz", "trim/{sample}_2.fastq.gz", output: "assembly/{sample}.fa" params: fastq1 = "trim/{sample}_1.fastq.gz", fastq2 = "trim/{sample}_2.fastq.gz", conda: "environment.yaml" script: "assembly.py" The Python script import argparse parser = argparse.ArgumentParser() parser.add_argument('--config', type=str, required=True) args = parser.parse_args() if args.config["assembly"] == "megahit": shell ("megahit .........") elif config["assembly"] == "metaspades": shell ("metaspades.py ........") I am executing the pipeline like this: snakemake --cores 5 --config assembly="megahit" or snakemake --cores 5 --config assembly="metaspades" When I do that I am getting the following error: error: the following arguments are required: --config - Is there a way to make this work? - Am I completely wrong by choosing this option? (I did because when I used the snakemake run directive I was getting the "conda environments are only allowed with shell script notebook or wrapper directives (not with run)" error. Thanking you in advance! i don't think you need to add the addparse stuff, just use a generic configfile with a default value and your inline arguments should work So, you suggest instead of --config use the --configfile and specify the different params in this one? all this is unnecessary: When i have the following code in the script : I am getting the following error: and I execute my pipeline like this configwon't exist unless you have a configfile. do you have one? May be gb 's answer to your earlier post is useful to you: Snakemake command line arguments. By using the run directive, you cannot use conda environments... I need to load my programs from conda You could do something like (unstested and quickly typed so only for inspiration): And in the script: But I personally prefer something you are already doing: and The above is just to push you in the right direction. If you can't figure it out I can make a working example. Also the shellline in the python script looks weird but for now I assume it is just a example or not real code. Hello! Thank you for the reply. Yes, the shell directive is an example and not the real code. I will try your suggestion and I will get back to you.
https://www.biostars.org/p/9506141/#9506361
CC-MAIN-2022-21
refinedweb
408
57.47
11.5: Escape Character - Page ID - 3195 Since we use special characters in regular expressions to match the beginning or end of a line or specify wild cards, we need a way to indicate that these characters are "normal" and we want to match the actual character such as a dollar sign or caret. We can indicate that we want to simply match a character by prefixing that character with a backslash. For example, we can find money amounts with the following regular expression. import re x = 'We just received $10.00 for cookies.' y = re.findall('\$[0-9.]+',x) Since we prefix the dollar sign with a backslash, it actually matches the dollar sign in the input string instead of matching the "end of line", and the rest of the regular expression matches one or more digits or the period character. Note: Inside square brackets, characters are not "special". So when we say "[0-9.]", it really means digits or a period. Outside of square brackets, a period is the "wild-card" character and matches any character. Inside square brackets, the period is a period.
https://eng.libretexts.org/Bookshelves/Computer_Science/Book%3A_Python_for_Everybody_(Severance)/11%3A_Regular_Expressions/11.05%3A_Escape_character
CC-MAIN-2019-43
refinedweb
186
63.49
ddi_mmap_get_model - return data model type of current thread #include <sys/ddi.h> #include <sys/sunddi.h> uint_t ddi_mmap_get_model(void); Solaris DDI specific (Solaris DDI). ddi_mmap_get_model() returns the C Language Type Model which the current thread expects. ddi_mmap_get_model() is used in combination with ddi_model_convert_from(9F) in the mmap(9E) driver entry point to determine whether there is a data model mismatch between the current thread and the device driver. The device driver might have to adjust the shape of data structures before exporting them to a user thread which supports a different data model. Current thread expects 32-bit (ILP32) semantics. Current thread expects 64-bit (LP64) semantics. The ddi_mmap_get_model() function was not called from the mmap(9E) entry point. The ddi_mmap_get_model() function can only be called from the mmap(9E) driver entry point. The following is an example of the mmap(9E) entry point and how to support 32-bit and 64-bit applications with the same device driver. struct data32 { int len; caddr32_t addr; }; struct data { int len; caddr_t addr; }; xxmmap(dev_t dev, off_t off, int prot) { struct data dtc; /* a local copy for clash resolution */ struct data *dp = (struct data *)shared_area; switch (ddi_model_convert_from(ddi_mmap_get_model())) { case DDI_MODEL_ILP32: { struct data32 *da32p; da32p = (struct data32 *)shared_area; dp = &dtc; dp->len = da32p->len; dp->address = da32->address; break; } case DDI_MODEL_NONE: break; } /* continues along using dp */ … } mmap(9E), ddi_model_convert_from(9F) Writing Device Drivers for Oracle Solaris 11.3
https://docs.oracle.com/cd/E86824_01/html/E54779/ddi-mmap-get-model-9f.html
CC-MAIN-2021-25
refinedweb
235
53.51
This article is aimed at people who are new to the world of Raspberry Pi [Like me]. It gives an idea about what Raspberry Pi and Raspbian are and what are the uses of those. It also gives a simple guide to setup your first Pi and its OS, play around with a hello world program and set you on your path to build an army of Robots which will take over the world someday. Below are the topics covered. Article about the IoT overview linked below may serve as a good introduction to what the IoT is and its uses. Internet of things - Overview *Let’s start Raspberry Pi as just Pi to save some real estate :). One line answer to the about question would be, “Pi is a single-board computer”. Pi is a small scale computer in the size little bigger than a credit card, it packs enough power to run games, word processor like open office, image editor like Gimp and any program of similar magnitude. Pi was introduced as an educational gadget to be used for protyping by hobbyists and for those who want to learn more about programming. It certainly cannot be a substitute for our day to day Linux, Mac or Windows PC. Pi is based on a Broadcom SoC (System of Chip) with an ARM processor [~700 MHz], a GPU and 256 to 512 MB RAM. The boot media is an SD card [which is not included], and the SD card can also be used for persist data. Now that you know that the RAM and processing power are not nearly close to the power house machines you might have at home, these Pi’s can be used as a Cheap computer for some basic functions, especially for experiments and education. The Pi comes in three Configurations and we will discuss the specifications of those in the coming sections. The cost of a Pi is around $35 for a B Model and is available through many online and physical stores. Below are the basic things you would need to get started with using a Pi. Computer A Raspberry Pi Storage SD Card and a SD card reader to image the OS [These days laptops have inbuilt card readers] Power supply 5 volt micro USB adapter, mostly your android phone charger would work Display An TV/Monitor with DVI or HDMI port Display connector HDMI cable or HDMI to DVI converter cable Input USB Mouse USB Keyboard Network Ethernet cable Case If you really need one, you can get them online based on the model you have As discussed earlier the Pi comes in three configurations. Below is a table that gives the details about all three models namely A, B and B+. Description Model A Model B Model B+ Chip Broadcom BCM2835 (CPU, GPU, DSP, SDRAM, and single USB port) Processor 700 MHz ARM1176JZF-S core (ARM11 family, ARMv6 instruction set) RAM 256 MB 512 MB USB 1 (direct from BCM2835 chip) 2 on board 4 on board SD Card MicroSD card Voltage 600mA upto 1.2A @ 5V 750mA upto 1.2A @ 5V 600mA upto 1.8A @ 5V GPO 26 40 For our learning we shall choose Model B and explain things with regards to that model. In case if you are wondering why B and not B+, it’s because I only own a B and not a B+ :) Below are the ports on the Raspberry Pi board and some of their uses. The ports may also be used for other purposes than listed below. Mainly used for pheriperals like Keyboard, mouse and a Wifi Adapter. A powered USB hub can be connected and be expanded HDMI This is the High Definition Multimedia Interface [HDMI] and is use to connect to a Display unit like TV or Monitor or sometimes a projector Stereo Audio Audio connections using a 3.5 mm jack SD card is used as a boot device and also persistent storage. More stoage can be attached to the USB Micro USB The micro usb port is used for supplying power to the unit CSI Connector CSI [ Camera serial Interface] is used for connecting a camera to the unit Ethernet Used for connecting to a network using a network cable DSI Connector DSI [ Digital serial Interface] is used for connecting aLCD to the unit One other important pin is the GPIO. GPIO stands for General Purpose Input and Output. There are 26 Pins on a Model B in total. We shall build a simple circuit towards the end of this tutorial to understand more about how to use these GPIO. Before that let’s get our Pi setup with an Operating system. Below are some of the Operating systems that a Pi can run but in this article we will only learn about Raspian. Linux There are three official Linux flavors available for download namely Debian [Raspbian] *Recommended ArchLinux Pidora [Based on Fedora] RISC OS A retro looking 1080p GUI designed by the ARM designers. RISC was more common during the 90's Firefox OS A new OS by the Firefox team. Pretty much a combination of Firefox and PTXdist-built Linux Plan 9 Unix like OS by the by the Bell Labs, created by the UNIX creators Android No explanation necessary, but this hasn't gone beyond a 2.3 build and a bit too slow. browsing, python programming and a GUI desktop. The Raspian desktop environment is known as the “Lightweight X11 Desktop Environment” or in short LXDE. This has a fairly attractive user interface that is built using the X Window System software and is a familiar point and click interface. We shall look more into how to install and use this OS in the next section. Let’s first connect the board with all the necessary accessories to install and run an operating system. Step 1: Take the Pi out of its anti static cover and place it on the non-metal table. Step 2: Connect the display – Connect the HDMI cable to the HDMI port on the Pi and the other end of the HDMI cable to the HDMI port of the TV. Step 3: Connect your Ethernet cable from the Router to the Ethernet port on the Pi Step 4: Connect your USB mouse to one of the USB ports on the Pi Step 5: Connect your USB Keyboard to the other USB port on the Pi Step 6: Connect the micro USB charger to the Pi but don’t connect it to the power supply yet Step 7: Flash the SD Card with the Raspian OS. The Image Writer for Windows is used in place of dd which designed specifically for creating USB or SDcard images of Linux distributions, it features a simple graphical user interface that makes the creation of a Raspberry Pi SDcard straight forward. Download the latest version of Image Writer for Windows from the website:. Below are the steps. i. Download the binary (not source) Image Writer for Windows Zip file, and extract it to a folder on your computer. ii. Plug your blank SDcard into a card reader connected to the PC. iii. Double-click the Win32DiskImager.exe file to open the program, and click the blue folder icon to open a file browse dialogue box. iv. Browse to the imagefilename.img file you extracted from the distribution archive, replacing imagefilename.img with the actual name of the file extracted from the Zip archive, and then click the Open button. v. Select the drive letter corresponding to the SDcard from the Device drop-down dialogue box. If you’re unsure which drive letter to choose, open MyComputer or Windows Explorer to check. vi. Click the Write button to flash the image file to the SDcard. style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> You are now all set with the OS installed and your Pi up and running. * I am using Qemu emulator to get screen shots but the real config has all the below options plus some more like camera config. If you need to enter the config menu of the Pi, you will need to use the sudo raspi-config command. This config appears on the first boot of your Pi as well. Below are some options available and their quick explanations. style="width: 600px; height: 294px" data-src="" class="lazyload" data-sizes="auto" data-> # Option 1 info Short paragraph of Information on the tools and its usage. 2 expand_rootfs Only the OS and disk formatting is copied on to the SD card when flashed and it will show up as if you dont have anymore space on the card to use. Running expand_rootfs will allocate all the space available on the card to the OS so that it can be utilised. Just enter this command and it will do the job, on the next boot you shall see the newly allocated capacity. 3 overscan This is to specify what width of a border should be visible around the screen image which is used to accomodate for the images that spill over. If you disable overscan then the whole real estate of the screen is used. 4 configure_keyboard This helps configuring your key board, once executed we see the option to select a keyboard. There generic keyboard options and also a lot of specific ones. Then you can select the keyboard layout and Compose key. 5 change_pass Helps with changing your password 6 change_locale Language selection and associated character set selection. Usually the default works if you need English language 7 change_timezone Your Raspberry Pi detects the time from the Internet when you switch it on, but you’ll need to tell it what time zone you’re in when you first set it up. 8 memory_split change from here:: *********. 9. 10 ssh SSH is a way of setting up a secure connection between computers, usually so you can control one computer from another computer. Unless you know you need to use it, you can ignore this setting. 11 boot_behaviour You can use this setting to make your Raspberry Pi go straight into the desktop environment when you switch it on. 12 update Use this setting to install an update to Raspi-config if one is available. You need to have a working Internet connection to use this. Below are some options I used on my first boot. Step 1 : expand_rootfs. Choose or highlight the “expand_rootfs” option and press enter. Use your keyboard arrow keys to select the option. Once you press enter you will get a confirmation screen like below, press enter again and you will be taken back to the main config screen. style="width: 450px; height: 277px" data-src="" class="lazyload" data-sizes="auto" data-> Step2: overscan : You can either enable or disable overscan here. I am disabling it so my screen fills up completely. style="width: 416px; height: 278px" data-src="" class="lazyload" data-sizes="auto" data-> Step 3: configure_keyboard: Since the initial setting gives me the UK keyboard format, I will change the setting to use the generic international 105 with English(US) option. Select "configure_keyboard" and press Enter. style="width: 600px; height: 84px" data-src="" class="lazyload" data-sizes="auto" data-> Once the keyboard type is set, you'll need to specify the layout. There's a good chance you want a different layout than English (UK) like me, so choose "Other" and select an appropriate option for you. I am sticking with English (US). style="width: 417px; height: 124px" data-src="" class="lazyload" data-sizes="auto" data-> After this just choose the default modifier keys when asked, as well as "No compose key" on the next screen. If later you find you need a compose key to create alternative characters, you can return to this configuration screen by running "raspi-config". Step 4: Change password. Select change_pass and the rest are self-explanatory steps. I think it’s a good idea to change the generic password :) style="width: 556px; height: 284px" data-src="" class="lazyload" data-sizes="auto" data-> Step 5: Setting up Locale: If you live in the USA then you want en_US.UTF-8 else it depends on where you live. Scroll down to your locale you need, and de-select the en_GB [Great Britian] option on your way. In our case, we'll be enabling en_US.UTF-8 The next dialogue window will ask you to choose a default locale, select the locale you just chose on the previous screen and press Enter. Step 6: Setup Timezone: Select the "change_timezone" option. You'll be presented with a list of regions first. style="width: 500px; height: 290px" data-src="" class="lazyload" data-sizes="auto" data-> The next dialogue will show you a list of zones within that region. I am sure you can handle it from here J Step 7: Finish Go back to the main menu and select Finish. This should take you back to the original boot screen and you can type startx from there to enter the LXDE environment or the GUI. As discussed briefly earlier LXDE is the Graphical user interface for the Raspian Operating system. Now let’s quickly look at what are the tools and functions included with the LXDE. The GUI doesn’t load by default and you will need to enter the startx command to launch the GUI. The LXDE comes preinstalled with a plenty of software to get started. Below is a list of some of the available softwares for the Pi, the list is not exhaustive and you are free to explore the functions that are not listed here. The list below only serves as an introduction to the LXDE software packages. The LXDE softwares are divided into categories as below: style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> As most of you already know a File System is the chosen data structure and access methods used by any operating system to store, organize and access files. File System is also used to denote the partitions on the disk that is being used on a machine. Every filesystem has its own method of storing files onto a storage medium [like a hard disk or SD card]. Without these representations which are recognized it would be hard to share file between systems and people. Logical layout The Linux way of storing and organizing files is a bit different than that of Windows. Unlike Windows where everything is stored under drives and each drive having a letter, in Linux everything is grouped under a branch under the root file system. To take a look at what these branches are, open the Terminal on your pi and type the below command. ls / style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> As you can see in the above gif image, there are various directories. Some of those directories on the SD Card for persisting files and others are the virtual directories for accessing different portions of the operating system or the hardware. Below are the list of what you see and a short description. Directory boot Folder for Linux Kernel and other packages necessary to booth/start the Pi bin The Raspian related binary files including those for required to run the GUI of the OS dev This is one of those virtual directories and this is used for accessing all the connected devices including the storage etc Misc config files like encrypted passwords are put in this This is like My documents and each username will get a separate directory under this lib Lib= libraries, code required by various applications lost+found Dump of pieces of files are stored here when the system crashes media Dir for removable storage dives like USB and CD mnt Used to mount external hard drives and similar storage devices manually opt Optional software directory, any apps that are not part of the OS will go here proc Another Virtual directory, this contains info about running processes (programs) selinux Security utilities originally developed by our beloved NSA and these are related to Security Enhanced Linux sbin System maintenance binaries typically used by the root/superuser sys Operating system files tmp Temporary files usr This is used as storage for user accessible programs var Virtual directory that a program can use to persist data Physical Layout Even though we see all those directories above, if you look at the SD card contents you will see a completely different structure. For the Raspbian the card will be organized into two sections or partitions. The first partition is a smaller with around 75 MB and is formatted as VFAT, this is the same format used by Microsoft for the removable drives. This partition is mounted and is accessible by Linux in the /boot directory and this is the folder in which all the files used to configure the Pi and run the OS are stored. The second partition a larger one and formatted as EXT4, EXT4 is one of the native Linux Filesystems and is considered for its data safety and high speed. This contains the main chunck of the distribution and the programs and user files. Even though the packaged software that comes with the distribution is enough to start with we will always need to install new software on the operating system for new projects or work. Installing softwares on the Pi is pretty straight forward and simple process. The tool called apt will help us do these tasks and is the default package manager for the Raspbian. Softwares are addressed as packages in Linux world and hence lets start using the term. Packages are nothing but programs or a collection of programs designed to work together to perform a task. apt tool can be executed through the command line on the Terminal and there are also GUI based package managers like Synaptic Package Manager built on the apt tool which you could use. For now lets use the command line version as Synaptic eats up a lot of memory. There are three things about software installation, namely Finding a software, installing it and uninstalling it. Lets look at each of them below. The first step in installing any package would be to find the package by its name. Usually we should search the cache for the available packages and chances are that we find the one we need in there. The Cache is a list of all available packages that can be installed via apt, and are stored on repositories [internet servers] The apt tool includes a utility for managing this cache called, apt-cache. Using this tool, we could search for a software with a word or phrase. For example to find all available package managers we can type the below: apt-cache search package manager Command is basically telling the apt-cache tool to search for all the packages which has “Package manager” in their title or description. Usually we get a lot of records back and hence it’s better to be as specific as possible. Below is what happens once you execute the command. style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> As you can see I was not specific enough and ended up with a big list. :) Once you have chosen the name of the package to be installed, we need to use the apt-get command to install it. Installing is only allowed on a root user and hence the command needs to be run with a sudo prefix to let the OS know that this command is run as a root user. Let’s chose the package thrust and install the same. Type the below command sudo apt-get install thrust Below is what happens. style="height: 453px; width: 600px" data-src="" class="lazyload" data-sizes="auto" data-> Once you have decided to remove a package from the system you use the same apt-get command with remove argument to uninstall a software. Below is the command and the image of how it works. Sudo apt-get remove thrust style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> Remove also has a boss named purge. The remove command leaves the config files in the system where as purge would wipe out everything related to the package. Below is the command sudo apt-get purge thrust At times we will need to get the latest version for an already installed package. To do this apt-get uses the same install argument. If the package is already installed then it will treat a install command as upgrade and if the latest version is already installed then it will just let you know that the software is already up to date and exit. Below is the command. sudo apt-get install Also to upgrade all the softwares on the system at once you could use upgrade argument like below. sudo apt-get upgrade Apart from upgrading the packages if you need to get the updated tool of apt-get then use the Update argument like below. sudo apt-get update Below is the image showing all three commands above style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> As we already know Raspbian is preloaded with Python programming language and IDE for the same. I will quickly give a demo for a Hellow world program on Python. Learning Python is a whole different section which I will try covering on a different article. Python is also considered the official language of the Raspberry Pi. Learning any new computer language typically/traditionally starts with the Hello World program and I really wouldn’t be the one to break the tradition. Below are the steps to write and execute your first “Hello World” Pi-Thon program [pun intended ;) ] Step1: Click and open the IDLE 3[Phython 3 IDE] from the start menu Step2: Click on File menu and open a new Window Step3: Type the below code in your new window #my first Pi-thon program print (‘Hello World’) username = input("What is your name? ") print (‘Welcome to Pi world, ' + username + ' have a good one!') Step4: Click save from the file menu and give a name to the file Step5: Click run from the Run menu or press F5. That’s it. Below is a gif with the above steps demonstrated. style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> As we saw earlier the GIPO pings on the Pi are used to interface with other input and output devices. Let’s put together a quick example of switching a Green LED on when there are no new emails in your inbox and a RED led when there are new emails. We shall use Python as our programming language. To access the GPIO, python provides a library called RPi.GPIO, and this is preinstalled with the Raspian we are using. However, let’s import the library and get the latest version of the library just in case its out of date. Below is the command for the same. style="width: 600px; height: 453px" data-src="" class="lazyload" data-sizes="auto" data-> sudo apt-get install RPi.GPIO Now that we have got our GPIO library installed and lets also install feedparser library which will help parse the feed we get from gmail. sudo pip install feedparser Now let’s open a new widow on IDLE and start the below code, first line would be to import all the libraries we need namely import RPi.GPIO as GPIO, feedparser, time Now let assign the numbering system for our Pins which we will be using down below #assign numbering for the GPIO using BCM GPIO.setmode(GPIO.BCM) #assingn number for the GPIO using Board #GPIO.setmode(GPIO.BOARD) I have commented out the Board numbering mode above as I will be using the BCM numbering for the rest of the example. The difference between BCM (Broadcom SoC numbering) and BOARD mode is that the Board uses the Pins in the exact same way that they are laid out on the board whereas BCM has two different ways for two different versions. Below is a quick image from Meltwater’s Raspberry Pi Hardware to show the different BCM numberings between Revision 1 and 2. style="width: 600px; height: 366px" data-src="" class="lazyload" data-sizes="auto" data-> Let us now declare some variables that we shall be using in our program USERNAME = "YOUR_USERNAME" #just the part before the @ sign, add yours here PASSWORD = "YOUR_PASSWORD" NEWMAIL_OFFSET = 0 MAIL_CHECK_FREQ = 2 # check mail every 2 seconds The logic below will check the feed coming back from Gmail and see if the unread count is greater than 0 then it sets the GPIO GREEN to true, meaning light the GREEN LED. Else Light the RED LED. Since we have connected the Green LED to pin 18 [BCM number] on the Pi and Red LED to pin 23 the same has been set as intial values of those variables respectively. GREEN_LED = 18 RED_LED = 23 GPIO.setup(GREEN_LED, GPIO.OUT) GPIO.setup(RED_LED, GPIO.OUT) while True: newmails = int(feedparser.parse("https://" + USERNAME + ":" + PASSWORD +"@mail.google.com/gmail/feed/atom")["feed"]["fullcount"]) if newmails > NEWMAIL_OFFSET : print("You have", newmails, "new emails!") GPIO.output(GREEN_LED, True) GPIO.output(RED_LED, False) else: print("You have no new emails!") GPIO.output(GREEN_LED, False) GPIO.output(RED_LED, True) time.sleep(MAIL_CHECK_FREQ) Below is how we would connect the circuit. style="width: 600px; height: 450px" data-src="" class="lazyload" data-sizes="auto" data-> Once we run the code below is the result style="width: 600px; height: 473px" data-src="" class="lazyload" data-sizes="auto" data-> style="width: 541px; height: 308px" data-src="" class="lazyload" data-sizes="auto" data-> This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/script/Articles/ArticleVersion.aspx?aid=839230&av=1249177
CC-MAIN-2020-34
refinedweb
4,347
66.98
Somehow you have to get the data into Teensy. Your ADC will most likely use I2S, TDM or similar. All supported by audio board. Also check the FFT in the audio library for an example, if you write... Somehow you have to get the data into Teensy. Your ADC will most likely use I2S, TDM or similar. All supported by audio board. Also check the FFT in the audio library for an example, if you write... Is the audio library not good enough? you have to consider that FS needs blocking. E.g. to open a file FS must first read the directory (cache may be empty) AND this can not be done without waiting for result. Even when writing data... If the teensies are working, no need. If the uploading is not working, you may clean the HID devices (Human Interface Devices) Only these are used for uploading, not com devices @nefarious if you have not done yet, eliminate (uninstall) ALL unused HID and Com ports with the device manager (activate view -> show hidden devices) This ensures that you get a clean port... At least one optimist! There are two modes Teensloader_CLI put teensy into programming mode (at least the version I have) - changing baudrate when (USB) Serial is used - send some bytes when HID is used. SD.h uses SdFat So suggest to modify the SdFat config.h to meet your requirements Maybe you could also ask Bill Greiman to advice on convering SdFat into a basic one and on teensy? what's the output? (please use the CODE wrap) Could he not run a program on teensy that configures teensy as a slave? That's why my reference to the forum rule. please follow the Forum rule @luni, this one seems be a non teensy guy and this can even be empty, as long setup() and loop() are in a regular c/c++ file When you change the default values in config.h, you must also delete the config.txt file on disk. the values in the file override the default values. IOW, no need to change Parameters in config.h,... or change any options (CPU speed, optimisation, usb type, etc) after deleting the utility folder, you should force a complete recompile this can be done by, for example, changing the CPU speed. yes let's continue in other thread. I guess you downloaded from github zip file, extracted into Arduino folder and compiled with TD? Could you please tell me what TeensyDuino version you are using? BTW you are compiling microSoundRecorder and not SimpleAudioLogger maybe avoiding using namespace std; and use directly std::sort could avoid side effects of <algorithm> but without source code difficult to say Edit: just say latest reply BTW, I just playing with IIR implementation for Teensy you may consider the CMSIS DSP functions. but for understanding I converted a cascaded Biquad to Matlab ic=3; % number of biquads %... My understanding is that Type II can easily be unstable: you first have the AR and when the dynamic range of the computer is not sufficient (64 bit may not be sufficient) then solution oscillates... with the following coefficients const double a[] = { 1, -9.544621361945, 41.00487909342, -104.4173708231, 174.5348069734, -200.0948614402, 159.3409444366, ... it seems that the filter is NOT stable (most likely too high order) try following Matlab code a=[1.0, -9.7723, 42.9767, -112.0082, 191.5851, -224.7201, 183.0563, ... -102.2575, 37.4887,... which says odroid@odroid have also a look into teensyloader_cli, which part of the USB message translates to Teensy model for example the HID uses the capabilities description to infer the Teensy model This should be the correct interpretation. Are there examples where this does not happen? you are using DAC and not Audioboard (SGTL5000 + I2S) not sure if T4 has even DAC (output_dac shows no functionioality) If using spi disks then async write is rather easy to implement: change in SdFatConfig.h : #define SPI_DRIVER_SELECT 3 and create your own SPI interface as described in SdFatConfig.h follow the... can you not make a clearer picture of the Teensy? I guess this is a question to Paul: can you confirm that the unused pins on the audio boards have no internal connection? Reason: I would like to stack two audio boards and use unused pins to... AFAICT, The OP was talking about telling teensy to go int bootmode? if you have USB connected and program is running, there is no need to press program button to download now code. @Raymond_Decker Sorry, I have no experience with PCMCIA. However, if you want to get help from someone here on the forum, I suggest to create a new thread with better title and be specific... Bill, Paul, not sure if I divert too much, but could it be similar to the following observation: I fail to SPI 'mount' a 1 TB uSD on T4.1 if the disk is nearly full but I ' mount' the disks easily... @zlp yes, that sounds right Meanwhile I added a _I2S_SGTL5000 symbol to config.h and changed a little bit the main program to make use of, and changes for audio board a little bit easier @zlp there are two issues - typo: "audioshield.inputSelect(AUDIO_INPUT_LINEIN)" has "s" and not "S" (this results in error) - the calls to audioShield.xxxx must be within setup() )
https://forum.pjrc.com/search.php?s=e6a7fcdaaa33106242f82cfe583da4b0&searchid=7299852
CC-MAIN-2022-05
refinedweb
895
71.95
JavaScript is a language which is full of surprises: its type coercion has been the subject of comedy, and one of it’s most well known books is titled JavaScript: The good parts. But of all of it’s eccentricities, the this keyword is likely the one which causes the most day-to-day confusion. While there are a number of articles about this, they generally take an exhaustive approach to explaining the many ways of using it. This is great for reference, but not so great if you just want to know how you should use this in your next project. That’s why in this tutorial, I explain only the three most common senses of this, by building a simplified version of a component taken from a real project – clubroom. In particular, I explain how this is used in the code which generates unique IDs for users and chat rooms. So without further ado, let’s get started! Say we’re building a chat app, and we want to produce unique, hexadecimal IDs for our users, starting our counter from 1000 so it looks like we’ve got a lot of users already. Let’s write the simplest possible code to accomplish this. var userCounter = 1000; function generateId() { // toString(16) converts a number to hexadecimal return (userCounter++).toString(16); } console.log(generateId()); // "3e8" console.log(generateId()); // "3e9" edit & run this javascript This works great when we only have to produce unique IDs for users, but what if we also want unique IDs for our rooms? We could just repeat this code twice, but this isn’t ideal – we’d inevitably make changes to one copy of the code and forget to make changes to the other. Instead, we could try to create a single function which works with multiple counters: var userIdCounter = 1000; var roomIdCounter = 1000; function generateId(counter) { return (counter++).toString(16); } console.log(generateId(userIdCounter)); console.log(generateId(userIdCounter)); console.log(generateId(roomIdCounter)); console.log(generateId(roomIdCounter)); edit & run this javascript But this won’t work, as the counter is a Number. Every time you pass a Number to a function, JavaScript passes a copy of the number instead of the original – preventing us from modifying the original. You can check this in the linked jsbin. We can still accomplish our goal with a similar method, though – we just need to find a way to pass the actual number into the function instead of a copy. And as luck may have it, JavaScript objects allow us to do this: var userIdGenerator = { counter: 1000 }; var roomIdGenerator = { counter: 1000 }; function generateId(generator) { return (generator.counter++).toString(16); } console.log(generateId(userIdGenerator)); console.log(generateId(userIdGenerator)); console.log(generateId(roomIdGenerator)); console.log(generateId(roomIdGenerator)); edit & run this javascript Having a single function which can produce a unique ID from any counter object is certainly a lot cleaner than needing a separate function for each counter, but there is still one thing we can improve. Currently, our two counter objects and our generateId function are accessible from any other javascript code on the same page. What if a 3rd-party script we include also happens to have a function called generateId? We could give our function a funny name to try prevent collisions, but even if this works, it makes our code a lot harder to read. Instead, why don’t we attach the function to the generator object itself? Let’s go back to an example with just the user id generator and give this a shot: var userIdGenerator = { counter: 1000, generate: function (idGenerator) { return (idGenerator.counter++).toString(16); } }; console.log(userIdGenerator.generate(userIdGenerator)); console.log(userIdGenerator.generate(userIdGenerator)); edit & run this javascript Great, we’ve managed to minimise the surface area where our code can collide with 3rd-party scripts! That said, it really doesn’t feel right passing userIdGenerator as a parameter to a function which is defined on that same object. And this is where the first sense of this comes in: 1. Within a function which was called as a property of an object, this points to that object. To put this in the context of the problem above, generate is a function which is called as a property of the object userIdGenerator. Or, in the lingo, generate is a method of userIdGenerator. This means that within the generate function, this will point to userIdGenerator. Knowing this, can you rewrite the generate function without the parameter? Once you’re done, touch or hover over the box below for an answer. var userIdGenerator = { counter: 1000, generate: function () { return (this.counter++).toString(16); } }; console.log(userIdGenerator.generate()); console.log(userIdGenerator.generate()); edit & run this javascript You can think of this as an invisible argument which is passed to every method, and is defined automatically by the browser to refer to the object which the method was called from. One of the interesting things about this is that it doesn’t matter how or where the function is defined – this will still be attached to whatever object the function was associated with when it was called. For example, while we wouldn’t do this in practice, it would be possible to accomplish the same thing as above this way: function generate() { return (this.counter++).toString(16); } var userIdGenerator = { counter: 1000, generate: generate }; console.log(userIdGenerator.generate()); console.log(userIdGenerator.generate()); console.log(generate()); console.log(generate()); edit & run this javascript Watch out though – the same flexibility which allows you to attach a function to an object and then call it as a method can also trip you up. This is demonstrated by the final calls to generate() above – did you notice that even though they are not associated with userIdGenerator, and thus did not increment userIdGenerator.counter, they still didn’t throw an error? In fact, generate did actually run counter++ on something, but it probably isn’t what you expect. Do you know what this pointed to inside the second call to generate()? Once you’ve thought about it for a bit, touch or hover over the box below for an answer: In a web browser, this will point to window, as the browser runs functions which not attached to anything as if they are methods of window. Incidentally, this is why you can write both setTimeout or window.setTimeout – if you leave out the window., the browser adds it back in for you. Now, back to our task of creating unique IDs for users and rooms without exposing the generate function to name conflicts. Given what we now know, we can do this: var userIdGenerator = { counter: 1000, generate: function () { return (this.counter++).toString(16); } }; var roomIdGenerator = { counter: 1000, generate: userIdGenerator.generate }; console.log(userIdGenerator.generate()); console.log(userIdGenerator.generate()); console.log(roomIdGenerator.generate()); console.log(roomIdGenerator.generate()); edit & run this javascript To avoid conflicts with other libraries, we’ve defined the generate function directly on userIdGenerator, and then to avoid repeating ourself, we’ve assigned the same function to roomIdGenerator.generate. Since this is set based on whatever is on the left hand side of the ., we’ve achieved our goal – albeit a little messily. How could we improve this? Well, it would certainly be nicer if the definition of each object looked the same, instead of having the generate method defined on the first object and then assigned to subsequent ones. It also wouldn’t hurt if we could create more generator objects without needing to repeat ourselves by defining the counter and generate properties for each one. Happily, we can do this with the new keyword, which provides us with our second sense for this: 2. Inside a function called with the new operator, this points to a new, empty object The javascript new operator creates a new object, i.e. {}. It then defines this inside the called function to refer to the new object. Finally, once the function has completed, new will return the newly created object (unless you manually return a separate value within the function – which you should avoid, unless you’re a masochist). What is the new operator useful for? Well, it allows us to create functions which set up these new objects in a certain way. In fact, these “setter upper” functions have a special name – constructors, and the new objects which they create are called instances. For example, we could use a constructor to set up instances of our “generator” objects from the previous examples. Can you write a constructor by filling out the inside of idGenerator below? Aim to make the script outputs the numbers 3e8, 3e9, 3e8, 3e9, like our previous scripts – disregard the final true/ false for the moment. Test your work in the linked jsbin, and if you get stuck, view an answer by touching or hovering over the below box. function idGener this.counter = 1000; this.generate = function() { return (this.counter++).toString(16); }; We’re getting pretty close to our final implementation of idGenerator. In fact, the above answer actually produces the desired behaviour without any chance of bugging out. But if the last line your code outputs false, as it does in the above answer, there is one thing left to do. It may surprise you to know that the issue in the above code did not exist in our previous answer (the one where we defined the generate function on the userIdGenerator object and then assigned it to shopIdGenerator.generate afterwards). Do you know what the problem is? Think about it for a bit, then touch or hover over the box below for an answer. By calling new idGenerator() twice, we’ve created two identical generate functions, despite one being enough. This causes userIdGenerator.generate == roomIdGenerator.generate to be false. This causes us to use two times the required memory. While two copies of one function aren’t going to break the memory bank, this is a bad habit to get into. For instance, if we’re calling a constructor thousands of times, each time creating 5 functions (e.g. by constructing an instance for each line in our chat), we’re going to be in trouble. We could fix this by defining generate as a property of the idGenerator function (remembering that functions are objects too), and then assigning it to each of the produced objects: function idGenerator() { this.counter = 1000; this.generate = idGenerator.generate; } idGenerator userIdGenerator.generate and roomIdGenerator.generate are now equal! We could leave it at this, but JavaScript has an even better way of solving this – using a property of a function called it’s prototype. What is a prototype? It is an object which can be found under the prototype property of every constructor function, defaulting to {}. While you can use it like any normal object, JavaScript has a special behaviour which makes it especially useful: if you try and access a method in an instance which doesn’t exist, JavaScript will then search the prototype of that instance’s constructor. It may help to think of a constructor’s prototype as a shared library of default methods which its instances can use. This leads us to the third sense of this: 3. When calling a method of an object which is not defined on that object but is defined on it’s prototype, this still points to that object. Grokking everything there is to know about prototypes takes a lot of work, but most people only need to know the basics. In fact, for this problem, all you need to know can be demonstrated with a simple example: function idGenerator() { this.counter = 1000; } idGenerator.prototype Take a few moments to try and understand this before moving on, then check that your understanding matches mine: As generate is attached to the prototype of the idGenerator function, you can still access it through each of the objects produced by new idGenerator – and this will still refer to the newly created objects inside of generate. It is important that you understand this, but once you do, you can just use this simple rule of thumb for both prototype methods and the standard ones we discussed earlier: When a function is called as a method (i.e. on the right hand side of a . character), this will by default refer to whatever is on the left hand side of the dot. You may have noticed the words “by default” and started worrying a little – does this mean that this might refer to something else? Indeed it does, but you don’t have to worry! this will only take on a different behaviour when you want it to. Learn how to control what this points to (and why you’d ever want to do so) in the next instalment of Demystifying JavaScript! I will send you useful articles, cheatsheets and code. Useful links Want to read more about this? You may find these helpful:
http://jamesknelson.com/demystifying-javascript-the-many-faces-of-this/
CC-MAIN-2017-34
refinedweb
2,145
53.1
Introduction In this tutorial we are going to learn how to use Pydantic together with Flask to perform validation of query parameters and request bodies. As we have already covered in the introductory tutorial about Pydantic, this library allows to define models that can be used for data deserialization and validation. A common use case where we receive external data that we cannot trust and that needs to be parsed to some model is when developing a web server. As such, we are going to learn how to use the Flask-Pydantic library, which allows us to use Pydantic models to perform the validation of query parameters and request bodies on Flask routes. Note that the usage of this library is not strictly necessary to combine Flask and Pydantic, since we can just access the request raw data and pass it to our Pydantic models. Nonetheless, this library offers a more elegant interface out of the box, as we will see below. If you use pip, you can install this library with the following command: pip install Flask-Pydantic The code from this tutorial was tested with Python v3.7.2, on Windows. The library versions used were the following: - Flask: 2.1.2 - Pydantic: 1.9.0 - Flask-Pydantic: 0.9.0 Defining a Pydantic class for query parameters We will start by the library imports. The first import will be the Flask class from the flask module, so we can create our application. After that we will import the BaseModel class from pydantic. This is the class that our pydantic models should extend. from flask import Flask from pydantic import BaseModel We will also import the validate decorator from the flask_pydantic module. We will later use this decorator in our route. from flask_pydantic import validate Now that we have done all our imports, we will take care of defining our pydantic model that will be responsible to validate the query parameters for our route. Although we are not going to implement any functionality in the route, let’s assume that our endpoint allows us to get a list of persons. Let’s also assume that we offer the possibility of filtering that list by two parameters: - first_name: a string containing a particular person first name that we want to search for - is_married: a Boolean that specifies if we want to bring married persons (true) or unmarried (false) Naturally, we are assuming that these filters will be passed as query parameters. As such, we will define a class called QueryParams (which inherits from the BaseModel). This class will have the fields mentioned above, with the respective type hints (str and bool). class QueryParams(BaseModel): first_name: str is_married: bool Next we will create our Flask app. app = Flask(__name__) After this we will declare our route. Let’s assume that this endpoint will correspond to the path “/person” and that it will answer to HTTP GET requests. With Flask only, we declare the route as this: @app.route("/person", methods=["GET"]) def get(): #implementation, access query params But, in our case, we want to validate that the query parameters that are passed to our request are valid per our pydantic model definition. One easy way could be accessing the request args parameter directly and then try to instantiate our QueryParams model. Nonetheless, Flask-Pydantic offers a more elegant way where we can apply the validate decorator to our route and declare a parameter called query in the route handling function. The type of this parameter must be our pydantic class QueryParams. @app.route("/person", methods=["GET"]) @validate() def get(query: QueryParams): #implementation In our route implementation we will simply return this query object back as a response. As output, later when testing the code, we should see the serialized JSON corresponding to this query object. @app.route("/person", methods=["GET"]) @validate() def get(query: QueryParams): return query To finalize, we will run our app with a call to the run method on our app object. The complete code is available below from flask import Flask from pydantic import BaseModel from flask_pydantic import validate class QueryParams(BaseModel): first_name: str is_married: bool app = Flask(__name__) @app.route("/person", methods=["GET"]) @validate() def get(query: QueryParams): return query app.run(host='0.0.0.0', port=8080) To test the code, simply open a web browser of your choice and type the following in the address bar: Note that we are passing the two query parameters that we have defined in our QueryParams class, so the request should be valid. As output we should get a result similar to figure 1. As can be seen, we have obtained the JSON representation of our QueryParams object. Next we will remove the query parameters: If we do so, we should get a result similar to figure 2. As can be seen, we get two errors indicating that both “first_name” and “is_married” are required fields. We will also try to add the query parameters back, but setting an invalid type (a string) to the Boolean parameter: As output we will get a result like the one shown in figure 3. In this case we get a type error, since the value we provided cannot be parsed to a Boolean. Making query param fields optional In the previous example we made a very strong assumption that both query parameters were required. As we could see in our tests, if we didn’t pass one of the fields (first_name and is_married), we would get an error. Nonetheless, it is very common that query parameters in an endpoint are optional, specially in an example such as the one we have built, where they represent some filtering criteria. As such, in this section, we will do a slight change to our code to make the fields optional. To start, we will import the Optional hint, additionally to all the imports we already had. from typing import Optional Then, we will add this type hint to both parameters of our model. The complete model is shown below. class QueryParams(BaseModel): first_name: Optional[str] is_married: Optional[bool] The rest of the code will stay the same. For completion, the full code can be seen below. from flask import Flask from pydantic import BaseModel from flask_pydantic import validate from typing import Optional class QueryParams(BaseModel): first_name: Optional[str] is_married: Optional[bool] app = Flask(__name__) @app.route("/person", methods=["GET"]) @validate() def get(query: QueryParams): return query app.run(host='0.0.0.0', port=8080) This time, if we access the endpoint from a web browser without setting any query parameters, we will no longer get an error. Figure 4 shows what happens in this case. As can be seen, we didn’t get an error and the object representing the query parameters had both fields set to null. Naturally we can pass one of the parameters and omit the other: Figure 5 exemplifies this use case. Note that type validation is still performed if we pass the query parameter. For example, if we set the is_married field to some value that is not a Boolean, we will still get an error: Figure 6 illustrates the mentioned error. Defining a Pydantic class for the request Body In this section we are going to cover how to define a class to desserialize and validate the request body. The approach will be very similar to what we have covered in the first code section for the query parameters. Our endpoint will answer to POST requests and receive a body payload representing a Person entity. Our imports will be the same we have seen before. from flask import Flask from pydantic import BaseModel from flask_pydantic import validate Then we will define the pydantic model that corresponds to the expected body. Like already mentioned, we will assume, for exemplification purposes, that we will receive a payload representing a Person entity. It will have the following fields: - first_name and last_name, which are both strings - age, which is an integer - is_married, which is a Boolean The class is shown below. class Body(BaseModel): first_name: str last_name: str age: int is_married: bool Next we will take care of creating our Flask app. app = Flask(__name__) After this we will declare our route. Once again, we assume that this endpoint will correspond to the path “/person” but, this time, it will answer to HTTP POST requests. Once again, besides the route decorator, we will add the validate decorator. Additionally, the route handling function will receive a parameter called body. The type of this parameter must be our pydantic Body class. @app.route("/person", methods=["POST"]) @validate() def create(body: Body): # Route implementation Once again, in the route implementation, we will simply return the parsed Body object as output of the route handling function. The full route definition is shown below. @app.route("/person", methods=["POST"]) @validate() def create(body: Body): return body The complete code is shown below and it already includes the call to the run method on the app object, so our Flask server starts listening to incoming requests. from flask import Flask from pydantic import BaseModel from flask_pydantic import validate class Body(BaseModel): first_name: str last_name: str age: int is_married: bool app = Flask(__name__) @app.route("/person", methods=["POST"]) @validate() def create(body: Body): return body app.run(host='0.0.0.0', port=8080) To test the code, we can use a tool such as Postman to perform the HTTP POST request. Figure 7 below shows an example where we have passed a valid JSON payload as body of our request. Consequently, we got back the same object, as expected. If we omit all the fields from the body and send only an empty object, we will get 4 errors as output of our request, which correspond to the 4 missing fields defined in our Body model. This scenario is shown below in figure 8.
https://techtutorialsx.com/2022/06/20/flask-and-pydantic/
CC-MAIN-2022-33
refinedweb
1,648
60.55
Developing stacks for Tutum has been a bit challenging, but we are happy to see that our users are making heavy use of them. This is clearly a big step forward in our goal of providing powerful but yet flexible tools to develop, test and deploy Docker applications. A Little Bit of Context A primary benefit of the Docker revolution is the ability to deploy development environments even for complex microservice architectures, minimizing the amount of unexpected problems when code goes to production. Docker achieves this goal with a very small footprint, making it possible to run development environments on your local machine. In this context, Fig (now Docker Compose) is a great piece of work that provides a YAML file specification where users are able to define relations between the different services such as links, environment variables, volumes and more. This was really convenient since YAML files follow a declarative approach of defining development environments that spin up with a single command. Soon, people started to enjoy these benefits and spread the use of the tool. However, Fig was designed for developers and isn’t able to deploy applications in a multi-host environment. This problem only grows more complex when people want a tool to simplify the deployment of applications in different environments like staging, pre-production and production. Implementation Challenges At Tutum, we created stacks to address these problems. They allow you to define YAML files with needed extensions for a multi-host environment, such as deployment tags to select the target deployment host, or deployment strategies to optimize your resources. You can find more information about how to use stacks here. A stack sits at the highest abstraction level for managing Docker containers and has strong dependencies on other capabilities that make it easy to go from your local environment to a multi-host environment. This is why our stack support was delayed until Tutum built solutions for networking, volumes and service discovery. The implementation is based on a new stack endpoint that processes YAML files and automates Tutum Service API calls to schedule the deployment of the stack services in the right order. The stack endpoint translates the YAML syntax of a classic Fig file into the Tutum internal JSON representation, which allows us to build a dependency graph based on the links and volumes-from relations. The dependency graph lets us process the deployment of the stack in parallel whenever possible. For example, imagine the following dependency graph: The stack endpoint would only trigger the deployment of A and B in parallel since they don’t depend on each other, but C is put on hold until A is deployed. Also, D must wait for B to be deployed, but E might be deployed in parallel with D when C finishes its deployment. These are complex task relationships that we manage by running a trigger at the end of each service deploy task. This in turn re-schedules other service deploy tasks that depend on the service that has just finished its deployment. Using Stacks for Configuration Management Stacks are the perfect fit for configuration management. You might need Puppet, Chef, Ansible or Salt for complex configurations, but a YAML file should suffice for the majority of use cases by just defining the links and environment variables. We recommend the use of fig.yml for local environments, and to have different tutum.yml files for your testing, validation and staging environments. tutum.yml keeps your configuration simple, readable and versioned in your repository. In order to facilitate the adoption of stacks in more complex production environments, Tutum is considering the following features: - External links: define services in your YAML files that are not running in Tutum. For example, you can define an external service to a RDS instance and Tutum links will resolve to the RDS endpoint. This makes it easy to substitute a mysql container in a local environment with a RDS instance in a production environment. Modular YAML files: extend the use of a base YAML file to reduce duplication of work for different environments. i.e. having one tutum.ymlYAML file that serves as the base for three different environments, and including something like import tutum.ymlat the beginning of each specific YAML file for each environment. I’d love to hear your feedback about these features and stay tuned for what is coming! Definitely plus one for both of those features. Being able to link to another service like RDS would be great. Also having Yaml files split out and including a parent YAML would make like a lot easier for production/staging etc… +1 for external links, pretty important for common setups […] Why Stacks are HUGE for Devs Using Docker. […]
https://blog.tutum.co/2015/03/31/why-stacks-are-huge-for-devs-using-docker/
CC-MAIN-2017-13
refinedweb
791
50.77
Note to Self by One of the cool things about Git is that it has strong cryptographic integrity. If you change any bit in the commit data or any of the files it keeps, all the checksums change, including the commit SHA and every commit SHA since that one. However, that means that in order to amend the commit in any way, for instance to add some comments on something or even sign off on a commit, you have to change the SHA of the commit itself. Wouldn't it be nice if you could add data to a commit without changing its SHA? If only there existed an external mechanism to attach data to a commit without modifying the commit message itself. Happy day! It turns out there exists just such a feature in newer versions of Git! As we can see from the Git 1.6.6 release notes where this new functionality was first introduced: * "git notes" command to annotate existing commits. Need any more be said? Well, maybe. How do you use it? What does it do? How can it be useful? I'm not sure I can answer all of these questions, but let's give it a try. First of all, how does one use it? Well, to add a note to a specific commit, you only need to run git notes add [commit], like this: $ git notes add HEAD This will open up your editor to write your commit message. You can also use the -m option to provide the note right on the command line: $ git notes add -m 'I approve - Scott' master~1 That will add a note to the first parent on the last commit on the master branch. Now, how to view these notes? The easiest way is with the git log command. $ git log master You can see the notes appended automatically in the log output. You can only have one note per commit in a namespace though (I will explain namespaces in the next section), so if you want to add a note to that commit, you have to instead edit the existing one. You can either do this by running: $ git notes edit master~1 Which will open a text editor with the existing note so you can edit it: I approve - Scott # # Write/edit the notes for the following object: # # # # kidgloves.rb | 2 -- # 1 files changed, 0 insertions(+), 2 deletions(-) ~ ~ ~ ".git/NOTES_EDITMSG" 13L, 338C Sort of weird, but it works. If you just want to add something to the end of an existing note, you can run git notes append SHA, but only in newer versions of Git (I think 1.7.1 and above). Notes Namespaces Since you can only have one note per commit, Git allows you to have multiple namespaces for your notes. The default namespace is called 'commits', but you can change that. Let's say we're using the 'commits' notes namespace to store general comments but we want to also store bugzilla information for our commits. We can also have a 'bugzilla' namespace. Here is how we would add a bug number to a commit under the bugzilla namespace: $ git notes --ref=bugzilla add -m 'bug #15' 0385bcc3 However, now you have to tell Git to specifically look in that namespace: $ git log --show-notes=bugzilla (bugzilla): bug #15 Notice that it also will show your normal notes. You can actually have it show notes from all your namespaces by running git log --show-notes=* - if you have a lot of them, you may want to just alias that. Here is what your log output might look like if you have a number of notes namespaces: $ git log -1 --show-notes=*: I approve of this, too - Scott Notes (bugzilla): bug #15 Notes (build): build successful (8/13/10) You can also switch the current namespace you're using so that the default for writing and showing notes is not 'commits' but, say, 'bugzilla' instead. If you export the variable GIT_NOTES_REF to point to something different, then the --ref and --show-notes options are not neccesary. For example: $ export GIT_NOTES_REF=refs/notes/bugzilla That will set your default to 'bugzilla' instead. It has to start with the 'refs/notes/' though. Sharing Notes Now, here is where the general usability of this really breaks down. I am hoping that this will be improved in the future and I put off writing this post because of my concern with this phase of the process, but I figured it has interesting enough functionality as-is that someone might want to play with it. So, the notes (as you may have noticed in the previous section) are stored as references, just like branches and tags. This means you can push them to a server. However, Git has a bit of magic built in to expand a branch name like 'master' to what it really is, which is 'refs/heads/master'. Unfortunately, Git has no such magic built in for notes. So, to push your notes to a server, you cannot simply run something like git push origin bugzilla. Git will do this: $ git push origin bugzilla error: src refspec bugzilla does not match any. error: failed to push some refs to '[email protected]:schacon/kidgloves.git' However, you can push anything under 'refs/' to a server, you just need to be more explicit about it. If you run this it will work fine: $ git push origin refs/notes/bugzilla Counting objects: 3, done. Delta compression using up to 2 threads. Compressing objects: 100% (2/2), done. Writing objects: 100% (3/3), 263 bytes, done. Total 3 (delta 0), reused 0 (delta 0) To [email protected]:schacon/kidgloves.git * [new branch] refs/notes/bugzilla -> refs/notes/bugzilla In fact, you may want to just make that git push origin refs/notes/* which will push all your notes. This is what Git does normally for something like tags. When you run git push origin --tags it basically expands to git push origin refs/tags/*. Getting Notes Unfortunately, getting notes is even more difficult. Not only is there no git fetch --notes or something, you have to specify both sides of the refspec (as far as I can tell). $ git fetch origin refs/notes/*:refs/notes/* remote: Counting objects: 12, done. remote: Compressing objects: 100% (8/8), done. remote: Total 12 (delta 0), reused 0 (delta 0) Unpacking objects: 100% (12/12), done. From github.com:schacon/kidgloves * [new branch] refs/notes/bugzilla -> refs/notes/bugzilla That is basically the only way to get them into your repository from the server. Yay. If you want to, you can setup your Git config file to automatically pull them down though. If you look at your .git/config file you should have a section that looks like this: [remote "origin"] fetch = +refs/heads/*:refs/remotes/origin/* url = [email protected]:schacon/kidgloves.git The 'fetch' line is the refspec of what Git will try to do if you run just git fetch origin. It contains the magic formula of what Git will fetch and store local references to. For instance, in this case it will take every branch on the server and give you a local branch under 'remotes/origin/' so you can reference the 'master' branch on the server as 'remotes/origin/master' or just 'origin/master' (it will look under 'remotes' when it's trying to figure out what you're doing). If you change that line to fetch = +refs/heads/*:refs/remotes/manamana/* then even though your remote is named 'origin', the master branch from your 'origin' server will be under 'manamana/master'. Anyhow, you can use this to make your notes fetching easier. If you add multiple fetch lines, it will do them all. So in addition to the current fetch line, you can add a line that looks like this: fetch = +refs/notes/*:refs/notes/* Which says also get all the notes references on the server and store them as though they were local notes. Or you can namespace them if you want, but that can cause issues when you try to push them back again. Collaborating on Notes Now, this is where the main problem is. Merging notes is super difficult. This means that if you pull down someone's notes, you edit any note in a namespace locally and the other developer edits any note in that same namespace, you're going to have a hard time getting them back in sync. When the second person tries to push their notes it will look like a non-fast-forward just like a normal branch update, but unlike a normal branch you can't just run git pull and then try again. You have to check out your notes ref as if it were a normal branch, which will look ridiculously confusing and then do the merge and then switch back. It is do-able, but probably not something you really want to do. Because of this, it's probably best to namespace your notes or better just have an automated process create them (like build statuses or bugzilla artifacts). If only one entity is updating your notes, you won't have merge issues. However, if you want to use them to comment on commits within a team, it is going to be a bit painful. So far, I've heard of people using them to have their ticketing system attach metadata automatically or have a system attach associated mailing list emails to commits they concern. Other people just use them entirely locally without pushing them anywhere to store reminders for themselves and whatnot. Probably a good start, but the ambitious among you may come up with something else interesting to do. Let me know!
http://git-scm.com/blog/2010/08/25/notes.html
CC-MAIN-2014-41
refinedweb
1,634
69.62
Arduino IDE for ESP8266 question for esp8266 boffins! PROGMEM what section should it end up in... i've got 3 strings in print Serial.printf_P( PSTR("hello-pgm")); Serial.print("hello-not"); Serial.print( F("hello-F()")); "hello-pgm" and "hello-F()" both end up in section .irom0.text: "hello-not" ends up in section .rodata: as #define PROGMEM ICACHE_RODATA_ATTR should it not all be in .rodata.. I'm not entirely sure which sections are in RAM or not... timer0_write(count)giving it the instruction count to trigger on, once it fires you have to call write again to get it to fire again. To calculate the trigger count, call timer0_read()and add your count to it and pass this to write. Use something like this and avoid constants so you don't care about 80, 160. uint32_t usToTicks(uint32_t us) const { return (clockCyclesPerMicrosecond() * us); // converts microseconds to tick } uint32_t ticksToUs(uint32_t ticks) const { return (ticks / clockCyclesPerMicrosecond()); // converts from ticks back to microseconds } This is what the servo timer stuff uses when using timer0. static uint32_t ticksPeriod = 500;//still in us void onTimer0(){ uint32_t ticksAtEnter = timer0_read(); //do your thing // NO YIELD OR DELAY timer0_write(ticksAtEnter + ticksPeriod); } void setup() { //convert period in us to ticks ticksPeriod *= clockCyclesPerMicrosecond(); uint32_t ticksAtInit = timer0_read(); timer0_attachInterrupt(&onTimer0); timer0_write(ticksAtInit + ticksPeriod); }
https://gitter.im/esp8266/Arduino?at=57a0e1921c2bf6621bb61923
CC-MAIN-2019-47
refinedweb
214
56.76
Opened 9 years ago Closed 7 years ago Last modified 5 years ago #7287 closed Uncategorized (fixed) Newforms' initial values as models, for ModelChoiceField etc Description ModelChoiceField cleans form data into model instances, but forms don't accept model instances as initial values. This seems a bit inconsistent to me. Patch attached. Attachments (1) Change History (16) Changed 9 years ago by comment:1 Changed 9 years ago by Another inconsistency. If my choices are of type int, clean should bring them back to int. At the moment, it seems to leave them as unicode. comment:2 Changed 9 years ago by comment:3 Changed 9 years ago by That patch is not cool. Importing all of django.db.models just for rendering a widget isn't a good idea, since you might be using forms standalone. Plus it's a leaky abstraction. We've tried very hard to keep that leakiness in django.forms.models in that part of the code. If that import is the price to be paid for using instances as default data, then you can't use instances there. If you can do it without having to import models, the enhancement has a chance (although a hasattr test would look a bit ugly, too, so I'm coming up a bit blank on how to do it). comment:4 Changed 9 years ago by Yep no worries wasn't aware of that. Maybe we throw it into django.forms.models instead then? comment:5 Changed 9 years ago by I've been bitten by this inconsistency myself. I think maybe an easy way to clear it up would be to edit and add some initial values for non-text fields so that it's obvious that you can't just pass in models. comment:6 Changed 9 years ago by A note in the docs is probably the right approach here. comment:7 Changed 9 years ago by Also bitten by this, and took a while to track down the real cause. I'm storing cleaned data for a multi-page form in the session, and using that as initial data when users go back to a previous page. When returning to a previous page, ModelChoiceField's were suddenly using the cleaned model instance's __unicode__ as the form widget's value, which raised ValueError: invalid literal for int() with base 10 (see #8974). It makes sense that I should be able to pass back in the cleaned_data that I'm getting out of a form. comment:8 Changed 9 years ago by I don't think that this is going to be solved by documentation. I've had the "ValueError: invalid literal for int() with base 10" error on a multi-form page (not a multi-page form, in my case :) ), also, and I managed to get rid of it by changing db/models/forms/init.py (line ~356) to: def get_db_prep_value(self, value): if value is None: return None try: return int(value) except: return int(value.pk) I then had a problem with the Select widget comparing each c in "choices" to the contents of the set "selected_choices." The real issue was that c could be any one of several types, but you have to compare to the db values (primary keys, in my case) for equality (list membership). I got around that by subclassing the Select widget with something like this: class GoodSelect(Select): """ Fix the ForeignKey bug. It's a known issue: """ def render_option(self, option_value, option_label, selected_choices): selected_html = (option_value in selected_choices) and u' selected="selected"' or '' return u'<option value="%s"%s>%s</option>' % ( escape(option_value), selected_html, conditional_escape(force_unicode(option_label))) def render_options(self, choices, selected_choices): # Normalize to strings. selected_choices = set([self.__pk_or_unicode(v) for v in selected_choices]) output = [] for option_value, option_label in chain(self.choices, choices): if isinstance(option_label, (list, tuple)): output.append(u'<optgroup label="%s">' % escape(force_unicode(option_value))) for option in option_label: output.append(self.render_option(*option)) output.append(u'</optgroup>') else: output.append(self.render_option(option_value, option_label, selected_choices)) return u'\n'.join(output) def __pk_or_unicode(self,obj): # If the object is already unicode, try to convert it to an int first. # If that doesn't work, then just return the force_unicode bit. # This is *probably* safe because when we try to put stuff in the db, we convert it anyway. if isinstance(obj,unicode): try: return int(obj) except: return force_unicode(obj) elif isinstance(obj,str): return force_unicode(obj) else: return obj.pk I'm still exploring to come up with a better fix, but for now, this is functioning well for me. comment:9 Changed 9 years ago by comment:10 Changed 8 years ago by Milestone post-1.0 deleted comment:11 Changed 7 years ago by removing has_patch since the patch isn't a doc patch comment:12 Changed 7 years ago by Patch should be aware of "to_field_name" param for ModelChoiceField, which is pk by default, but can be overriden by this param. comment:13 Changed 7 years ago by comment:14. Cleans initial values that are model instances into their pk values
https://code.djangoproject.com/ticket/7287
CC-MAIN-2017-26
refinedweb
851
54.22
#include <unistd.h> pid_t fork(void); The fork() function shall create a new process. The new process (child process) shall be an exact copy of the calling process (parent process) except as detailed below: When the application calls fork() from a signal handler and any of the fork handlers registered by pthread_atfork() calls a function that is not asynch-signal-safe, the behavior is undefined.. The child process shall not be traced into any of the trace streams of its parent process. All other process characteristics defined by IEEE Std 1003.1-2001 shall be the same in the parent and child processes. The inheritance of process characteristics not defined by IEEE Std 1003.1-2001 is unspecified by IEEE Std 1003.1-2001.: The fork() function may fail if: The following sections are informative. IEEE Std 1003.1-2001 does not require, or even permit, this behavior. However, it is pragmatic to expect that problems of this nature may continue to exist in implementations that appear to conform to this volume of IEEE Std 1003.1-2001 IEEE Std 1003.1-2001. IEEE Std 1003.1-2001.>
http://www.makelinux.net/man/3posix/F/fork
CC-MAIN-2015-40
refinedweb
188
66.64
This patch relies on changes introduced in D46946 and must be upstreamed after it.... XOP handling? Have you seen the current llvm-dev thread about adding a generic rotate intrinsic? This transform has problems when some of the instructions get hoisted from loops (and that's likely the most important consideration for perf). Here's a minimal example to demonstrate: #include <immintrin.h> void rotateInLoop(unsigned *x, unsigned N, __m128i *a, __m128i b) { for (unsigned i = 0; i < N; ++i) x[ _mm_extract_epi32(_mm_rolv_epi32(a[i], b), 0) ] = i; } Before this patch: $ ./clang rotv.c -S -O1 -o - -mavx512vl ... LBB0_2: vmovdqa (%rdx), %xmm1 vprolvd %xmm0, %xmm1, %xmm1 vmovd %xmm1, %esi movslq %esi, %rsi movl %ecx, (%rdi,%rsi,4) incq %rcx addq $16, %rdx cmpq %rcx, %rax jne LBB0_2 After this patch: LBB0_2: vmovdqa (%rdx), %xmm2 vpsllvd %xmm0, %xmm2, %xmm3 vpsrlvd %xmm1, %xmm2, %xmm2 vpor %xmm3, %xmm2, %xmm2 vmovd %xmm2, %esi movslq %esi, %rsi movl %ecx, (%rdi,%rsi,4) incq %rcx addq $16, %rdx cmpq %rcx, %rax jne LBB0_2 I think you'll either need to implement this first: ...or limit this patch to the non-variable rotates, or just wait for the generic intrinsic? Why are we going through shift intrinsics to do this? Why can't we just emit shl and lshr instructions directly? Just create and And with 31 or 63? I believe one of the signatures of CreateAnd even takes a uint64_t as an argument. Because emitting shifts in IR is more complicated than just adding an shl/lshr node due to those poison values (see D46946) and would create some redundant code. I guess I can use simplifyX86immShift directly instead of emitting a call here. As for the bug - much more than one instruction gets thrown out of the loop after applying shift lowering patch - I'm leaning to leaving only non-variable intrinsics in this patch and implement variable ones after the generic intrinsic is introduced. @tkrupa If you still interesting in working on this, converting the rotations to generic funnel shift intrinsics is the better way to go - its a single intrinsic call so you don't have IR splitting issues, the AVX512 + XOP rotates respect the modulo amount and both the variable and splat-immediate variants are fully supported by the x86 backend for lowering. Instead of InstCombine I'd probably suggest performing this both in the clang frontend and as an auto upgrade for the existing intrinsics. I'm no longer working on this. AFAIK this task and D46946 have been reassigned to @Jianping or @LuoYuanke. In D47019#1332667, @tkrupa wrote: I'm no longer working on this. AFAIK this task and D46946 have been reassigned to @Jianping or @LuoYuanke. In which case I may do the rotation work myself - D55747 is the only outstanding issue AFAICT and it'd be good to start getting more thorough funnel shift usage into the code. @tkrupa This can now be abandoned now that the x86 vector rotation intrinsics emit/autoupgrade to generic funnel shifts.
http://reviews.llvm.org/D47019
CC-MAIN-2020-45
refinedweb
499
59.13
® Operator's Manual 27 Ton Log Splitter Model 570 IMPORTANT: Warning: Read safety rules and instructions This unit is equipped with an internal combustion carefully before operating equipment.. TROY-BILT PRINTED IN U.S.A. LLC, P.O. BOX 361131 CLEVELAND, OHIO 44136-0019 FORM NO. 769-01326 (7/2004) TABLEOFCONTENTS Content Page 3 5 7 8 Important Safe Operation Practices Assembling Your Log Splitter Know Your Log Splitter Operating Your Log Splitter Adjusting Your Log Splitter 10 Content Maintaining Your Log Splitter Storing Your Log Splitter Trouble Shooting Illustrated Parts List Warranty Page 10 12 12 14 16 FINDINGMODELNUMBER This Operator's Manual is an important part of your new log splitter. the hydraulic tank. This information will be necessary to use the manufacturer's web site and/or help from the Customer Support Department or an authorized service dealer. Copy the model number here: Copy the serial number here: CLEVELAND. OH44135 338-558-7220 1-880-520-552_ CUSTOMER SUPPORT Please do NOTretl/m the unitto the retailer from where it waspurchased,withoutfirst contactingCustomerSupport. If you have difficulty assembling this product or have any questions regarding the controls, operation or maintenance of this unit, you can seek help from the experts. Choose from the options below: Visit troybilt.com for many useful suggestions. Click on Customer Support button and you will get the four options reproduced here. Click on the appropriate button and help is immediately available. OOh¢_) '"......... "' '" " "'_ i/ > If you prefer to reach a Customer Support Representative, please call 1-800-520-5520. The engine manufacturer is responsible for all engine-related issues with regards to 3erformance, power-rating, specifications, warranty and service. Please refer to the engine manufacturer's Owner's/Operator's Manual, packed separately with your unit, for more information. SECTION1: IMPORTANT SAFEOPERATION PRACTICES _ This and/or symbol property points out safety instructions which, notinstructions followed, could theARNING: personal safety of important yourself and others. Read and followif all in thisendanger manual before attempting to operate this machine. Failure to comply with these instructions may result in personal injury. When you see this symbol - heed its warning.. GeneralPractices 1. 2. 3. 4. 5. 6. 7. 8. 9. 5. 6. 7. Read, understand, and follow all instructions on the machine and in the manual(s) before attempting to assemble and operate. Keep this manual in a safe place for future and regular reference and for ordering replacement parts. Be familiar with all controls and proper operation. Know how to stop the machine and disengage them quickly. Never allow children under 16 years to operate this machine. Children,16 years and over, should read and understand instructions and safety rules in this manual and should be trained and supervised by a parent. Never allow adults to operate this machine without proper instruction. Many accidents occur when more than one person operates the machine. If a helper is assisting in loading logs, never activate the control until the helper is a minimum of 10 feet from the machine. Keep bystanders, helpers, pets, and children at least 20 feet from the machine while it is in operation. Never allow anyone to ride on this machine. Never transport cargo on this machine.. Leaks can be detected by passing cardboard or wood, while wearing protective gloves and safety glasses, over the suspected area. Look for discoloration of cardboard or wood. If injured by escaping fluid, see a doctor immediately. Serious infection or reaction can develop if proper medical treatment is not administered immediately. Keep the operator zone and adjacent area clear for safe, secure footing. 8.. 9. This machine should be used for splitting wood only, do not use it for any other purpose. 10. Follow the instructions in the manual(s) provided with any attachment(s) for this machine. Preparation 1. 2. 3. 4. 5. 6. 7. 8. 9. Always wear safety shoes or heavy boots. Always wear safety glasses or safety goggles during operating this machine. Never wear jewelry or loose clothing that might become entangled in moving or rotating parts of the machine. Make sure machine is on level surface before operating. Always block machine to prevent unintended movement, and lock in either horizontal or vertical position. Always operate this machine from the operator zone(s) specified in the manual. Logs should be cut with square ends prior to splitting. Use log splitter in daylight or under good artificial light. To avoid personal injury or property damage use extreme care in handling gasoline. Gasoline is extremely flammable and the vapors are explosive. Serious personal injury can occur when gasoline is spilled on yourself or your clothes which can ignite. Wash your skin and changefill the fuel tank. Fill tank to no more than 1/2 inch below bottom of filler neck to provide space for fuel expansion. g. Replace gasoline cap and tighten securely. j, If gasoline is spilled, wipe it off the engine and equipment, move machine to another area. Wait 5 minutes before starting the engine. Never store the machine or fuel container inside where there is an open flame, spark or pilot light as on a water heater, space heater, furnace, clothes dryer or other gas appliances. Allow machine to cool 5 minutes before storing. Maintenance and Storage 1. 2. 3. Operation 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. Before starting this machine, review the "Safety Instructions". Failure to follow these rules may result in serious injury to the operator or bystanders. Never leave this machine unattended with the engine running. Do not operate machine while under the influence of alcohol, drugs, or medication. Never allow anyone to operate this machine without proper instruction. Always operate this machine with all safety equipment in place and working. Make sure all controls are properly adjusted for safe operation. Do not change the engine governor settings or overspeed the engine. The governor controls the maximum safe operating speed of the engine. When loading a log, always place your hands on the sides of the log, not on the ends, and never use your foot to help stabilize a log. Failure to do so, may result in crushed or amputated fingers, toes, hand, or foot. Use only your hand to operate the controls. Never attempt to split more than one log at a time unless the ram has fully extended and a second log is needed to complete the separation of the first log. For logs which are not cut square, the least square end and the longest portion of the log should be placed toward the beam and wedge, and the square end placed toward the end plate.. Always keep fingers away from any cracks that open in the log while splitting. They can quickly close and pinch or amputate your fingers. Keep your work area clean. Immediately remove split wood around the machine so you do not stumble over it. Never move this machine while the engine is running.. Do not tow machine faster than 45mph. See Transporting the Log Splitter section in this manual for proper towing instructions once all federal, local, or state requirements are met. 4. 5. 6. 7. 8. 9. Stop the engine, disconnect the spark plug and ground it against the engine before cleaning, or inspecting the machine. Stop the engine and relieve hydraulic system pressure before repairing or adjusting fittings, hoses, tubing, or other system components. To prevent fires, clean debris and chaff from the engine and muffler areas. If the engine is equipped with a spark arrester muffler, clean and inspect it regularly according to manufacturers instructions. Replace if damaged. Periodically check that all nuts and bolts, hose clamps, and hydraulic fittings are tight to be sure equipment is in safe working condition. Check all safety guards and shields to be sure they are in the proper position. Never operate with safety guards, shields, or other protective features removed. The pressure relief valve is preset at the factory. Do not adjust the valve. Never attempt to move this machine over hilly or uneven terrain without a tow vehicle or adequate help. For your safety, replace all damaged or worn parts immediately with original equipment manufacturer's (O.E.M.) parts only. "Use of parts which do not meet the original equipment specifications may lead to improper performance and compromise safety!" Do not alter this machine in any manner, alterations such as attaching a rope or extension to the control handle, or adding to the width or height of the wedge may result in personal injury. YourResponsibility Restrict the use of this power machine to persons who read, understand and follow the warnings and instructions in this manual and on the machine. Some of the safety labels are reproduced here. Always follow directions on safety labels found on your equipment. MOVING LOG • Lock Jacks SPLITTER a,d BY HAND; ,downpos "Donotattempttomovelogsplltterj p o dow, a s ope by ha,d S32422 n i TOWING • :" . ae LGG 2euper with 2 SPLITTER; ball e ass Latch in full ._ poslt i_n. or ,g er i hitch securely j J. _eupe ad _,en sneede.I " refer to owner s mammL Raise jack stan.I a_d latch securely . Donottowfastertha_14_mph. High sp_ed may ¢_use loss .ChecT:local._ateandf_dera _q.i_ment s before t owi_lg public road. _f c_t_tr(*l. I m_ any SECTION2: ASSEMBLING YOURLOGSPLITTER UnpackingfromCrate AttachingtheTongue • • • • • _ Pry top, sides, and ends off the crate. Set panels aside to avoid tire puncture or personal injury. Remove and discard plastic bag that covers unit. Remove any loose parts if included with unit (i.e., operator's manual, etc.) Cut and remove straps which secure parts to bottom of crate. Unbolt any remaining parts which may be bolted to the bottom of the crate. thisARNING: machine. Some Use extreme components caution areunpacking very heavy and will require additional people or mechanical handling equipment. LoosePartsIn Carton • • • With the log splitter still standing upright, remove two hex bolts, lock washers, and hex nuts from the end of the tongue. See Figure 1. Align the holes in the tongue with the holes in the tank bracket and secure with hardware just removed. See Figure 1. NOTE: High pressure hose must be above the tongue assembly. Connecting Cylinderto Beam • The log splitter is shipped with the beam in the vertical position. Pull out the vertical beam lock, rotate it back, and pivot the beam to the horizontal position until it locks. See Figure 2. Tongue assembly Tail light kit BeforeAssembly • Disconnect the spark plug wire and ground against the engine to prevent unintended starting. Vertical Beam Lock NOTE: Reference to right or left hand side of the log splitter is observed from the operating position. Assembling the Tongue Attachingthe Jack Stand • • The jack stand is shipped in the transport position. Remove the spring clip and clevis pin and pivot the jack stand towards the ground to the operating position. Secure the jack stand in position with the clevis pin and spring clip. See Figure 1. Spring Clip Bracket Lock Washer Tongue Hex Nut \ Jack Stand Figure 2 Disconnect the dislodger from the beam weld bracket by removing the six hex screws and flat washers. See Figure 3. "_ Dislodger Bracket ...... Washer Hex Bolt Figure 1 Figure 3 • Disconnect the log cradle from the beam on the side of the control valve. See Figure 4. PreparingtheLog Splitter • • Lock Washer Nut • Lubricate the beam area (where the splitting wedge will slide) with engine oil; do not use grease. Remove vented reservoir dipstick, which is located in front of the engine on top of the reservoir tank. See Figure 6. Fill the reservoir tank with Dexron III automatic transmission fluid or 10W AW hydraulic fluid. NOTE: The reservoir tank has a capacity of 3.5 gallons. Check fluid level using the dipstick. See Figure 6. Do not overfill. Replace vented dipstick securely, tightening it until the top of the threads are flush with top of the pipe. Hex Bolt Dipstick \ Disconnect Log Cradle ...... Figure 4 • • Lift and slide the cylinder up to the top of beam and into the weld brackets. Attach the dislodger over the wedge assembly and secure with hardware previously removed, to the weld brackets. See Figure 5. \\ Reservoir Tank NOTE: Once the six hex screws are tightened, there may be a slight gap between the dislodger and the weld brackets. This gap is normal. Figure 6 • • Reattach the log cradle to the side of the beam with the control valve, aligning the ends of the cradle with the beam flanges. See Figure 5. Roll log splitter off the bottom crate. • • • Make sure the spark plug wire is disconnected. Then prime the pump by pulling the recoil starter as faras it will go. Repeat approximately 10 times. Reconnect the spark plug wire and start engine following instructions in the OPERATION section. Use control handle to engage the wedge to the farthest extended position. Then retract the wedge. Refill tank as specified on the dipstick. NOTE: Failure to refill the tank will void un#'s warranty. • • _ \ \Attach dislodger here Extend and retract the wedge 12 complete cycles to remove trapped air in the system (the system is "self-bleeding"). Refill reservoir within range marked on the dipstick. been drawn intoMuch WARNING: the cylinder of the original and hoses. fluid has Make certain to refill the reservoir to prevent damage to the hydraulic pump. NOTE: Some fluid may overflow from the vent plug as the system builds heat and the fluid expands and seeks a balanced level. cradle here • Figure 5 Attach taillights as instructed in the taillight kit manual included with your log splitter. SECTION3: KNOWYOURLOGSPLITTER Compare the illustration in Figure 7 below with the controls on your log splitter, and get familiar with its features before starting to operate. Know how to stop the machine quickly in the event of an emergency. How it works / Handle _ Reverse " (Automatic) Log tTo r,eturn weage I Neutral I Assembly \\ Control Handle \\ To stop wedge I I/ Forward____ v To split (Manual) wood Jack Stand Beam Lock Cradle Beam Lock Tank Tail Light Horizontal Beam Lock Vertical Beam Lock Figure 7 ControlHandle EngineControls The control handle has three positions. See Figure 7. • FORWARD: Move control handle FORWARD or DOWN to move wedge to split wood. See the accompanying engine manual for the location and function of the controls on the engine. NOTE: Control handle will return to neutral position as soon as handle is released. (Forward Position only) • • NEUTRAL: Release the control handle or move the lever to neutral position to stop the wedge movement. REVERSE: Move control handle BACK or UP to return the wedge toward the cylinder. The control handle stays in the return (Reverse) position and returns to neutral automatically when fully retracted. NOTE: Reverse position may also be operated back to neutral position manually, if necessary. BeamLocks These two locks, as their name suggests, are used to secure the beam in the horizontal or the vertical position. The vertical beam lock is located next to the oil filter. The horizontal beam lock is located on the beam support latch bracket. See Figure 7. StoppingEngine • Move throttle control handle to STOP or OFF • • position. Turn off the engine switch, if so equipped. Disconnect spark plug wire and ground against the engine to prevent unintended starting. IMPORTANT:Your log splitter is shipped with motor oil in the engine. However, you MUST check the oil level before operating. Be careful not to overfill. SECTION4: OPERATING YOURLOGSPLITTER WARNING: Read, understand, and follow all instructions and warnings on the machine and in this manual before operating. WARNING: Wear leather work gloves, safety shoes, ear protection, and safety glasses when operating a log splitter. Ensure safe footing. GasandOilFill-Up • _ Service the engine with gasoline and oil as instructed in the engine manual packed with your log splitter. Read instructions carefully. handling gasoline. WARNING: UseGasoline extreme is care extremely when flammable and the vapors are explosive. Never fuel machine indoors or while the engine is hot or running. IMPORTANT:Your log splitter is shipped with motor oil in the engine. However, you MUST check the oil level before operating. Be careful not to overfill. NOTE: Gasoline can be added to the engine when the log splitter is in either the horizontal or vertical position. However, there are less obstructions when the unit is in the vertical position. StartingEngine • • • • • • • • • Attach spark plug wire to spark plug. Make certain the metal cap on the end of the spark plug wire is fastened securely over metal tip of the spark plug. Turn fuel valve, if equipped, to the ON position. Move choke lever, if equipped, to CHOKE position. If the engine is equipped with a primer, follow instructions in the engine manual to prime it. Turn the throttle control handle to the FAST position. Grasp starter handle and pull rope out slowly until engine reaches start of compression cycle (rope will pull slightly harder at this point). Pull rope with a rapid, full arm stroke. Keep firm grip on starter handle. Let rope rewind slowly. Repeat until engine cranks. After engine starts, move choke lever to OFF position. Place throttle lever to the speed desired. In cold weather, run wedge up or down beam 6 to 8 times to circulate the hydraulic fluid. BeforeEachUse • • • • • Remove the dipstick and check hydraulic fluid level. Refill if necessary. Check engine oil level. Refill if necessary. Fill up gasoline if necessary. Lubricate with engine oil the beam area where splitting wedge will slide. Do not use grease to lubricate. Make sure to lubricate both the front and the back of the beam face. Attach spark plug wire to the spark plug. Usingthe Log Splitter • • • • • • Place the log splitter on level, dry ground. Place the beam in either the horizontal or vertical position and lock in place with the appropriate locking rod. Block the front and back of both wheels. Place the log against the end plate and only split wood in the direction of the grain. To stabilize the log, place your hand only on sides of log. Never place hand on the end between the log and the splitting wedge. Only one adult should stabilize the log and operate the control handle, so the operator has full control over the log and the splitting wedge. ControlHandle 1. Move control handle FORWARD or DOWN to split wood. 2. Release the control handle to stop the wedge movement. Move control handle BACK or UP to return the 3. wedge. LogDislodger • The log dislodger is designed to remove any partially split wood from the wedge. This may occur while splitting large diameter wood or freshly cut wood. _ wood from the wedge WARNING: Never remove with yourpartially hands. split Fingers may become trapped between split wood. • To remove partially split wood from wedge, move control handle to REVERSE position until wedge is fully retracted to allow split wood portion to contact the log dislodger. Once removed from wedge with log dislodger, split wood from opposite end or in another location. • VerticalPosition _ startingareas a warm theARNING: muffler and When surrounding areengine, hot and can cause a burn. Do not touch. • Pull the horizontal beam lock out to release the beam and pivot the beam to the vertical position. • • Tolockthebeamintheverticalposition,pullouton theverticalbeamlockandrotateit tosecurethe beam.SeeFigure7. Standinfrontoftheunittooperatethecontrol handleandtostabilizethelog.SeeFigure8A. 8. RaisingandLoweringBeam • Use control handle to run wedge up and down beam 6 to 8 times to circulate the hydraulic fluid, which will warm and thin the fluid. • • Place log splitter on a firm, level surface. To raise the beam for vertical operation: Pull out the vertical beam lock on the tongue. Pivot beam lock down to release the beam. • • _ib HorizontalPosition • Pull the vertical beam lock out and rotate it down. See Figure 7. Pivot beam to the horizontal position. The beam will lock automatically in horizontal position. Stand behind the reservoir tank to operate control handle and to stabilize the log. See Figure 8B. OperatingTips Always: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. Use clean fluid and check fluid level regularly. Use Dexron III Automatic Transmission Fluid or 10W AW hydraulic fluid. Use a filter (clean or replace regularly) Use a breather cap on fluid reservoir. Make sure pump is mounted and aligned properly. Use a flexible"spider" type coupling between engine and pump drive shafts. Keep hoses clear and unblocked. Bleed air out of hoses before operating. Flush and clean hydraulic system before restarting after servicing. Use "pipe dope" on all hydraulic fittings. Allow time for warm-up before splitting wood. Prime the pump before initial start-up byturning over the engine with spark plug disconnected. Split wood along the grain (lengthwise) only. Never: 1. 2. 3. 4. 5. 6. 7. Use when fluid is below 20 ° F or above 150 ° F. Use a solid engine/pump coupling. Operate through relief valve for long. Attempt to adjust unloading or relief valve settings without pressure gauges. Operate with air in hydraulic system. Use teflon tape on hydraulic fittings. Attempt to cut wood across the grain. Move the beam to the vertical position. Secure it with the beam lock on the reservoir tank assembly. ARNING: use the log splitter in the vertical positionAlways when splitting heavy logs. To lower the beam: Pull out the horizontal beam lock on the reservoir tank. Pivot beam lock down to release the beam. Carefully pull back on beam and lower it to the horizontal position.The horizontal beam lock will lock automatically. See Figure 7. Figure 8 • Attempt to remove partially split wood from the wedge with your hands. Fully retract wedge to dislodge wood with log dislodger. Transporting the Log Splitter IMPORTANT:Always turn fuel valve to OFF position before transporting the log splitter. • • • Lower the beam to its horizontal position. Make certain the beam is locked securely with the horizontal beam lock. Remove spring clip and clevis pin from jack stand. Support the tongue and pivot the jack stand up against the tongue. See Figure 9. Tongue Clevis Pin _ Jack Stand Figure 9 Secure with the spring clip and clevis pin previously removed. See Figure 9.. Connect the safety chains to the towing vehicle. Plug in the tail lights, if so equipped, to the tail light connector on the tow vehicle. _ andARNING: check local,Do state, not tow andfaster federal than 45mph requirements before towing on any public road. NOTE: Use caution when backing up. It recommended to use a spotter outside the vehicle. is SECTION5: ADJUSTING YOURLOGSPLITTER NOTE: The gibs may be rotated and/or turned over for even wear. _ adjustments ARNING: without Do notfirst at any stopping time make engine, any disconnecting spark plug wire, and grounding it against the engine. Always wear safety glasses during operation or while performing any adjustments or repairs. • • • • WedgeAssemblyAdjustment As normal wear occurs and there is excessive "play" between the wedge and beam, adjust the bolts on the side of the wedge assembly to eliminate excess space between the wedge and the beam. See Figure 10. • • Loosen the lock nuts under the each back plate and slide the gibs out. Turn or replace thegibs. Reassemble the back plate and secure with the lock washers and lock nuts. Readjust the bolts on the side of the wedge assembly. Adjustment Bolt \\ \ Loosen the jam nuts on the two adjustment bolts on the side of the wedge. Turn the adjustment bolts in until snug and then back them off slowly until the wedge assembly will slide on the beam. Tighten the jam nuts securely against the side of the wedge to hold the adjustment bolts in this position. \\ \ Gib Adjustment Periodically remove and replace the "gibs" (spacers) between the wedge assembly and the back plate. See Figure 10. Figure 10 SECTION6: MAINTAININGYOURLOGSPLITTER _ _ repairing, ARNING: or inspecting, Before cleaning, disengage lubricating, the control handle and stop engine. Disconnect the spark plug wire and ground it against the engine to prevent unintended starting. Always wear safety glasses during operation or while performing any adjustments or repairs. 4. 5. 6. Conditionsthat will voidWarranty 1. 2. 3. 7. 8. 9. theARNING: hoses to burst, Higher cylinder pressure to rupture, could cause and intense fluid to be released, which could result in serious personal injury. Use of incorrect hydraulic fluid Allowing the flexible pump coupler to deteriorate without proper and regular inspection Lack of lubrication or improper lubrication of the beam or unit Improper adjustment of splitting wedge Excessive heating of the hydraulic system Attempting to start unit in temperatures under 20°F without pre-heating fluid in reservoir 10. Unattended leaks in hydraulic system Failure to maintain proper fluid level in reservoir Changing the relief valve setting or pressure adjustment of control valve without proper knowledge and instruction from the factory Disassembling the pump 10 • HydraulicFluid • Check the hydraulic fluid level in the log splitter reservoir tank before each use. • Maintain fluid level within the range specified on the dipstick at all times. Change the hydraulic fluid in the reservoir every 100 hours of operation. Disconnect the suction hose from the bottom of the reservoir tank and drain the fluid into a suitable • • • • • container. Refill using only Dexron III automatic transmission fluid or 10W AW hydraulic fluid. • NOTE: Please dispose of used hydraulic fluid and engine oil at approved recycling centers only. • Since contaminants in fluid may damage the hydraulic components, you will have to drain the fluid and flush the reservoir tank and hoses with kerosene whenever any repair work is performed on the tank, hydraulic pump or valve. For this job, contact your nearest service dealer. • • • HydraulicFilter • Change the hydraulic filter every 50 hours of operation. Use only a 10 micron hydraulic filter. Order part number 723-0405. • • BeamandSplittingWedge •. portion of the coupling half. (There must be space between end of the engine support bracket and coupling half). Tighten set screw. Install pump coupling half and key on pump shaft. Rotate coupling half until set screw faces opening in shield. Do not tighten set screw. Install nylon "spider" onto engine coupling half. Align pump coupling half with nylon "spider" by rotating engine using starter handle. Slide coupling half into place while guiding three mounting bolts through holes in pump support bracket. Secure with nuts and washers removed earlier. Set.010" to.060" clearance between the nylon "spider" and the engine coupling half by sliding a matchbook cover between the nylon "spider" and the engine coupling half and moving pump coupling half as needed. Secure pump coupling half with set screw. See Figure 11. NOTE: Make certain proper clearance before tightening set screw. Gear Pump Make certain to readjust the adjustment bolts so wedge moves freely, but no excess space exists between the wedge plate and the beam. Set I-- ---_ is obtained i_! -_ Screw__ "_ Nylon "Spider"-_............ __I[i HoseClamps • Remove three nuts and lock washers that secure the pump to the coupling shield. Two nuts are at the bottom corners and one is in the top center. Remove the pump. Rotate the engine by slowly pulling starter handle until engine coupling half set screw is visible. Loosen set screw using allen wrench and slide coupling half off engine shaft. Loosen set screw on pump coupling half and remove coupling half. Slide new engine coupling half onto the engine shaft until the end of the shaft is flush with the inner Check, before each use, if hose clamps on the suction hose (attached to the side of the pump) are tight. Check the hose clamps on the return hose at least once a season. Insert I_ _"o l Steel Coupling ___Halves Clearance _ _FF _i_,___ Engine Refer to the engine manual for all maintenance needs. E ngi_n e ................................ 1F_ Figure 11 FlexiblePumpCoupler Tires The flexible pump coupler is a nylon "spider" insert, located between the pump and the engine shaft. Over time, the coupler will harden and deteriorate. See sidewall of tire for recommended pressure. Maximum tire pressure under any circumstances p.s.i. Maintain equal pressure on all tires. Replace the coupler if you detect vibration or noise coming from the area between the engine and the pump. If the coupler fails completely, you will experience a loss of power. _ IMPORTANT: Never hit the engine shaft in any manner, as a blow will cause permanent damage to the engine. 11 is 30 p.s.i.) when seating WARNING: Excessive beads pressure may cause (over tire/rim 30 assembly to burst with force sufficient to cause serious injury. Refer to sidewall of tire for recommended pressure. SECTION7: STORINGYOURLOGSPLITTER Prepare your log splitter for storage at the end of the season or if the log splitter will not be used for 30 days or more. _ • • • in the fuel tank inside WARNING: Never of store building machine where with fumes fuel may reach an open flame or spark, or where ignition sources are present such as hot water and space heaters, furnaces, clothes dryers, stoves, electric motors, etc. • • • • Clean the log splitter thoroughly. NOTE: We do not recommend the use of pressure washers or garden hose to clean your unit. They may cause damage to the log splitter components or the engine. The use of water will result in shortened life and reduce serviceability. • • Drain fuel tank. Always drain fuel into approved container outdoors, away from open flame. Be sure that engine is cool before draining the fuel. Do not smoke while handling fuel. Start the engine and let it run until the fuel lines and carburetor are empty. Remove spark plug. Holding a rag over the cylinder hole, pour approximately 1/2 ounce (approximately one tablespoon) of engine oil into cylinder and crank slowly to distribute the oil. Replace spark plug. Do not store gasoline from one season to another. Replace your gasoline can if it starts to rust. Rust and/or dirt in the gasoline will cause problems. Store unit in a clean, dry area. Do not store next to corrosive materials, such as fertilizer. NOTE: If storing in an unventilated or metal storage shed, be certain to rustproof the equipment by coating with a light off or silicone. Wipe unit with an oiled rag to prevent rust, especially on the wedge and the beam. SECTION8: TROUBLESHOOTING Problem Engine fails to start Cause 1. 2. 3. 1. 2. 3. Connect wire to spark plug. Fill tank with clean, fresh gasoline. Move throttle lever to FAST position. 4. Move choke to CHOKE position. 5. Prime engine. 6. 6. Clean fuel line. 7. Faulty spark plug. 7. Clean, adjust gap, or replace. 1. 2. 1. 2. Connect and tighten spark plug wire. Move choke lever to OFF position. 3. Spark plug wire loose. Unit running on CHOKE, if so equipped. Blocked fuel line or stale fuel. 3. 4. 5. 6. Water or dirt in fuel system. Dirty air cleaner. Carburetor out of adjustment. 4. 5. 6. Clean fuel line; fill tank with clean, fresh gasoline Drain fuel tank. Refill with fresh fuel. Clean or replace air cleaner. See authorized service dealer. 1. 2. 3. Engine oil level low. Dirty air cleaner. Carburetor not adjusted properly. 1. 2. 3. Fill crankcase with proper oil. Clean or replace air cleaner. See authorized service dealer. 5. Engine overheats Spark plug wire disconnected. Fuel tank empty or stale fuel. Throttle control handle not in correct starting position. Choke, if equipped, not in CHOKE position. Engine (if equipped with a primer) not primed properly. Blocked fuel line. 4. Engine runs erratic Remedy NOTE: For repairs beyond those listed above, contact your nearest authorized service center. 12 HydraulicTroubleshooting Problem Cylinder rod will not move Slow cylinder shaft speed while extending and retracting. Cause Remedy 1. Broken drive shaft. 1. See authorized service dealer. 2. Shipping plugs left in hydraulic hoses. 2. 3. 3. 4. Set screws in coupling not adjusted properly. Loose shaft coupling. 5. 6. 7. 8. 9. 10. Gear sections damaged. Damaged relief valve. Hydraulic lines blocked. Incorrect oil level. Damaged directional valve. Blocked directional valve. 5. 6. Disconnect hydraulic hoses, remove shipping plugs, reconnect hoses. See operator's manual for correct adjustment. Correct engine/pump alignment as necessary. See authorized service dealer. See authorized service dealer. 7. 8. 9. Flush and clean hydraulic system. Check oil level. See authorized service dealer. 1. 2. Gear sections damaged. Excessive pump inlet vacuum. 3. 4. 5. 6. 7. 8. 1. Slow engine speed. Damaged relief valve. Incorrect oil level. Contaminated oil. Directional valve leaking internally. Internally damaged cylinder. Broken seals. 2. 4. 10. Flush and clean hydraulic system. 1. See authorized service dealer. 3. 4. 5. Make certain pump inlet hoses are clear and unblocked-use short, large diameter inlet hoses. See authorized service dealer. See authorized service dealer. Check oil level. 6. 7. 8. Drain oil, clean reservoir, and refill. See authorized service dealer. See authorized service dealer. Scored cylinder. 1. 2. See authorized service dealer. See authorized service dealer. 1. 2. 3. Small gear section damaged. Pump check valve leaking. Excessive pump inlet vacuum. 1. 2. 3. 4. 5. Incorrect oil level. Contaminated oil. 6. 7. Directional valve leaking internally. Overloaded cylinder. 4. 5. 6. See authorized service dealer. See authorized service dealer. Make certain pump inlet hoses are clear and unblocked. Check oil level. Drain oil, clean reservoir, and refill. See authorized service dealer. 8. Internally damaged cylinder. 8. Do not attempt to split wood against the grain. See authorized service dealer. Engine stalls during splitting 1. 2. Low horsepower/weak Overloaded cylinder. 1. See authorized service dealer. 2. Do not attempt to split wood against the grain or see authorized service dealer. Engine will not turn or stalls under low load conditions 1. 2. 3. 4. 5. Engine/pump misalignment. Frozen or seized pump. Low horsepower/weak engine. Hydraulic lines blocked. Blocked directional valve. Leaking pump shaft seal 1. Broken drive shaft. 1. 2. 3. 4. 5. 1. Correct alignment as necessary. See authorized service dealer. See authorized service dealer. Flush and clean hydraulic system. Flush and clean hydraulic system. See authorized service dealer. 2. 3. 4. 5. Engine/pump misalignment. Gear sections damaged. Poorly positioned shaft seal. Plugged oil breather. 2. 3. 4. Correct alignment as necessary. See authorized service dealer. See authorized service dealer. 5. Make certain reservoir is properly vented. Leaking cylinder Engine runs but wood will not split or wood splits too slowly engine. 2. 7. NOTE: For repairs beyond those listed here, contact your nearest authorized service center. 13 SECTION9: PARTSLISTFORMODEL570 1\ 16 29 26 28 \ 16 13 31 18 / C 65 A I 70 45 74 68 \\\\ 75 \ \\\ 77 \ 72 \ For Taillight. Ground Wire 78 48.. 82\ \ 81 \ 79 _ t If equipped 14 Model570 Ref. No. Part No. Ref. No. Part Description Part No. Part Description 1. 2. 718-0769 727-0634 Hydraulic Cylinder 44. 45. 781-0686 681-0161 Log Tray Bracket Hydraulic Tube 3. 710-1018 Hex Cap Screw 1/2-20 x 2.75 46. 726-0214 Push Cap 4. 737-0192 732-0583 5. 781-0526t 90 Degree Solid Adapter Hose Guard 47. 48. 710-0521 Compression Spring Hex Bolt 3/8-16 x 3" 6. 737-0153 Return Elbow 49. 781-0690 Lock Rod 7. 718-0481 Control Valve 50. 737-0348A 8. 9. 781-0538t 710-1806 Hose Guard Hex Cap Screw 1/2-13 x 3.25 51. 52. 710-1338 710-0654A Vented Dipstick Hex Screw 5/16-24 x 3.25 Hex Washer Screw 3/8-16 x 1.0 10. 719-0550 Wedge Assembly 53. 634-0186 11. 737-0238 54. 712-0359 Wheel Assembly Slotted Nut 3/4-16 12. 712-0239 Nipple Pipe 1/2-14 Lock Nut 1/2-20 55. 714-0162 Cotter Pin 734-0873 HubCap Frame Assembly 13. 712-0711 Hex Jam Nut 3/8-24 56. 14. 710-0459A Hex Cap Screw 3/8-24 x 1.5 57. 719-0353 Coupling Shield 15. 16. 781-0351 710-1260A Adjustable Gib Hex Washer Screw 5/16-18 x.75 58. 59. 714-0122 718-0686 Square Key 3/16" x.75 17. 781-1054 Cylinder Support Bracket 60. 712-3057 18. 681-0162 736-0119 736-0300 Beam Assembly Flat Washer.406 ID x.875 OD 61. 19. 62. 781-0097 20. 710-0874 Hex Washer Screw, 5/16-18 x 1.25 63. 727-0633 Rear Coupling Support Bracket Hose 21. 781-1048 718-0683 Gear Pump (1 lgpm) 781-0790 736-0921 Dislodger Back Plate Lock Washer 1/2 64. 22. 23. 65. 66. 737-0329 727-0502 45 Degree Elbow 24. 737-0312 Adapter 3/4-14 67. 781-0788 25. 727-0443 Return Hose 3/4" ID x 44" Lg. 68. 747-1261 Tongue Tube Assembly Latch Rod 26. 726-0132 Hose Clamp 5/8" 69. 781-1045 Latch 27. 737-0316 Filter Housing 70. 715-0120 Spiral Pin 28. 731-2499 71. 732-3127 29. 30. 710-1238 712-3010 Steel Fender (MTD Red) Hex Washer Screw 5/16-18 x.875 Hex Nut 5/16-18 72. 74. 736-0169 710-0944 Spring Compression L-Washer 3/8" 31. 736-0119 Lock Washer 5/16 75. 736-0262 Hex Cap Screw 3/8-16 x 4.25 Flat Washer.385 ID x.870 OD 32. 781-1024 76. 713-0433 Chain 77. 750-0497 Spacer.375 ID x.625 OD Hitch Coupling Flexible Coupling Hex Nut, 5/16-24 Lock Washer 5/16" ID High Pressure Hydraulic Hose 33. 723-0405 Fender Mounting Bracket Oil Filter 34. 710-3038 Hex Cap Screw, 5/16-18 x.875 78. 681-04030 35. 681-0164 Light Bracket Assembly - LH 79. 712-0375 Hex Lock Nut 3/8-16 36. 625-0062 711-0813 Clevis Pin 37. 38. 711-1587 736-0116 Taillight Clevis Pin Flat Washer.635 ID x.93 OD 80. 81. 82. 736-0185 732-0194 39. 714-0470 Cotter Pin 83. 781-0789 Spring Pin Jack Stand 40. 781-1027 Light Bracket - RH 84. 736-0351 Flat Washer.760 ID x.500 OD 85. 736-0371 Flat Washer 86. 712-3022 Hex Lock Nut, 1/2-13 87. 712-3008 Jam Nut, 3/8-16 41. 710-3097 42. 712-3017 Carriage Bolt 3/8-16 x 1.0 Hex Nut, 3/8-16 43. 781-0682 Log Tray tlfequipped 15 Flat Washer.375 ID x.738 OD MANUFACTURER'S LIMITED WARRANTY FOR: ® The limited warranty set forth below is given by Troy-Bilt LLC with respect to new merchandise purchased and used in the United States, its possessions and territories. "Troy-Bilt" warrants this product against defects in material and workmanship for a period of two (2) years commencing on the date of original purchase Troy-Bilt: batteries, belts, blades, blade adapters, Troy-Bilt LLC at P.O. Box 361131, Cleveland, Ohio 44136-0019, or call 1-800-520-5520 or 1330-558-7220, or log on to our Web site at. This limited warranty does not provide coverage in the following cases: a. b. c. d. The engine or component parts thereof. These items may carry a separate manufacturer's warranty. Refer to applicable manufacturer's warranty for terms and conditions. dealer. e. f. g. Troy-Bilt does not extend any warranty for products sold or exported outside of the United States, its possessions and territories, except those sold through Troy-Bilt's authorized channels of export distribution. Replacement parts that are not genuine Troy-Bilt Troy-Bilt. During the period of the warranty, the exclusive remedy is repair or replacement of the product as set forth above. The provisions as set forth in this warranty provide the sole and exclusive remedy arising from the sale. Troy-Bilt Purchase to obtain warranty coverage. Proof of Troy-Bilt LLC, P.O.BOX361131CLEVELAND,OHIO44136-0019; Phone:1-800-520-5520,1-330-558-7220
http://manualzz.com/doc/1726603/mtd-27-ton-operator-s-manual
CC-MAIN-2018-26
refinedweb
6,659
68.87
See also Berkeley DB notes on AIX Normal build command for AIX using xlC_r and xlc_r and no options: sh buildall.sh -x xlC_r -c xlc_r I get a link error regarding open and open64 on AIX 5.3 while linking against Berkeley DB. On AIX 5.3, you may see a link failure while building Berkeley DB XML against Berkeley DB 6.1.x with "open" or "open64" in the message. This may occur when trying to link dbxml_dump or dbxml_load. This can be worked around by adding this line to db-6.1.x/build_unix/db.h: #include <fcntl.h> It should be added near the top of the file, before the include of db.h. After this, rebuild Berkeley DB by changing to the directory db-6.1.x/build_unix and running "make clean; make; make install." I get an error regarding truncate64 and stat64 on AIX 5.3 while building Berkeley DB. On AIX 5.3, you may see a compilation failure while building Berkeley DB 4.3.29 of the nature, "...truncate64 is not a member of..." or "...stat64 is not a member of..." This can be worked around by adding this line to db-4.3.29/dbinc/db.in: #include <unistd.h> It should be added just after the line: #ifndef __NO_SYSTEM_INCLUDES. After this, re-run the buildall.sh script, and be sure that Berkeley DB configures itself again, regnerating the file, db-4.3.29/build_unix/db.h.
https://docs.oracle.com/cd/E17276_01/html/programmer_reference_xml/aix_build.html
CC-MAIN-2018-17
refinedweb
246
79.46
Enumerating Instances of SQL Server (ADO.NET) SQL Server permits applications to find SQL Server instances within the current network. The SqlDataSourceEnumerator class exposes this information to the application developer, providing a DataTable containing information about all the visible servers. This returned table contains a list of server instances available on the network that matches the list provided when a user attempts to create a new connection, and expands the drop-down list containing all the available servers on the Connection Properties dialog box. The results displayed are not always complete.: All of the available servers may or may not be listed. The list can vary depending on factors such as timeouts and network traffic. This can cause the list to be different on two consecutive calls. Only servers on the same network will be listed. Broadcast packets typically won't traverse routers, which is why you may not see a server listed, but it will be stable across calls. Listed servers may or may not have additional information such as IsClustered and version. This is dependent on how the list was obtained. Servers listed through the SQL Server browser service will have more details than those found through the Windows infrastructure, which will list only the name. SQL Server provides information for the SqlDataSourceEnumerator through the use of an external Windows service named SQL Browser. This service is enabled by default, but administrators may turn it off or disable it, making the server instance invisible to this class. The following console application retrieves information about all of the visible SQL Server instances and displays the information in the console window. using System.Data.Sql; class Program { static void Main() { // Retrieve the enumerator instance and then the data. SqlDataSourceEnumerator instance = SqlDataSourceEnumerator.Instance; System.Data.DataTable table = instance.GetDataSources(); // Display the contents of the table. DisplayData(table); Console.WriteLine("Press any key to continue."); Console.ReadKey(); } private static void DisplayData(System.Data.DataTable table) { foreach (System.Data.DataRow row in table.Rows) { foreach (System.Data.DataColumn col in table.Columns) { Console.WriteLine("{0} = {1}", col.ColumnName, row[col]); } Console.WriteLine("============================"); } } } SQL Server and ADO.NET ADO.NET Managed Providers and DataSet Developer Center
https://msdn.microsoft.com/en-us/library/a6t1z9x2.aspx
CC-MAIN-2017-09
refinedweb
362
51.75
Survey period: 12 Mar 2012 to 19 Mar 2012 The Windows 8 consumer preview is out. Taken if for a test drive yet? Marc A. Brown wrote:It seems to be faster than Vista Slacker007 wrote:What do you like most about Metro, just curious? public class Naerling : Lazy<Person>{ public void DoWork(){ throw new NotImplementedException(); } } Lakamraju Raghuram wrote:unless you are ready to format/burn your hard disk and particularly if you are using XP. Mauro Gagna wrote:The answer is virtualization. General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://codeproject.freetls.fastly.net/Surveys/1266/Have-you-downloaded-the-Windows-8-preview?msg=4187386#xx4187386xx
CC-MAIN-2022-05
refinedweb
113
62.98
The program I wrote below is supposed to play a guessing game based on the computer picking a random letter. All it does is loop twice then say you win. Also the if higher/lower portion is not processing properly. eg: (depending on if the if statement is guess < randomLetter) guess a - the letter is lower, guess a - You win. I have spent hours looking thru this site and others for help for most of my problems but I generally can't find the answer I need or am just so C++ disabled I can't see the answer in front of me. #include <iostream> #include <string> #include <algorithm> #include <cstdlib> #include <ctime> using namespace std; int main() { //declaration of variables int number = 0; string letters = "abcdefghijklmnopqrstuvwxyz"; string randomLetter = ""; string guess = ""; //initialize the random number generator srand(time(NULL)); //assign a the random value to number number = 0 + rand() % (26 - 10 + 1); //find the letter that coresponds to the number randomLetter.assign(letters, number, 0); //get users guess cout << "The computer has secretly picked a letter." << endl; cout << "Try to guess the letter." << endl; cout << "Your guess is: "; getline(cin, guess); transform(guess.begin(), guess.end(), guess.begin(), tolower); // verify that user entered only one letter if (guess.size() == 1) { //repeat code till users guess is correct while (guess != randomLetter) { //display higher or lower clue if (guess < randomLetter) cout << "The letter you are looking for is lower in the alphabet." << endl; else cout << "The letter you are looking for is higher in the alphabet." << endl; //end ifs cout << "Please try again!" << endl; cout << "Your guess is: "; getline(cin, guess); transform(guess.begin(), guess.end(), guess.begin(), tolower); }//end while cout << "That is correct!" << endl; cout << " YOU WIN !!! " << endl; } else cout << "Please enter one letter only."; return 0; }// end of main function
http://cboard.cprogramming.com/cplusplus-programming/5454-help-new-cplusplus-most-other-programming-printable-thread.html
CC-MAIN-2015-40
refinedweb
301
64.81
batou 1.0b18. 1.0b12 (2013-11-04) - Added branch argument to mercurial.Clone. Setting a branch automatically updates to the branch head on deploy. This is mostly useful for development environments. - Create the ‘secrets’ directory if it doesn’t exist, yet. Also, disallow editing secret files for non-existing environments. - Support continuing remote bootstrapping if we failed after creating the initial remote directory but were unable to use Mercurial. - #12898: build.Configure component was broken when using the default prefix. 1.0b11 (2013-10-17) #12897: Use non-SSL pypi mirror for downloading virtualenv to fix tests failing randomly on machines that (for some reason) can’t validate PyPI’s certificate. #12911: Ensure that we can configure file owners when they don’t exist during configure phase yet. #12912: Fix untested and broken file ownership management. #12847: Clean up unicode handling for File and Content components and templating. #12910: Remote deployments failed when using bundles for transfers if no changes needed bundling. #12766: Allow bootstrapping a batou project in an existing directory to support migration from 0.2. #12283: Recognize files as ‘is_template’ by default. Auto-detect source files in the definition directory if they have the same basename. This is what you want in 99% of all cases. Explicitly stating either the ‘content’ or ‘source’ parameter disables auto-detection. Now you can write this: File(‘foo’) and have components/x/foo recognized as the source file and handled as a template. Use ConfigParser instead of configobj which is effectively unmaintained (see) and support lists separated by newlines in addition to commas. 1.0b10 (2013-09-27) - Package our own virtualenv instead of depending on the system-installed one. This should alleviate troubles due to old virtualenv versions that package distribute, which causes conflicts with recent setuptools versions (#12874). - Update supervisor version to 3.0. 1.0b9 (2013-08-22) - Update Package component so it ignores installed packages when installing. This way, we actually install setuptools even when distribute is installed. (Otherwise it’s a no-op since distribute tells pip that setuptools is already satisfied). - Fix update process: wrong call to old ‘.batou/bin/batou’ failed and early bootstrapping would downgrade temporarily which is confusing and superfluous. Fixes #12739. 1.0b8 (2013-08-17) - Remove superfluous mkdir call during remote bootstrap. - Make batou init print that it’s working. Bootstrapping can take a while, so at least signal that something’s going on. 1.0b7 (2013-08-17) - Depend on Python2.7 to be available on the PATH during early bootstrap. Otherwise our chances to get a 2.7 virtualenv are pretty small, too. - Improve project template: ignore the work/ directory by default. 1.0b6 (2013-08-17) - More MANIFEST inclusions: bootstrap-template. 1.0b5 (2013-08-17) - Improve MANIFEST so we actually package the init template and other generated files, like version.txt and requirements.txt. 1.0b4 (2013-08-17) - Provide a simple project-creation command, both for pip-installed batou’s as well as spawning new projects from existing ones. Fixes #12730 - Fix #12679: make timeouts configurable. - Removed re-imports from batou main module to support light-weight self-installation and bootstrapping. I.e. ‘from batou import Component’ no longer works. - Provide a single main command together with a ‘bootstrap’ wrapper that you can check into your project and that is maintained during updates automatically. It also provides fully automatic bootstrapping, installation, upgrading and other maintenance. - Fix Python package installation version check. - Don’t use bin/buildout bootstrap command anymore. PIP installs a sufficient bin/buildout so buildout can do the rest internally. - Install zc.buildout during bootstrapping phase using PIP to avoid bootstrap.py problems. - Shorten URLs in the Build component to their basename. - Add ‘assert_cmd’ API to support simpler assertions for verify when needing to check the result of an external command. - Switch to asking pip installing eggs instead of flat installations as namespaces seem to collide otherwise. - Remove non-functional deprecated ‘md5sum’ attribute. - Components are context managers now. If you provide __enter__ it will be called before verify() and if you provide __exit__ this will be called after update (always - even if update isn’t actually called). This allows you to manage temporary state on the target system more gracefully. See the DMGExtractor for an example. - Major refactoring of internal data structures to simplify and improve test coverage. Some breakage to be expected: - Components do not have a-edit’ wrapper script to allow re-encrypting without re-entering the editor. - Consistently switch to using setuptools. - Fix #12399: incorrect stat attributes for Owner and Group - Add exclude parameter to Directory component. - Add env parameter to Component.cmd() (and corresponding build_environment parameter to the Build component) to allow adding/overriding environment variables. 1.0b3 (2013-07-09) - Enable logging in the remote core to see what’s going on on the remote side. - Try to better format exceptions from the remote side. - Try harder to get virtualenv back into a working state. - Allow remote deployments from root of repository. - Make PIP management more robust. 1.0b2 (2013-07-09) - Add component to manage PIP within a virtual env. - Add component to manage packages with PIP within a virtual env. - Restructure buildout component to make it more robust regarding setuptools/distribute preparation. Also remove usage of bootstrap completely as we rely on virtualenv anyway. 1.0b1 (2013-07-09) - Apply semantic versioning: initial development is over, so this is 1.0 now. - Major revamp of secrets management: - switch to GPG (instead of aespipe) - turn secrets into a core feature, removing the need for a special component - Add ‘–single’ to suppress parallel bootstrapping.18.xml
https://pypi.python.org/pypi/batou/1.0b18
CC-MAIN-2017-22
refinedweb
939
51.14
By: Jan Zumwalt - October 15, 2013 Introduction I have a new project that needed good 2D graphics. Even though SDL1, EGL, GLES, GLES2, VG are preloaded with the "Raspian" version of RPI that I am using, I was interested in scaling and rotation and wanted to checkout SDL2. A google search turns up almost nothing so I found myself blazing a trail. Neither the "synaptic" or "apt-get" repositories have the SDL2 library at the time of this writing. Oddly (at least for Raspbian) there does not seem to be any issue with the installation. The only comprehnesive example I found was this utube video that includes links to a custom version that uses "CodeBlocks". I could not get it working on my Raspbian (Weezy Debian) version. I was desperate and was not optimistic - but once you know one trick, it is REALLY EASY to get SDL2 running! Get Loaded 1) Download - Get the linux generic release and unzip it in any directory. The really neat thing is the necessary Debian install files are provided and will setup automatically. 2) Install - Navigate to the Debian subdirectory. The instructions are in the text file "install.txt". All that is needed is to go to the unzip directory as root and run the following command './configure; make; make install'. It took about 50min to 1hr to do it's thing. 3) Done - It's ready... except ... I spent the next two days not getting anything to compile. I eventually found out that SDL1 and SDL2 expect a custom made config program to pass the compiler library and include file information. This program is set up during the install process. 4) - Compile - Use these commands for SDL1 or SDL2 compiles For sdl1 progs use... Code: Select all gcc `sdl-config --cflags --libs` -v -o myprog main.c > compile.log 2>&1 Code: Select all gcc `sdl2-config --cflags --libs` -v -o myprog main.c > compile.log 2>&1 (Note: I tried for several days to manually provide the LIB and INC settings but I was unable to get anything to compile. You can run the "sdl2-config --cflags --libs" from a terminal and see what is being sent to the gcc compiler but when I tried to cut and past the same thing in a manual compile, it just would not work for me.) Goodies! I quickly found that SDL needs several support libraries to be able to use images and fonts. The packages are located at SDL2_image-2.0.0 - bmp is built in but you need this for jpg, png etc. SDL2_mixer-2.0.0 - audio support SDL2_net-2.0.0 - network support SDL2_ttf-2.0.12 - trutype font support SDL_rtf-0.1.0 - rich text format support The install instructions for each package are the same so I have provided just one example. Image Support, SDL2_image-2.0.0 - instructions are for 'root' user (you can also use sudo) 1) Download the package from 2) unzip 3) cd to the working directory 4) autogen.sh 5) ./configure && make && make install When you are done, you will receive this message I just include the directories with the -L and -I compiler commandsI just include the directories with the -L and -I compiler commandsLibraries <program> <source> >> compile.log 2>&1 That's it, your done! A discussion of various options in using these packages can be found at ... 92#p428692 Development Environment For completeness I am providing my actual compiler script "compile.sh" and ".desktop" link so it can all be compiled from a text or xterm terminal. This file is called "compile.sh" and can be run from a terminal or "desktop" link. Code: Select all #!/bin/bash printf "\n" printf "\t+----------------------------+\n" printf "\t| Compile main.c |\n" printf "\t| SDL2 test |\n" printf "\t+----------------------------+\n\t" printf "\n\t Start Compile: "; date date > compile.log; printf "\n" >> compile.log # sdl2_test main.c >> compile.log 2>&1 # run program if [ -f sdl2_test ]; then ./sdl2_test printf "\tEnd of program... \n" else cat ./compile.log printf "\t *** There where errors, compile aborted. ***\n" fi # required so xterm will not close printf "\n\t%s" "press any key to exit: " read -n 1 This is the "compile.desktop" GUI link file to be able to click on and compile from a xterm. Code: Select all [Desktop Entry] Name=Compile Comment=Compiles C program Type=Application Encoding=UTF-8 Terminal=false StartupNotify=true Icon=/usr/share/icons/gear_lnk.png Exec=xterm -sb -rightbar -fg Beige -bg "rgb:00/10/20" -e "cd /root/<path to program> && ./compile.sh" Here is the "main.c" SDL2 test program... Code: Select all #include <SDL2/SDL.h> // +--------------------------------------------------+ // | Minimal SDL2 Test Program | // | Creates a graphics window then shuts it down | // | after 10 seconds. | // +--------------------------------------------------+ int main ( int argc, char** argv ) { SDL_Window *window; SDL_Init(SDL_INIT_VIDEO); window = SDL_CreateWindow( "SDL2 TEST PROGRAM", // window title SDL_WINDOWPOS_CENTERED, // the x position of the window SDL_WINDOWPOS_CENTERED, // the y position of the window 400,400, // window width and height SDL_WINDOW_RESIZABLE // create resizeable window ); if(window == NULL) // if no win, show error { printf("Could not create SDL2 test window: %s\n", SDL_GetError()); return 1; } SDL_Delay(10000); // 10sec delay SDL_DestroyWindow(window); // kill window SDL_Quit(); // clean up return 0; // return success } Lets all get together and start sharing our SDL2 programs!!! Test Programs Once you install SDL2 a comprehensive set of test routines are provided with the core package - but no compile info is provided. It was at this point that I ran into my problems until I ran across the need to pass LIB & INC info using the "sdl-config" program. I created a script that tries to compile all of them. Some will compile while others don't. I have not looked at the problems but I expect to find they need the "image" and "font" packages. compile_all.sh Code: Select all #!/bin/bash # +--------------------------------------+ # | Compile all SDL2 Test Programs | # +--------------------------------------+ while read F ; do printf "\n" printf "\t+----- Compileing $F.c -----+\n" printf "\n\t Start time: " date printf "\n" gcc `sdl2-config --cflags --libs` -o $F $F.c printf "\n\t End time: " date printf "\n" done <./filelist.txt Code: Select all checkkeys loopwave testatomic testaudioinfo testdrawchessboard testerror testfile testgamecontroller testgesture testgl2 testgles testhaptic testiconv testjoystick testkeys testloadso testlock testmessage testmultiaudio testoverlay2 testplatform testpower testresample testrumble testsem testshader testshape testspriteminimal teststreaming testthread testtimer testver torturethread These programs don't compile. They also did not compile after loading Image and Font support. testautomation_audio testautomation testautomation_clipboard testautomation_events testautomation_keyboard testautomation_main testautomation_mouse testautomation_pixels testautomation_platform testautomation_rect testautomation_render testautomation_rwops testautomation_sdltest testautomation_stdlib testautomation_surface testautomation_syswm testautomation_timer testautomation_video testdraw2 testime testintersections testnative testnativew32 testnativex11 testrelative testrendercopyex testrendertarget testscale testsprite2 testwm2 To compile a single program use this script compile1.sh Code: Select all #!/bin/bash # check for 2 arguments ARGS=2 # number of arguments if [ $# -ne $ARGS ]; then printf "\t +---------------------------------------+\n" printf "\t | Compile SDL2 Program |\n" printf "\t | |\n" printf "\t | compile1.sh ver Oct 1, 2013 |\n" printf "\t | |\n" printf "\t | Usage: |\n" printf "\t | compile1.sh <program> <source> |\n" printf "\t | |\n" printf "\t | Example: |\n" printf "\t | compile1.sh myprog main.c |\n" printf "\t +---------------------------------------+\n" exit 1 # general error fi printf "\n" printf "\t+----- Compileing $2 -----+\n" printf "\n\t Start time: "; date # $1 $2 >> compile.log 2>&1 printf "\n\t End time: "; date printf "\n" exit 0 According to the test programs there are some problems with programs trying to use OpenGL, OpenGles, and Heptic that is pre-loaded on the Raspbian. So, it would be nice to integrate these other common graphics packages along with the other major SDL2 packages - fonts, images, rich text, mixer, etc at
https://lb.raspberrypi.org/forums/viewtopic.php?t=58180&p=544740
CC-MAIN-2019-18
refinedweb
1,268
64.81
Details - Type: Bug - Status: In Progress (View Workflow) - Priority: Critical - Resolution: Unresolved - Component/s: mailer-plugin - Labels:None - Environment:Docker image jenkins/jenkins:2.121.2 - Similar Issues: Description Hello - Jenkins 2.121.2 (official Docker image) - Mailer Plugin 1.21 Sometimes pipelines in Jenkins fails with next exception: javax.activation.UnsupportedDataTypeException: no object DCH for MIME type multipart/mixed; boundary="----=_Part_232_854096954.1535491539067" at javax.activation.ObjectDataContentHandler.writeTo(DataHandler.java:896) at javax.activation.DataHandler.writeTo(DataHandler.java:317) at javax.mail.internet.MimeBodyPart.writeTo(MimeBodyPart.java:1476) at javax.mail.internet.MimeMessage.writeTo(MimeMessage.java:1772) at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1099) Caused: javax.mail.MessagingException: IOException while sending message; nested exception is: javax.activation.UnsupportedDataTypeException: no object DCH for MIME type multipart/mixed; boundary="----=_Part_232_854096954.1535491539067" at com.sun.mail.smtp.SMTPTransport.sendMessage(SMTPTransport.java:1141) at javax.mail.Transport.send0(Transport.java:195) at javax.mail.Transport.send(Transport.java:124) at org.jenkinsci.plugins.workflow.steps.MailStep$MailStepExecution.run(MailStep.java:142) at org.jenkinsci.plugins.workflow.steps.MailStep$MailStepExecution.run(MailStep.java:128)) Example of pipeline: node { mail(to: "[email protected]", subject: "test", body: "test") } Settings: - SMTP server: mail.example.com - Default user e-mail suffix: @example.com Restart of Jenkins temporary solve the issue. Please fix the bug. Thank you Attachments Issue Links Activity It seems to be an issue with the context class loader. By running lots of jobs, I could see that the thread's current class loader was null when the problem occurred. I can re-create the issue exactly by running Thread.currentThread().setContextClassLoader(null); just before the call to Transport.send(mimeMessage); Still investigating... I've been trying to track this down, but have run out of time. It definitely seems to be the issue mentioned above that the contextClassLoader is set to null, but where this is getting set to null I don't know. I've traced this back up the stack and all the threads that it's operating on are spawned by the pipeline plugins (when a thread spawns another thread, the contextClassLoader is inherited from the parent). I could see that the threads in the pipeline plugins had the contextClassLoader set to `null` sometimes, but I couldn't track down where in its myriads of threads and threadpools the root of the issue where it was setting the contextClassLoader to null that triggered this issue (there are lots of instances where it does set it to null, but a lot of those are 'valid' given the context. Unfortunately, I can only see this issue on our build infrastructure which is hard to debug (attaching a debugger will basically lock up the Jenkins instance) as there is so much going on. Mykola Ulianytskyi - are you able to recreate the issues in a fresh/newly installed instance that I could run locally to get a better handle on what's going on? Remote debugging isn't working for me. What I was attempting to do was to launch Jenkins with debug flags and attach a debugger to Thread.setContextClassLoader and get the debugger to break if it is setting it no null. As I said, this happened so much it was hard to identify where this was doing it 'wrongly'. Maybe Mykola Ulianytskyi could have some luck tracing it down? Some possibly useful stack traces I got are attached if they are useful to anyone. I got lots where it was in the sun.net. package, but these seem valid given the comments here and given the fact that that thread is not re-used. As a workaround, I have raised as a workaround, so you may find a local build of this deployed to your Jenkins instance will be a workaround for the issue (it's not an actual fix). It detects if the contextClassLoader is null before attempting to send the email, and then sets it to the default one in Jenkins. I filed a PR that would address one potential way this could happen in a slightly more general way than the workflow-basic-steps workaround in, but I still do not understand the actual root cause. I would be very interested to know if anyone who is experiencing this problem still sees it in after updating to version 2.17 of the Pipeline Step API Plugin, which should be available from the update center in a few hours. Devin Nusbaum I am seeing the bug at the moment with the Pipeline Step API 2.16. I'll try to bump soon. Mailer plugin v 1.22 but same LTS version (2.121.3). A workaround we found is to add the MIME types explicitly, like so import javax.activation.MailcapCommandMap; import javax.activation.CommandMap; @NonCPS def setupMail(){ MailcapCommandMap mc = (MailcapCommandMap) CommandMap.getDefaultCommandMap(); mc.addMailcap("text/html;; x-java-content-handler=com.sun.mail.handlers.text_html"); mc.addMailcap("text/xml;; x-java-content-handler=com.sun.mail.handlers.text_xml"); mc.addMailcap("text/plain;; x-java-content-handler=com.sun.mail.handlers.text_plain"); mc.addMailcap("multipart/*;; x-java-content-handler=com.sun.mail.handlers.multipart_mixed"); mc.addMailcap("message/rfc822;; x-java-content- handler=com.sun.mail.handlers.message_rfc822"); } node { setupMail() mail( from: '[email protected]', replyTo: '[email protected]', to: [email protected], subject: "Hi there MIME", body: "It Works!") } Hope this helps Mykola Ulianytskyi - we're having the same behaviour on our build platform - maybe we can share some details to diagnose the problem. I'm having trouble re-creating it locally on my dev machine and finding it hard to trace? Are you running on build agents, or building on the master itself? Was this a fresh install of the docker image or have you got any other plugins installed too? How often do you get the issues? I'm not seeing it locally on a fresh jenkins install, but we see it ~5% of the time on our long-running jobs on our CI system. We run one-shot agents so hard to trace exactly what the infrastructure is that it fails on when it fails to trace the issue. As it's our main CI system, attaching a debugger isn't possible
https://issues.jenkins.io/browse/JENKINS-53305
CC-MAIN-2021-21
refinedweb
1,037
50.43
Eric W. Biederman wrote:>> I'm fine with such situations, since we need containers mostly, but what makes>> me really afraid is that it introduces hard to find/fix/maintain issues. I have no>> any other concerns.> > Hard to find and maintain problems I agree should be avoided. There are only two> ways I can see coping with the weird interactions that might occur.>> 1) Assert weird interactions will never happen, don't worry about it,> and stomp on any place where they can occur. (A fully isolated container approach).> > 2) Assume weird interactions happen and write the code so that it simply> works if those interactions happen, because for each namespace you have> made certain the code works regardless of which namespace the objects are> in.> > The second case is slightly harder. But as far as I can tell it is more robust> and allows for much better incremental development.hmm, slightly, I would say much harder and these weird interactions arevery hard to anticipate without some experience in the field. We couldcontinue on arguing for ages without making any progress.let's apply that incremental development approach now. Let's work on simplenamespaces which would make _some_ container scenarios possible and notall. IMHO, that would mean tying some namespaces together and find a way tounshare them safely as a whole. Get some experience on it and then work onunsharing some more independently for the benefit of more use casescenarios. I like the concept and I think it will be useful.just being pragmatic, i like things to start working in simple cases beforeover optimizing them.cheers,C.-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to [email protected] majordomo info at read the FAQ at
http://lkml.org/lkml/2006/7/13/231
CC-MAIN-2014-35
refinedweb
297
56.45
1 /*2 * @(#)ReentrantLock.java 1.7 04/07/133 *4 * Copyright 2004 Sun Microsystems, Inc. All rights reserved.5 * SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.6 */7 8 package java.util.concurrent.locks;9 import java.util.*;10 import java.util.concurrent.*;11 import java.util.concurrent.atomic.*;12 13 /**14 * A reentrant mutual exclusion {@link Lock} with the same basic15 * behavior and semantics as the implicit monitor lock accessed using16 * <tt>synchronized</tt> methods and statements, but with extended17 * capabilities.18 *19 * <p> A <tt>ReentrantLock</tt> is <em>owned</em> by the thread last20 * successfully locking, but not yet unlocking it. A thread invoking21 * <tt>lock</tt> will return, successfully acquiring the lock, when22 * the lock is not owned by another thread. The method will return23 * immediately if the current thread already owns the lock. This can24 * be checked using methods {@link #isHeldByCurrentThread}, and {@link25 * #getHoldCount}. 26 *27 * <p> The constructor for this class accepts an optional28 * <em>fairness</em> parameter. When set <tt>true</tt>, under29 * contention, locks favor granting access to the longest-waiting30 * thread. Otherwise this lock does not guarantee any particular31 * access order. Programs using fair locks accessed by many threads32 * may display lower overall throughput (i.e., are slower; often much33 * slower) than those using the default setting, but have smaller34 * variances in times to obtain locks and guarantee lack of35 * starvation. Note however, that fairness of locks does not guarantee36 * fairness of thread scheduling. Thus, one of many threads using a37 * fair lock may obtain it multiple times in succession while other38 * active threads are not progressing and not currently holding the39 * lock.40 * Also note that the untimed {@link #tryLock() tryLock} method does not41 * honor the fairness setting. It will succeed if the lock42 * is available even if other threads are waiting.43 *44 * <p> It is recommended practice to <em>always</em> immediately45 * follow a call to <tt>lock</tt> with a <tt>try</tt> block, most46 * typically in a before/after construction such as:47 *48 * <pre>49 * class X {50 * private final ReentrantLock lock = new ReentrantLock();51 * // ...52 *53 * public void m() { 54 * lock.lock(); // block until condition holds55 * try {56 * // ... method body57 * } finally {58 * lock.unlock()59 * }60 * }61 * }62 * </pre>63 *64 * <p>In addition to implementing the {@link Lock} interface, this65 * class defines methods <tt>isLocked</tt> and66 * <tt>getLockQueueLength</tt>, as well as some associated67 * <tt>protected</tt> access methods that may be useful for68 * instrumentation and monitoring.69 *70 * <p> Serialization of this class behaves in the same way as built-in71 * locks: a deserialized lock is in the unlocked state, regardless of72 * its state when serialized.73 *74 * <p> This lock supports a maximum of 2147483648 recursive locks by75 * the same thread. 76 *77 * @since 1.578 * @author Doug Lea79 * 80 */81 public class ReentrantLock implements Lock , java.io.Serializable {82 private static final long serialVersionUID = 7373984872572414699L;83 /** Synchronizer providing all implementation mechanics */84 private final Sync sync;85 86 /**87 * Base of synchronization control for this lock. Subclassed88 * into fair and nonfair versions below. Uses AQS state to89 * represent the number of holds on the lock.90 */91 static abstract class Sync extends AbstractQueuedSynchronizer {92 /** Current owner thread */93 transient Thread owner;94 95 /**96 * Perform {@link Lock#lock}. The main reason for subclassing97 * is to allow fast path for nonfair version.98 */99 abstract void lock();100 101 /** 102 * Perform non-fair tryLock. tryAcquire is103 * implemented in subclasses, but both need nonfair104 * try for trylock method105 */106 final boolean nonfairTryAcquire(int acquires) { 107 final Thread current = Thread.currentThread();108 int c = getState();109 if (c == 0) {110 if (compareAndSetState(0, acquires)) {111 owner = current;112 return true;113 }114 }115 else if (current == owner) {116 setState(c+acquires);117 return true;118 }119 return false;120 }121 122 protected final boolean tryRelease(int releases) {123 int c = getState() - releases;124 if (Thread.currentThread() != owner)125 throw new IllegalMonitorStateException ();126 boolean free = false;127 if (c == 0) {128 free = true;129 owner = null;130 }131 setState(c);132 return free;133 }134 135 protected final boolean isHeldExclusively() {136 return getState() != 0 && owner == Thread.currentThread();137 }138 139 final ConditionObject newCondition() {140 return new ConditionObject();141 }142 143 // Methods relayed from outer class144 145 final Thread getOwner() {146 int c = getState();147 Thread o = owner;148 return (c == 0)? null : o;149 }150 151 final int getHoldCount() {152 int c = getState();153 Thread o = owner;154 return (o == Thread.currentThread())? c : 0;155 }156 157 final boolean isLocked() {158 return getState() != 0;159 }160 161 /**162 * Reconstitute this lock instance from a stream163 * @param s the stream164 */165 private void readObject(java.io.ObjectInputStream s)166 throws java.io.IOException , ClassNotFoundException {167 s.defaultReadObject();168 setState(0); // reset to unlocked state169 }170 }171 172 /**173 * Sync object for non-fair locks174 */175 final static class NonfairSync extends Sync {176 /**177 * Perform lock. Try immediate barge, backing up to normal178 * acquire on failure.179 */180 final void lock() {181 if (compareAndSetState(0, 1))182 owner = Thread.currentThread();183 else184 acquire(1);185 }186 187 protected final boolean tryAcquire(int acquires) { 188 return nonfairTryAcquire(acquires);189 }190 }191 192 /**193 * Sync object for fair locks194 */195 final static class FairSync extends Sync {196 final void lock() { 197 acquire(1); 198 }199 200 /**201 * Fair version of tryAcquire. Don't grant access unless202 * recursive call or no waiters or is first.203 */204 protected final boolean tryAcquire(int acquires) { 205 final Thread current = Thread.currentThread();206 int c = getState();207 if (c == 0) {208 Thread first = getFirstQueuedThread();209 if ((first == null || first == current) && 210 compareAndSetState(0, acquires)) {211 owner = current;212 return true;213 }214 }215 else if (current == owner) {216 setState(c+acquires);217 return true;218 }219 return false;220 }221 }222 223 /**224 * Creates an instance of <tt>ReentrantLock</tt>.225 * This is equivalent to using <tt>ReentrantLock(false)</tt>.226 */227 public ReentrantLock() { 228 sync = new NonfairSync();229 }230 231 /**232 * Creates an instance of <tt>ReentrantLock</tt> with the233 * given fairness policy.234 * @param fair true if this lock will be fair; else false235 */236 public ReentrantLock(boolean fair) { 237 sync = (fair)? new FairSync() : new NonfairSync();238 }239 240 /**241 * Acquires the lock. 242 *243 * <p>Acquires the lock if it is not held by another thread and returns 244 * immediately, setting the lock hold count to one.245 *246 * <p>If the current thread247 * already holds the lock then the hold count is incremented by one and248 * the method returns immediately.249 *250 * <p>If the lock is held by another thread then the251 * current thread becomes disabled for thread scheduling 252 * purposes and lies dormant until the lock has been acquired,253 * at which time the lock hold count is set to one. 254 */255 public void lock() {256 sync.lock();257 }258 259 /**260 * Acquires the lock unless the current thread is 261 * {@link Thread#interrupt interrupted}.262 *263 * <p>Acquires the lock if it is not held by another thread and returns 264 * immediately, setting the lock hold count to one.265 *266 * <p>If the current thread already holds this lock then the hold count 267 * is incremented by one and the method returns immediately.268 *269 * <p>If the lock is held by another thread then the270 * current thread becomes disabled for thread scheduling 271 * purposes and lies dormant until one of two things happens:272 *273 * <ul>274 *275 * <li>The lock is acquired by the current thread; or276 *277 * <li>Some other thread {@link Thread#interrupt interrupts} the current278 * thread.279 *280 * </ul>281 *282 * <p>If the lock is acquired by the current thread then the lock hold 283 * count is set to one.284 *285 * <p>If the current thread:286 *287 * <ul>288 *289 * <li>has its interrupted status set on entry to this method; or 290 *291 * <li>is {@link Thread#interrupt interrupted} while acquiring 292 * the lock,293 *294 * </ul>295 *296 * then {@link InterruptedException} is thrown and the current thread's 297 * interrupted status is cleared. 298 *299 * <p>In this implementation, as this method is an explicit interruption 300 * point, preference is 301 * given to responding to the interrupt over normal or reentrant 302 * acquisition of the lock.303 *304 * @throws InterruptedException if the current thread is interrupted305 */306 public void lockInterruptibly() throws InterruptedException { 307 sync.acquireInterruptibly(1);308 }309 310 /**311 * Acquires the lock only if it is not held by another thread at the time312 * of invocation.313 *314 * <p>Acquires the lock if it is not held by another thread and315 * returns immediately with the value <tt>true</tt>, setting the316 * lock hold count to one. Even when this lock has been set to use a317 * fair ordering policy, a call to <tt>tryLock()</tt> <em>will</em>318 * immediately acquire the lock if it is available, whether or not319 * other threads are currently waiting for the lock. 320 * This "barging" behavior can be useful in certain 321 * circumstances, even though it breaks fairness. If you want to honor322 * the fairness setting for this lock, then use 323 * {@link #tryLock(long, TimeUnit) tryLock(0, TimeUnit.SECONDS) }324 * which is almost equivalent (it also detects interruption).325 *326 * <p> If the current thread327 * already holds this lock then the hold count is incremented by one and328 * the method returns <tt>true</tt>.329 *330 * <p>If the lock is held by another thread then this method will return 331 * immediately with the value <tt>false</tt>. 332 *333 * @return <tt>true</tt> if the lock was free and was acquired by the334 * current thread, or the lock was already held by the current thread; and335 * <tt>false</tt> otherwise.336 */337 public boolean tryLock() {338 return sync.nonfairTryAcquire(1);339 }340 341 /**342 * Acquires the lock if it is not held by another thread within the given 343 * waiting time and the current thread has not been 344 * {@link Thread#interrupt interrupted}.345 *346 * <p>Acquires the lock if it is not held by another thread and returns 347 * immediately with the value <tt>true</tt>, setting the lock hold count 348 * to one. If this lock has been set to use a fair ordering policy then349 * an available lock <em>will not</em> be acquired if any other threads350 * are waiting for the lock. This is in contrast to the {@link #tryLock()}351 * method. If you want a timed <tt>tryLock</tt> that does permit barging on352 * a fair lock then combine the timed and un-timed forms together:353 *354 * <pre>if (lock.tryLock() || lock.tryLock(timeout, unit) ) { ... }355 * </pre>356 *357 * <p>If the current thread358 * already holds this lock then the hold count is incremented by one and359 * the method returns <tt>true</tt>.360 *361 * <p>If the lock is held by another thread then the362 * current thread becomes disabled for thread scheduling 363 * purposes and lies dormant until one of three things happens:364 *365 * <ul>366 *367 * <li>The lock is acquired by the current thread; or368 *369 * <li>Some other thread {@link Thread#interrupt interrupts} the current370 * thread; or371 *372 * <li>The specified waiting time elapses373 *374 * </ul>375 *376 * <p>If the lock is acquired then the value <tt>true</tt> is returned and377 * the lock hold count is set to one.378 *379 * <p>If the current thread:380 *381 * <ul>382 *383 * <li>has its interrupted status set on entry to this method; or 384 *385 * <li>is {@link Thread#interrupt interrupted} while acquiring386 * the lock,387 *388 * </ul>389 * then {@link InterruptedException} is thrown and the current thread's 390 * interrupted status is cleared. 391 *392 * <p>If the specified waiting time elapses then the value <tt>false</tt>393 * is returned.394 * If the time is 395 * less than or equal to zero, the method will not wait at all.396 *397 * <p>In this implementation, as this method is an explicit interruption 398 * point, preference is 399 * given to responding to the interrupt over normal or reentrant 400 * acquisition of the lock, and over reporting the elapse of the waiting401 * time.402 *403 * @param timeout the time to wait for the lock404 * @param unit the time unit of the timeout argument405 *406 * @return <tt>true</tt> if the lock was free and was acquired by the407 * current thread, or the lock was already held by the current thread; and408 * <tt>false</tt> if the waiting time elapsed before the lock could be 409 * acquired.410 *411 * @throws InterruptedException if the current thread is interrupted412 * @throws NullPointerException if unit is null413 *414 */415 public boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException {416 return sync.tryAcquireNanos(1, unit.toNanos(timeout));417 }418 419 /**420 * Attempts to release this lock. 421 *422 * <p>If the current thread is the423 * holder of this lock then the hold count is decremented. If the424 * hold count is now zero then the lock is released. If the425 * current thread is not the holder of this lock then {@link426 * IllegalMonitorStateException} is thrown.427 * @throws IllegalMonitorStateException if the current thread does not428 * hold this lock.429 */430 public void unlock() {431 sync.release(1);432 }433 434 /**435 * Returns a {@link Condition} instance for use with this 436 * {@link Lock} instance.437 *438 * <p>The returned {@link Condition} instance supports the same439 * usages as do the {@link Object} monitor methods ({@link440 * Object#wait() wait}, {@link Object#notify notify}, and {@link441 * Object#notifyAll notifyAll}) when used with the built-in442 * monitor lock.443 *444 * <ul>445 *446 * <li>If this lock is not held when any of the {@link Condition}447 * {@link Condition#await() waiting} or {@link Condition#signal448 * signalling} methods are called, then an {@link449 * IllegalMonitorStateException} is thrown.450 *451 * <li>When the condition {@link Condition#await() waiting}452 * methods are called the lock is released and, before they453 * return, the lock is reacquired and the lock hold count restored454 * to what it was when the method was called.455 *456 * <li>If a thread is {@link Thread#interrupt interrupted} while457 * waiting then the wait will terminate, an {@link458 * InterruptedException} will be thrown, and the thread's459 * interrupted status will be cleared.460 *461 * <li> Waiting threads are signalled in FIFO order462 *463 * <li>The ordering of lock reacquisition for threads returning464 * from waiting methods is the same as for threads initially465 * acquiring the lock, which is in the default case not specified,466 * but for <em>fair</em> locks favors those threads that have been467 * waiting the longest.468 * 469 * </ul>470 *471 * @return the Condition object472 */473 public Condition newCondition() {474 return sync.newCondition();475 }476 477 /**478 * Queries the number of holds on this lock by the current thread.479 *480 * <p>A thread has a hold on a lock for each lock action that is not 481 * matched by an unlock action.482 *483 * <p>The hold count information is typically only used for testing and484 * debugging purposes. For example, if a certain section of code should485 * not be entered with the lock already held then we can assert that486 * fact:487 *488 * <pre>489 * class X {490 * ReentrantLock lock = new ReentrantLock();491 * // ... 492 * public void m() { 493 * assert lock.getHoldCount() == 0;494 * lock.lock();495 * try {496 * // ... method body497 * } finally {498 * lock.unlock();499 * }500 * }501 * }502 * </pre>503 *504 * @return the number of holds on this lock by the current thread,505 * or zero if this lock is not held by the current thread.506 */507 public int getHoldCount() {508 return sync.getHoldCount();509 }510 511 /**512 * Queries if this lock is held by the current thread.513 *514 * <p>Analogous to the {@link Thread#holdsLock} method for built-in515 * monitor locks, this method is typically used for debugging and516 * testing. For example, a method that should only be called while517 * a lock is held can assert that this is the case:518 *519 * <pre>520 * class X {521 * ReentrantLock lock = new ReentrantLock();522 * // ...523 *524 * public void m() { 525 * assert lock.isHeldByCurrentThread();526 * // ... method body527 * }528 * }529 * </pre>530 *531 * <p>It can also be used to ensure that a reentrant lock is used532 * in a non-reentrant manner, for example:533 *534 * <pre>535 * class X {536 * ReentrantLock lock = new ReentrantLock();537 * // ...538 *539 * public void m() { 540 * assert !lock.isHeldByCurrentThread();541 * lock.lock();542 * try {543 * // ... method body544 * } finally {545 * lock.unlock();546 * }547 * }548 * }549 * </pre>550 * @return <tt>true</tt> if current thread holds this lock and 551 * <tt>false</tt> otherwise.552 */553 public boolean isHeldByCurrentThread() {554 return sync.isHeldExclusively();555 }556 557 /**558 * Queries if this lock is held by any thread. This method is559 * designed for use in monitoring of the system state, 560 * not for synchronization control.561 * @return <tt>true</tt> if any thread holds this lock and 562 * <tt>false</tt> otherwise.563 */564 public boolean isLocked() {565 return sync.isLocked();566 }567 568 /**569 * Returns true if this lock has fairness set true.570 * @return true if this lock has fairness set true.571 */572 public final boolean isFair() {573 return sync instanceof FairSync;574 }575 576 /**577 * Returns the thread that currently owns this lock, or578 * <tt>null</tt> if not owned. Note that the owner may be579 * momentarily <tt>null</tt> even if there are threads trying to580 * acquire the lock but have not yet done so. This method is581 * designed to facilitate construction of subclasses that provide582 * more extensive lock monitoring facilities.583 * @return the owner, or <tt>null</tt> if not owned.584 */585 protected Thread getOwner() {586 return sync.getOwner();587 }588 589 /**590 * Queries whether any threads are waiting to acquire this lock. Note that591 * because cancellations may occur at any time, a <tt>true</tt>592 * return does not guarantee that any other thread will ever593 * acquire this lock. This method is designed primarily for use in594 * monitoring of the system state.595 *596 * @return true if there may be other threads waiting to acquire597 * the lock.598 */599 public final boolean hasQueuedThreads() { 600 return sync.hasQueuedThreads();601 }602 603 604 /**605 * Queries whether the given thread is waiting to acquire this606 * lock. Note that because cancellations may occur at any time, a607 * <tt>true</tt> return does not guarantee that this thread608 * will ever acquire this lock. This method is designed primarily for use609 * in monitoring of the system state.610 *611 * @param thread the thread612 * @return true if the given thread is queued waiting for this lock.613 * @throws NullPointerException if thread is null614 */615 public final boolean hasQueuedThread(Thread thread) { 616 return sync.isQueued(thread);617 }618 619 620 /**621 * Returns an estimate of the number of threads waiting to622 * acquire this lock. The value is only an estimate because the number of623 * threads may change dynamically while this method traverses624 * internal data structures. This method is designed for use in625 * monitoring of the system state, not for synchronization626 * control.627 * @return the estimated number of threads waiting for this lock628 */629 public final int getQueueLength() {630 return sync.getQueueLength();631 }632 633 /**634 * Returns a collection containing threads that may be waiting to635 * acquire this lock. Because the actual set of threads may change636 * dynamically while constructing this result, the returned637 * collection is only a best-effort estimate. The elements of the638 * returned collection are in no particular order. This method is639 * designed to facilitate construction of subclasses that provide640 * more extensive monitoring facilities.641 * @return the collection of threads642 */643 protected Collection<Thread > getQueuedThreads() {644 return sync.getQueuedThreads();645 }646 647 /**648 * Queries whether any threads are waiting on the given condition649 * associated with this lock. Note that because timeouts and650 * interrupts may occur at any time, a <tt>true</tt> return does651 * not guarantee that a future <tt>signal</tt> will awaken any652 * threads. This method is designed primarily for use in653 * monitoring of the system state.654 * @param condition the condition655 * @return <tt>true</tt> if there are any waiting threads.656 * @throws IllegalMonitorStateException if this lock 657 * is not held658 * @throws IllegalArgumentException if the given condition is659 * not associated with this lock660 * @throws NullPointerException if condition null661 */ 662 public boolean hasWaiters(Condition condition) {663 if (condition == null)664 throw new NullPointerException ();665 if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject ))666 throw new IllegalArgumentException ("not owner");667 return sync.hasWaiters((AbstractQueuedSynchronizer.ConditionObject )condition);668 }669 670 /**671 * Returns an estimate of the number of threads waiting on the672 * given condition associated with this lock. Note that because673 * timeouts and interrupts may occur at any time, the estimate674 * serves only as an upper bound on the actual number of waiters.675 * This method is designed for use in monitoring of the system676 * state, not for synchronization control.677 * @param condition the condition678 * @return the estimated number of waiting threads.679 * @throws IllegalMonitorStateException if this lock 680 * is not held681 * @throws IllegalArgumentException if the given condition is682 * not associated with this lock683 * @throws NullPointerException if condition null684 */ 685 public int getWaitQueueLength(Condition condition) {686 if (condition == null)687 throw new NullPointerException ();688 if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject ))689 throw new IllegalArgumentException ("not owner");690 return sync.getWaitQueueLength((AbstractQueuedSynchronizer.ConditionObject )condition);691 }692 693 /**694 * Returns a collection containing those threads that may be695 * waiting on the given condition associated with this lock.696 * Because the actual set of threads may change dynamically while697 * constructing this result, the returned collection is only a698 * best-effort estimate. The elements of the returned collection699 * are in no particular order. This method is designed to700 * facilitate construction of subclasses that provide more701 * extensive condition monitoring facilities.702 * @param condition the condition703 * @return the collection of threads704 * @throws IllegalMonitorStateException if this lock 705 * is not held706 * @throws IllegalArgumentException if the given condition is707 * not associated with this lock708 * @throws NullPointerException if condition null709 */710 protected Collection<Thread > getWaitingThreads(Condition condition) {711 if (condition == null)712 throw new NullPointerException ();713 if (!(condition instanceof AbstractQueuedSynchronizer.ConditionObject ))714 throw new IllegalArgumentException ("not owner");715 return sync.getWaitingThreads((AbstractQueuedSynchronizer.ConditionObject )condition);716 }717 718 /**719 * Returns a string identifying this lock, as well as its lock720 * state. The state, in brackets, includes either the String721 * "Unlocked" or the String "Locked by"722 * followed by the {@link Thread#getName} of the owning thread.723 * @return a string identifying this lock, as well as its lock state.724 */725 public String toString() {726 Thread owner = sync.getOwner();727 return super.toString() + ((owner == null) ?728 "[Unlocked]" :729 "[Locked by thread " + owner.getName() + "]");730 }731 }732 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
https://kickjava.com/src/java/util/concurrent/locks/ReentrantLock.java.htm
CC-MAIN-2021-21
refinedweb
3,782
53.41
/Yc (Create Precompiled Header File) The latest version of this topic can be found at -Yc (Create Precompiled Header File). Instructs the compiler to create a precompiled header (.pch) file that represents the state of compilation at a certain point. Syntax /Yc[filename] Arguments filename Specifies a header (.h) file. When this argument is used, the compiler compiles all code up to and including the .h file. Remarks filename, the compiler compiles all code up to and including the specified file for subsequent use with the /Yu option. If the options /Yc filename and /Yu (Use Precompiled Header File) filename occur on the same command line and both reference, or imply, the same file name, /Yc filename Example Consider the following code: #include <afxwin.h> // Include header for class library #include "resource.h" // Include resource definitions #include "myapp.h" // Include information specific to this app ... When this code is compiled with the command CL /YcMYAPP.H PROG.CPP, the compiler saves all the preprocessing for AFXWIN.h, RESOURCE.h, and MYAPP.h in a precompiled header file called MYAPP.pch. See Also Compiler Options Setting Compiler Options
https://docs.microsoft.com/en-us/previous-versions/7zc28563%28v%3Dvs.140%29
CC-MAIN-2019-30
refinedweb
187
59.5
hai!! i am using s7g2 100 pin custom board, my RTC working fine , but when i off and ON the power,RTC not reading current updated value, it goes to initialized value only. In below i have updated my code for your reference kindly suggest me the solution what should update for further process. #include "RTC_thread.h"#include<stdio.h>#include"flash_header.h"#define SEMI_HOSTINGextern char rtc_time_s[10],calender_s[8],rtc_time_hr_s[2],rtc_time_min_s[2],rtc_time_sec_s[2],rtc_time_year_s[2],rtc_time_month_s[2],rtc_time_day_s[2],rtc_time_wday_s[1],rtc_time_hr_min_s[10];char rtc_year_flag,rtc_month_flag,rtc_day_flag,rtc_hour_flag,rtc_min_flag,rtc_sec_flag,rtc_wday_flag; char extra_symbols[5]={':','-','.'};extern int rtc_hour_file_ex=0,rtc_minutes_file_ex=0,rtc_year_file_ex=0,rtc_sec_file_ex=0;int clock_sett_hr_s_file_ex,clock_sett_min_s_file_ex,clock_sett_sec_s_file_ex,clock_sett_year_s_file_ex,clock_sett_month_s_file,clock_sett_day_s_file_ex;int pfc_input1;extern int rtc_hr,rtc_min,rtc_sec,rtc_year,rtc_month,rtc_day,rtc_wday;extern float rtc_hr_min;float rtc_hr_min_add;//ssp_err_t err;void RTC_thread_entry(void){ rtc_time_t get_time; ssp_err_t err; err = g_rtc0.p_api->open (g_rtc0.p_ctrl, g_rtc0.p_cfg); if (err == SSP_SUCCESS) { err = g_rtc0.p_api->periodicIrqRateSet (g_rtc0.p_ctrl, RTC_PERIODIC_IRQ_SELECT_1_SECOND); if (err == SSP_SUCCESS) { err = g_rtc0.p_api->calendarCounterStart (g_rtc0.p_ctrl); if (err == SSP_SUCCESS) { err = g_rtc0.p_api->irqEnable (g_rtc0.p_ctrl, RTC_EVENT_PERIODIC_IRQ); } } err = SSP_SUCCESS; rtc_time_t set_time = { 0 }; set_time.tm_hour = 11; set_time.tm_min = 00; set_time.tm_sec = 02; set_time.tm_year = 2021 - 1900; set_time.tm_mon = 2; set_time.tm_mday = 10; /*rtc_hr=atoi(clock_sett_hr_s_file_ex); rtc_min=atoi(clock_sett_min_s_file_ex); rtc_sec=atoi(clock_sett_sec_s_file_ex); rtc_year=atoi(clock_sett_year_s_file_ex); rtc_month=atoi(clock_sett_month_s_file); rtc_day=atoi(clock_sett_day_s_file_ex); */ /* set_time.tm_hour = clock_sett_hr_s_file_ex; set_time.tm_min =flash_data.flash_data_from_firmware_to_guix[47]; set_time.tm_sec =flash_data.flash_data_from_firmware_to_guix[48]; set_time.tm_year =2021- 1900; set_time.tm_mon =2; set_time.tm_mday = 9; set_time.tm_wday=1;*/ err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } while (1) { if(rtc_hour_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = flash_data.flash_data_from_guix_to_firmware[46]; set_time.tm_min = rtc_min; set_time.tm_sec = rtc_sec; set_time.tm_year =rtc_year; set_time.tm_mon = rtc_month; set_time.tm_mday = rtc_day; tx_thread_sleep (1); err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } rtc_hour_flag=0; } else if(rtc_min_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = rtc_hr; set_time.tm_min = flash_data.flash_data_from_guix_to_firmware[47]; set_time.tm_sec = rtc_sec; set_time.tm_year =rtc_year; set_time.tm_mon = rtc_month; set_time.tm_mday = rtc_day; tx_thread_sleep (1); rtc_min_flag=0; err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } else if(rtc_sec_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = rtc_hr; set_time.tm_min = rtc_min; set_time.tm_sec = flash_data.flash_data_from_guix_to_firmware[48]; set_time.tm_year =rtc_year; set_time.tm_mon = rtc_month; set_time.tm_mday = rtc_day; tx_thread_sleep (1); rtc_sec_flag=0; err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } else if(rtc_year_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = rtc_hr; set_time.tm_min = rtc_min; set_time.tm_sec = rtc_sec; set_time.tm_year = flash_data.flash_data_from_guix_to_firmware[42]- 1900; set_time.tm_mon = rtc_month; set_time.tm_mday = rtc_day; tx_thread_sleep (1); rtc_year_flag=0; err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } else if(rtc_month_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = rtc_hr; set_time.tm_min = rtc_min; set_time.tm_sec = rtc_sec; set_time.tm_year =rtc_year; set_time.tm_mon = flash_data.flash_data_from_guix_to_firmware[43]; set_time.tm_mday = rtc_day; tx_thread_sleep (1); rtc_month_flag=0; err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } else if(rtc_day_flag==1) { rtc_time_t set_time = { 0 }; set_time.tm_hour = rtc_hr; set_time.tm_min = rtc_min; set_time.tm_sec = rtc_sec; set_time.tm_year = rtc_year; set_time.tm_mon = rtc_month; set_time.tm_mday = flash_data.flash_data_from_guix_to_firmware[44]; tx_thread_sleep (1); rtc_day_flag=0; err = g_rtc0.p_api->calendarTimeSet (g_rtc0.p_ctrl, &set_time, true); if( err != SSP_SUCCESS) { __BKPT(1); } } else{ } err =g_rtc0.p_api->calendarTimeGet (g_rtc0.p_ctrl, &get_time); if( err != SSP_SUCCESS) { __BKPT(1); } sprintf(rtc_time_hr_s,"%02d", get_time.tm_hour); sprintf(rtc_time_min_s,"%02d", get_time.tm_min); sprintf(rtc_time_sec_s,"%02d", get_time.tm_sec); sprintf(rtc_time_year_s,"%02d", get_time.tm_year); sprintf(rtc_time_month_s,"%02d", get_time.tm_mon); sprintf(rtc_time_day_s,"%02d", get_time.tm_mday); sprintf(rtc_time_hr_min_s,"%02d%c%02d", get_time.tm_hour,extra_symbols[2],get_time.tm_min); rtc_hr=atoi(rtc_time_hr_s); rtc_min=atoi(rtc_time_min_s); rtc_sec=atoi(rtc_time_sec_s); /////// year 2021-1900= 121 here ignoring first decimal 1 so swapping done in below rtc_time_year_s[0]=rtc_time_year_s[1]; rtc_time_year_s[1]=rtc_time_year_s[2]; rtc_year=atoi(rtc_time_year_s); rtc_month=atoi(rtc_time_month_s); rtc_day=atoi(rtc_time_day_s); rtc_hr_min=atof(rtc_time_hr_min_s); rtc_hr_min=rtc_hr_min+0.001; sprintf(rtc_time_s,"%02d%c%02d%c%02d", get_time.tm_hour,extra_symbols[0],get_time.tm_min,extra_symbols[0],get_time.tm_sec); // sprintf(calender_s,"%02d%c%02d%c%02d", get_time.tm_mday,extra_symbols[1],get_time.tm_mon,extra_symbols[1],get_time.tm_year); sprintf(calender_s,"%02d%c%02d%c%02d", get_time.tm_mday,extra_symbols[1],get_time.tm_mon,extra_symbols[1],rtc_year); tx_thread_sleep (100); // delay 100 x 10ms = 1s and then read the time again to test }} You have the RTC configure to use the LOCO, not Sub-oscillator in the configuration of the RTC driver. In the project you attached, in the configuration of the RTC driver, the RTC is set to use the LOCO, not the Sub-Clock. Hi, Try using Sub-Clock while not on debugging mode and check, We had a similar issue which was solved by this. Regards, Surojit hai !. thanks for reply!.. I checked which you have suggested, same problem i am facing . i am using super capacitor during power down mode from super capacitor 2.7v is going to rtc You also have to enable Voltage monitor 0 to use the VBATT Battery Backup Function :- 12.3.2 VBATT Battery Power Supply Switch UsageThe battery power supply switch can switch the power supply from the VCC pin to the VBATT pin when the voltagebeing applied to the VCC pin drops. When the voltage rises, this switch changes the power supply from the VBATT pinto the VCC pin. Note: You must enable voltage monitor 0 resets to use the battery backup function. Voltage monitor 0 level must behigher than the VBATT switch level. Also, if you have the CGC driver set to configure the subclock drive on reset, the subclock will be stopped at reset in R_CGC_init() :- /** SubClock will stop only if configurable setting is Enabled */#if (CGC_CFG_SUBCLOCK_AT_RESET_ENABLE == 1) r_cgc_clock_stop(gp_system_reg, CGC_CLOCK_SUBCLOCK); // stop SubClock CGC_ERROR_RETURN((SSP_SUCCESS == r_cgc_wait_to_complete(CGC_CLOCK_SUBCLOCK, CGC_CLOCK_CHANGE_STOP)), SSP_ERR_HARDWARE_TIMEOUT); r_cgc_delay_cycles(gp_system_reg, CGC_CLOCK_SUBCLOCK, SUBCLOCK_DELAY); // Delay for 5 SubClock cycles. r_cgc_subclock_drive_set(gp_system_reg, CGC_CFG_SUBCLOCK_DRIVE); // set the SubClock drive according to the configuration#endif hai jeremy!!!.... Thanks for your support!.. where i have to enable the voltage monitor 0 . in which driver it available . Voltage monitor 0 is configured in the BSP tab of the configurator Thanks for your support!.. i have checked your above suggestion still same problem only facing . kindly check my below configurations and and above message i have updated the cgc ,rtc configuration code for your reference . If the RTC has stopped after power has been reapplied, I would check the VBATT power supply. hai!. Kindly update me the solution to solve this problem. The project in the link works on the S7G2_DK board, the RTC keeps running on VBATT when the power is removed from Vcc. renesasrulz.com/.../download rtc_test_.zip Thanks for your support!.. i have checked your above example code in my device after power off rtc gets reset ,same problem facing . After power off super capacitor providing 2.8 v to vbat pin. kindly update me the solution to solve this problem. facing this problem last three weekS and also i have atteched my rtc and lcd code which i have tested in my custom board s7g2 100 pin IC, kindly take it for your reference. sry i didnt get your point yes, just for testing purpose only i used LOCO beforei tested in sub clock .kindly find my new attachment for your reference RTC_TEST_AAD.zip If the project I posted doesn't work on your hardware, I cannot rule out the issue you are seeing is caused by a hardware issue with your board. 2021.rtc_test__working.zip hai!... I have attached my schematic and code in above.kindly take it for your reference,and revert me is it correct or have to make any configuration settings or schematic change should be done or not. In the attached project you still have the RTC driver set to configure the RTC Hardware n the Open() call :- this means the RTC will be stopped when Open API is called, this is not how I had the RTC driver configured in the sample project I posted. hai!..... IF I configured the rtc hardware in open() call- NO , my controller is not reading the values at power on time and you posted code is not get Building in my IDE. Have you looked at my example? yes, i have looked your shared project and same configuration made in my project but problem not solved kindly check my entire configuration and code for your reference. My code only configures and starts the RTC if the RTC is not running, if it is running on return from VBATT power only the RTC open is called, it is not configured or started, e.g. :- err = g_rtc0.p_api->open(g_rtc0.p_ctrl, g_rtc0.p_cfg); if (SSP_SUCCESS != err) { while(1); } err = g_rtc0.p_api->infoGet(g_rtc0.p_ctrl, &rtc_status); if (SSP_SUCCESS != err) { while(1); } if ((RTC_STATUS_STOPPED == rtc_status.status)) { err = g_rtc0.p_api->configure(g_rtc0.p_ctrl, NULL); //p_extended currently not used. if (SSP_SUCCESS != err) { while(1); } /* Saturday 1st Jan 2000 00:00:00 */ calendar_set_time.tm_hour = 0; calendar_set_time.tm_isdst = 0; calendar_set_time.tm_mday = 1; calendar_set_time.tm_min = 0; calendar_set_time.tm_mon = 0; calendar_set_time.tm_sec = 0; calendar_set_time.tm_wday = 6; calendar_set_time.tm_yday = 0; calendar_set_time.tm_year = 2000 - 1900; err = g_rtc0.p_api->calendarTimeSet(g_rtc0.p_ctrl, &calendar_set_time, true); if (SSP_SUCCESS != err) { while(1); } else { /* RTC is running, don't reconfigure it */ } In my project the RTC driver does not initalise the RTC hardware in the call to Open() (configuring the RTC HW in the call to open() will stop the RTC and reset the registers, this is not the behaviour that is required when running the RTC on VBATT) :- The CGC driver does not set the driver strength (as this will stop the Sub-Clock and introduce inaccuracy into the RTC) :- and the Voltage 0 monitoring circuit is enabled (this is required when suing the RTC in battery backup) :- You need to look at my project and understand how it works, and implement something similar in your project. hai!.. During power down mode my super capacitor providing 2.7v to Vbatt pin, This voltage is sufficient for rtc operation? I have an HE-PMI board. I put in a battery and my clock remembers the time when it wakes up... but it doesn't count time while the power is off. I tried changing the SSC to not configure the clock on open, but that didn't help. Do I need to configure a subclock, like you have here? Or is the HE-PMI just not set up to maintain the clock from battery (which seems unlikely given it has a battery slot)? Only the subclock operates in battery backup operation, if you want the RTC to continue to count in battery backup mode, the RTC needs to be operating from the subclock, not the LOCO :- I see. I didn't expect supporting the battery to be quite so involved, but thank goodness you are here to tell us the handful of secret steps that make it easy. Although I don't use Eclipse, I found I can open the configuration.xml file with the synergy_standalone.exe (ironically found in the SSC/eclipse directory) and view your configuration. However, that didn't help as much as I'd hoped; because it's for a DK board I can't tell what you've changed, so I had to rely on the screenshots you highlighted. I found the CGC driver already in my HAL stack (after fruitlessly trying to add one as a new stack) . I'm assuming your main oscillator wait time is different than mine because they're different boards. Since you didn't highlight it, I didn't change it. I found the OFS1 settings by clicking the BSP tab (this was not obvious to me at first). I added the initialization code you provided after adapting it to my circumstances. I didn't use the battery backed RAM markers, though; it's quite likely people will start my device without a battery, so I just have to reconfigure it then. (Is a 2 second delay really necessary? That seems like a long time.) And it works. YAY! Once again Jeremy rides to the rescue. However, I left my board powered on last night (when it was running off of LOCO), and when I came in this morning it had lost 2 minutes over the course of 15 hours. Not good! I will try it again tonight and see if the sub-clock drive is better. I think the load capacitors on the 32kHz sub oscillator crystal are incorrect on the PE-HMI, so they will pull the frequency of the 32.768kHz sub oscillator away from the centre frequency. See these 2 posts :- The sub oscillator crystal used on the PE-HMI is ABS07-32.768KHZ-T which specifies CL as 12.5pF (i.e. the total load capacitance seen by the crystal should be 12.5pF), the circuit on the PE-HMI is :- I used a 2 second wait time before using the sub oscillator because 32kHz crystals can take a while to stabilise. The S7G2 HW manual has this note on Table 60.14 :- Note 1. When setting up the sub-clock oscillator, ask the oscillator manufacturer for an oscillation evaluation and use theresults as the recommended oscillation stabilization time. After changing the setting in the SOSCCR.SOSTP bitto start sub-clock operation, only start using the sub-clock oscillator after the sub-clock oscillation stabilizationtime elapses with an adequate margin. Two times the oscillation wait time is recommended.
https://renesasrulz.com/synergy/f/synergy---forum/17375/rtc-not-working-on-power-down/57086
CC-MAIN-2021-21
refinedweb
2,204
52.36
#include <BCP_message.hpp> Inheritance diagram for BCP_proc_id: The implementation of the message passing protocol must also implement how the processes are identified. All methods are pure virtual, enforcing the correct overriding of the methods. Definition at line 29 of file BCP_message.hpp. Being virtual, the destructor invokes the destructor for the real type of the object being deleted. Definition at line 35 of file BCP_message.hpp. This query method determines whether the current process is the same as the one given in the argument. Returns true if the two processes are the same, false otherwise. Implemented in BCP_single_id. Create a new process id that describes the same process. Cloning is used instead of the copy constructor since this is an abstract base class. Implemented in BCP_single_id. Referenced by BCP_buffer::operator=().
http://www.coin-or.org/Doxygen/CoinAll/class_b_c_p__proc__id.html
crawl-003
refinedweb
130
52.36
Registers and Hardware In the Hello World tutorial we glossed over a lot of detail about what we were writing to and how the hardware access works. For this tutorial, we're going to revisit our blinking LED and go into more detail about the hardware involved. A register is a memory address that is tied to hardware, effectively the hardware either listens to what is written to a register, or updates the contents of a register for us to read and act on. A peripheral, like say a USART or a timer, has a collection of registers that makes up the capabilities of that peripheral. On the Kakapo, those collections are represented by C structs, with different instances of the same peripheral using the same base struct. These are pulled into our code with the include near the top of the hello world example: #include <avr/io.h> In our Hello World example, we were accessing a collection called "PORTE". PORTE is an instance of a digital IO port. Digital IO ports have a bunch of registers associated with them, to define the direction of each pin, the output state, the input being sensed on the pin, as well as more advanced features such as interrupts and pin behaviour. (For more information about C structs, you may wish to Google them.) There are similar instances called PORTA, PORTB, PORTC, and PORTD. All of them have the same collection of registers associated with them. When we write to, say, the OUT register of a port, this has the immediate effect of changing the output of one or more pins on this port. Similarly, if we read the IN register of a port, we get the current sense level of each of the pins. There is a direction register (DIR) for whether a pin is an input or output. Lastly, as noted in the Hello World example, OUT and DIR have bitmask access registers as well. Try attaching an LED to another IO pin, and modifying Hello World to blink it instead. The IO pins are all marked on the board with their port and pin number (eg, PC1 is Port C, pin 1), and you can use any of them as digital IO. Not all registers are used this way, however. Some registers have many different functions or settings on specific bits of the register. This is where the friendly names for parts of a register come into play. Our Hello World code used "PIN3_bm" to refer to pin 3 of a digital IO port. But what is this actually doing? The OUT register is just a number, 1 bit for each pin. We could have written a literal number to the register like this: PORTE.OUTSET = 0x08 But this is much less clear about what it means. To help us, the io.h we included also has many useful values as friendly names. "PIN3_bm" is an example of a friendly name. These are used to make it more obvious what we are fiddling with in a register. For example, to set a output pin mode as "wired AND", we can use the following: PORTC.PIN3CTRL = ((PORTC.PIN3CTRL & ~(PORT_OPC_gm)) | PORT_OPC_WIREDAND_gc); Because we don't have bitmask access, we have to set this feature by first clearing the OPC bits from the existing register value ("PORTC.PIN3CTRL &~(PORT_OPC_gm)"), and then applying the bits for the value we want. As you can see, while this is not the easiest thing to read, it is much easier than: PORTC.PIN3CTRL = ((PORTC.PIN3CTRL & ~(0x38)) | 0x28); You can find many of these friendly names by reading the appropriate headers (avr/iox64d4.h for the Kakapo) and by reading the datasheet for the ATXMEGA64D4.
http://hairy.geek.nz/projects/kakapo/registers-and-hardware/
CC-MAIN-2018-13
refinedweb
618
71.34
Web programming is the science of coming up with increasingly complicated ways of concatenating strings. -- Greg BrockmanIn the description of its authors, Spark is a Sinatra inspired micro web framework for quickly creating web applications in Java with minimal effort. Indeed, the minimalist sense is conveyed by the project's homepage, which contains an hello world example and links to all the relevant resources. Jump into a code sampleThis is an Hello world web application realized with Spark. Loading will display Hello World! (without any HTML code). import static spark.Spark.*; import spark.*; public class SparkExample { public static void main(String[] args) { get(new Route("/hello") { @Override public Object handle(Request request, Response response) { return "Hello World!"; } }); } } Features (or lack thereof) Spark's API contains few features, but very focused ones. Request and Response objects provide full control over HTTP headers and functionalities. You can access query and URL parameters; specific headers like Content-Length and Content-Type are provided by a dedicated method, while you can always access any header from the parametric interface. This interface is really equivalent to the Servlet API. Routes can be provided in a sequential order, each specified for a particular HTTP method (GET or POST usually, but not exclusively) and linking to a callback which will be executed. The callback is the point of conjunction between your application and the HTTP world: separation of concerns is easy to maintain. Filters are a series of hooks that can be inserted before or after the execution of a request; they can optionally match only certain routes. Filters, which are small objects where a single method should be implemented, are the most coupled mechanism introduced by the Spark framework. How it works: containers Spark is a bit different from the classic Java frameworks. It starts itself and configure the application with a main() method, not with the classic inversion of control mechanism where some servlets are deployed into a container. The advantages of this approach are that it simplifies end-to-end testing, and gives you back the control over the lifecycle of your objects. An hexagonal application uses a few servlets just to wrap its own code, so why bother playing with servlets when you can have your own main? The disadvantages of the approach are that you won't get ant functionality from the container, which does not exist here; and you won't be able to access the standard Servlet API but only Spark's one (may be a sacrilege). Actually Spark features Jetty as an embedded container, so the Servlet API is just wrapped by it. You can even run Spark applications in Tomcat or other web servers supporting servlets, and it will integrate like many other frameworks by providing a fixed web.xml. But in my opinion you can just go for Spring in that case. How it works: configuration Another pecularity of Spark is that configuration (like routes) are mapped via Java code: there are no XML files, no annotations, and no INIs. Rather you can use a DSL written in Java, with quite a bit of easy to read static imports (and indeed used at the highest level of abstraction, not inserted in your domain objects.) This syntax is inspired by Sinatra, a similar DSL for Ruby applications; in interpreted languages is quite common to use the language itself for configuration. After all, this choice forces you to treat configuration that may break the application as code; and when was the last time you changed a route specified in XML without changing any of the Java classes? Is this a trend? In general, in my little area of the programming world I'm noticing a softer approach to frameworks, where most of the functionality is provided via libraries instead of as invasive components inserted in the flow of control. This is also where the PHP world is heading (I know Java developers won't care, but still it's interesting):? [..] I don't like MVC because that's not how the web works. Symfony2 is an HTTP framework; it is a Request/Response framework. That's the big deal. The fundamental principles of Symfony2 are centered around the HTTP specification. -- Fabien Potencier Conclusions The Java platform is not necessarily tied to the mindset of giant enterprise frameworks; see also how beautifully the Play framework manages HTTP actions (although still with configuration over convention). Moreover, it does not matter in which language you do develop web applications: at least you can embrace HTTP as the lowest common denominator instead of the MVC machine of a framework. You may not need it, and you may also start not to like it anymore. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/spark-micro-framework
CC-MAIN-2017-17
refinedweb
793
51.07
* A Protected methods in different packages malini griddaluri Greenhorn Joined: May 07, 2002 Posts: 5 posted May 07, 2002 13:55:00 0 I have been trying the following code and it is not working. Can anyone let me know what seems to be the problem? (tried on windows 98 and 2000) /*base class*/ package pack1; import java.io.*; public class base { protected void amethod() { System.out.println("this is a protected method in base class"); } } /*derived class*/ package pack2; import pack1.*; import java.io.*; public class derived extends base { public void amethod() { System.out.println("This is a protected method in derived"); } public static void main(String[] args ) { base b1 = new base(); b1.amethod(); //doesnt work //polymorphism doesnt work too base b = new derived(); b.amethod(); } } my first question is why is the protected amethod() of base class not accessible from derived class and second as to why polymorphism is not working when i use protected method? If i have derived class in the same package as the base class it works fine. pack2\derived.java:18: Can't access protected method amethod in class pack1.base . pack1.base is not a subclass of the current class. b1.amethod(); //doesnt work ^ pack2\derived.java:22: Can't access protected method amethod in class pack1.base . pack1.base is not a subclass of the current class. b.amethod(); ^ 2 errors These are the errors being reported. Please point out the error. Thanks. [ May 07, 2002: Message edited by: malini griddaluri ] Malini Jose Botella Ranch Hand Joined: Jul 03, 2001 Posts: 2120 posted May 07, 2002 14:43:00 0 Please read JLS 6..6.2 to understand the details of the protected access. b1.amethod is not allowed because the type of the reference (base) is not derived or one of its subclasses. derived d = new derived() d.amethod is ok. The message given by the compiler gives a clue about the problem. [ May 07, 2002: Message edited by: Jose Botella ] SCJP2. Please Indent your code using UBB Code John Wetherbie Rancher Joined: Apr 05, 2000 Posts: 1449 posted May 07, 2002 14:48:00 0 Its because you are trying to access the base class protected method outside the derived class. If you do this: derived d1 = new derived(); d1.amethod(); in the main everything works fine. If you change the method in the base class to public everything also works fine. Hope this helps. The only reason for time is so that everything doesn't happen all at once. - Buckaroo Banzai Mike Kelly Ranch Hand Joined: Jul 18, 2001 Posts: 78 posted May 07, 2002 14:50:00 0 Malini, that's a tricky concept here's what's in my notes hope it helps: 128 A subclass in another package can only access protected members in superclass via references of it's own type(new A().i) or a subtype (new B().i). It is not possible to access it by reference to superclass. In this case of A a = new B() - a becomes reference to the superclass for B Dave Wingate Ranch Hand Joined: Mar 26, 2002 Posts: 262 posted May 07, 2002 15:06:00 0 This has been a real surprise for me as I've always heard that listing a method as protected meant that it could be accessed from any class in the same package or a subclass of the enclosing class . I took the above statement to mean that, when considering access to a protected method, I should ask myself the following question: In what type of enclosing class does the line b1.amethod() appear? Is the enclosing class (i.e. derived in the example) in the same package as, or a subclass of, the base class? If so, then the access is permitted. But from all of the above posts, I've obviously been mistaken. I'm wondering if the statement i've written above in bold is just plain wrong, or if I've misunderstood what it means? Could someone help me figure out where I've gone wrong? [ May 07, 2002: Message edited by: Dave Winn ] Fun programming etcetera! Corey McGlone Ranch Hand Joined: Dec 20, 2001 Posts: 3271 posted May 07, 2002 15:18:00 0 Take a look at this section of the JLS: §6.6.7 Example: protected Fields, Methods, and Constructors . There is a nice example there that illustrates what is happening in this case. If you have more questions after looking at that, let me know. Corey SCJP Tipline, etc. malini griddaluri Greenhorn Joined: May 07, 2002 Posts: 5 posted May 07, 2002 20:09:00 0 Hi All, Thank you very much for the clarification but I have a few more doubts. I perfectly understood what mike kelly was referring to : That you can access it only thru a reference of type derived or any of its sub class and not thru a reference of super class type but when i went to the J.L.S and read that section about the protected method access specification the following sentence seemed a little bit confusing . "). " could any of you explain what this means? what exactly is meant by " it is not involved in the implementation"? Rodney Teggins Greenhorn Joined: May 06, 2002 Posts: 10 posted May 08, 2002 01:21:00 0 Thanks for this explaination Jose, Corey, Mike et al as I too was somewhat confused about the meaning of protected although I didn't know it until I saw this post! R Jose Botella Ranch Hand Joined: Jul 03, 2001 Posts: 2120 posted May 08, 2002 06:26:00 0 A derived class is not said to be involved in the implementation of an instance of the base class, because the instance fields of the base class are initialized by the constructor of the base class, not by the constructor of the derived one. We could say that the base class is involved in the implementation of the instances of its subclasses, because once initialized the instance fields, they are inherited by the derived. This concept has buffled everybody reading the JLS, I guess. Thiru Thangavelu Ranch Hand Joined: Aug 29, 2001 Posts: 219 posted May 08, 2002 12:22:00 0 How do you call amethod() of base class from derived class? Thanks,<br />Thiru<br />[SCJP,SCWCD,SCBCD] I agree. Here's the link: subject: Protected methods in different packages Similar Threads Reflection access rules protected access from another package Package Problem... Protected Constructors doubt in protected access All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/237778/java-programmer-SCJP/certification/Protected-methods-packages
CC-MAIN-2014-49
refinedweb
1,107
70.94
We previously used NodeMCU to log temperature data in the Google sheet. Now here we are going to send data to Thinger.io IoT cloud and display it in an attractive graphical format. A BMP180 sensor is interfaced with NodeMCU ESP8266 to collect the temperature, humidity, and altitude data, which will be sent to the Thinger.io platform. In this tutorial, we will learn how to manage different features of the thinger.io platform, like devices, endpoints, data buckets, or access tokens. Components Required - NodeMCU ESP8266 - BMP180 Pressure sensor - Jumper Wires - Breadboard Circuit Diagram Circuit Diagram for this ESP8266 data logger is very straightforward, here only the BMP180 sensor is interfaced with NodeMCU. The BMP180 sensor uses the I2C communication protocol. So you need to connect the SCL and SDA pins of BMP180 to SCL and SDA pins (D1 and D2) of NodeMCU. Also, connect the VIN and GND pin of BMP180 to 3.3V and GND of NodeMCU. Do not connect the Sensor directly to 5V because it can damage the Sensor permanently. To learn more about NodeMCU, check various IoT projects based on NodeMCU ESP8266. Thinger.io Setup for ESP8266 Temperature Logger Thinger.io is an Open-Source Platform for the Internet of Things. It provides every needed tool to prototype, scale, and manage connected products in a very simple way. Thinger.io provides three essential tools i.e. Data Bucks, Dashboard, and Endpoint to work with devices data; these tools can be used to visualize the device data and extend the interoperability of the devices. Data Bucks: Data Bucks tool can be used to store device data in a scalable way, programming different sampling intervals or recording events raised by devices. Dashboard: Dashboard tool has some Panels with customizable widgets that can be created within minutes using drag and drop technology to visualize the real-time and stored data. Endpoints: Endpoints can be used to integrate the platform with other services like IFTTT, custom Web Services, emails, or call other devices. In this ESP8266 logging, we are going to explore these tools. To send data to Thinger.io, you need to create a free account on the Thinger.io platform and follow the below steps to connect your device. Step 1: The first step is to create a new device. To create a new device, click on Devices in the menu tab and then click on the Add Device button. Then fill the form with the device ID, description, and Credentials or generate random credentials for your device and click on ‘Add Device.’ That’s all; your device is ready to connect. In the next step, we will program the NodeMCU to send the data to the Thinger.io platform. IFTTT Setup for NodeMCU Data Logger Here we are using IFTTT to send Email warnings when the temperature goes beyond a limit. IFTTT (If This Then That) is a web-based service by which we can create chains of conditional statements, called applets. Using these applets, we can send Emails, Twitter, Facebook notifications. To use the IFTTT, login to the IFTTT account if you already have one or create an account. Now search for ‘Webhooks’ and click on the Webhooks in Services section. Then, in the Webhooks window, click on ‘Documentation’ in the upper right corner to get the private key. Copy this key, this key will be used while creating Endpoint in Thinger.io. After that, create an applet using Webhooks and Email services. To create an applet, click on your profile and then click on ‘Create.’ Now in the next window, click on the ‘This’ icon. Now search for Webhooks in the search section and click on ‘Webhooks.’ Now choose ‘Receive a Web Request’ trigger and enter the event name as a temp and then click on create a trigger. After this, click on ‘Then That’ and then click on Email. Now in email, click on ‘send me an email’ and enter the email subject and body and then click on create action. In the last step, click on ‘Finish’ to complete the Applet setup. Programming NodeMCU for Data Logging The complete code for sending data to Thinger.io is given at the end of the page. Here, we are explaining some important parts. Start the code by including all the required libraries. The ThingerESP8266.h is used to establish a connection between the IoT platform and the NodeMCU while Adafruit_BMP085.h is used to read the BMP sensor data. You can install the ThingerESP8266.h library from the Arduino IDEs library manager. #include <ThingerESP8266.h> #include <ESP8266WiFi.h> #include <Wire.h> #include <Adafruit_BMP085.h> Next, enter credentials in the code, so the device can be recognized and associated with your account. #define USERNAME "Your account Username" #define DEVICE_ID "NodeMCU" //Your Device Name #define DEVICE_CREDENTIAL "FcLySVkP8YFR" Then, enter your endpoint name. The endpoint is used to integrate the platform with external services like IFTTT, HTTTP request, etc. #define EMAIL_ENDPOINT "IFTTT" Define the variables to store the Pressure, Temperature, and Altitude data. int Pressure, Temperature, Altitude; Inside the void loop (), read the sensor data. The pson data type can hold different data types. So the Pson data type is used to receive multiple values at the same time. thing["data"] >> [](pson& out){ out["Pressure"] = bmp.readPressure()/100; out["Altitude"]= bmp.readAltitude(); out["Temperature"] = bmp.readTemperature(); }; Use if condition to call the Endpoint if the temperature value goes past 15 degrees. Here data is the Endpoint name. if(Temperature > 15){ thing.call_endpoint( EMAIL_ENDPOINT,"data");} Serial.print("Sending Data"); Logging Data on Thinger.io from NodeMCU Now connect the BMP sensor to NodeMCU and upload the code. The NodeMCU will use your account credentials to connect with the device that you created earlier. If it connects successfully, it will show connected, as shown in the below image: You can check your device statistics like Transmitted Data, Received Data, IP Address, Time Connected, etc. by just clicking on the device name from Devices menu. As we are now receiving the data, we will create a dashboard to visualize the data using the widgets. To create a Dashboard, click on Dashboards from the menu tab and then click on ‘Add Dashboard.’ Now in the next window, enter the dashboard details like dashboard name, ID, and Description and then click on Dashboard. After this, access the new dashboard by clicking on the Dashboard name. By default, the dashboard will appear empty. To add the Widgets, you first need to enable the edit mode by clicking on the upper-right switch of the dashboard. Then click on the ‘Add Widget’ button. When you click on the ‘Add Widget’ button, it will show a popup where you can select the widget type, background color, etc. In my case, I have selected the Gauge Widget. When you click on save, it will take you to the next screen where you need to select the Source Value, Device, Resource, Value, and Refresh mode. Select all the values and then click on the Save button. Now repeat the same procedure for the rest of the variables. My dashboard looked like this: Creating Endpoint in Thinger.io to send Email Alert Now we will create an Endpoint to integrate the Thinger.io with IFTTT. An endpoint can be called by the device to perform any action, like sending an email, send an SMS, call a REST API, interact with IFTTT, call a device from a different account, or call any other HTTP endpoint. To create an Endpoint, click on the ‘Endpoint’ option from the menu tabs and then click on ‘Add Endpoint.’ Now in the next window, enter the required details. The Details are: Endpoint Id: Unique identifier for your endpoint. Endpoint Description: Write a description or detailed information about your Endpoint. Endpoint Type: Select the Endpoint type from the given options. Maker Event Name: Enter your IFTTT applet name. Maker Channel Key: Your Webhooks secret key. After this, click on Test Endpoint to check if everything is working. It should send you an email with a warning about the temperature data. Instead of using IFTTT Webhook Trigger, you can send an Email or Telegram Message, or you can send an HTTP request using the Endpoint features. This is how a NodeMCU ESP8266 can be used to log temperature, pressure, and altitude data from the BMP180 sensor to the internet. A working video and complete code are given at the end of the page. #include <ThingerESP8266.h> #include <ESP8266WiFi.h> #include <Wire.h> #include <Adafruit_BMP085.h> #define USERNAME "choudharyas" #define DEVICE_ID "NodeMCU" #define DEVICE_CREDENTIAL "FcLySVkP8YFR" #define EMAIL_ENDPOINT "IFTTT" #define SSID "Galaxy-M20" #define SSID_PASSWORD "ac312124" int Pressure, Temperature, Altitude; Adafruit_BMP085 bmp; ThingerESP8266 thing(USERNAME, DEVICE_ID, DEVICE_CREDENTIAL); void setup() { Serial.begin(115200); thing.add_wifi(SSID, SSID_PASSWORD); if (!bmp.begin()) { Serial.println("Could not find a valid BMP085 sensor, check wiring!"); while (1) {} } } void loop() { Temperature = bmp.readTemperature(); thing["data"] >> [](pson& out){ out["Pressure"] = bmp.readPressure()/100; out["Altitude"]= bmp.readAltitude(); out["Temperature"] = bmp.readTemperature(); }; thing.handle(); thing.stream(thing["data"]); if(Temperature > 40){ thing.call_endpoint( EMAIL_ENDPOINT,"data");} Serial.print("Sending Data"); }
https://circuitdigest.com/microcontroller-projects/nodemcu-datalogger-to-save-temperature-and-pressure-data-on-thinger-io-cloud-platform
CC-MAIN-2020-34
refinedweb
1,513
67.25
by Pritish Vaidya How to make realtime SoundCloud Waveforms in React Native Introduction SoundCloud is a music and podcast streaming platform for listening to millions of authentic tracks. They have a really interactive interface for playing / listening to the tracks. The most important feature in their interface is showing the progress of the track based on its frequency waveform. This helps the users to identify the nature of it. They also have a blog post which describes how to use the waveform based on its image. It is hard to use the same techniques to generate the waveform in a React Native app. Their Waveform.js SDK translates a waveform into floating points to render on an HTML5 canvas and is currently no longer operational. In this article we’ll discuss how to use the same waveform for our React Native apps. Why Should I use SoundCloud’s Waveforms? - The SoundCloud’s waveform looks more impressive than the old boring way of showing the progress bar. - The pre-loaded waveform will give the user an idea of the different frequencies present in the song. - It is also much easier to show the buffered track percentage on a waveform rather than showing it on a blank progress bar. Let’s learn more about SoundCloud’s Waveforms The SoundCloud provides a waveform_url in its tracks API. - Each track has its own unique waveform_url. - The waveform_urlcontains a link to the image hoisted over the cloud. Example — As of now, every argument is static hence it is unusable in this current state. Therefore we need to re-create the waveform based on it using React Native’s containers in order to have access to the touch events, styles etc. Getting Started Here is a list of stuff that you will need: First, we need the sampling of the waveform. The trick is to replace .png with .json for the waveform_url . A GET call to it would give us a response object that contains - width (Width of the waveform) - height (Height of the waveform) - samples (Array) For more info, you can try out the following link. Dive into the code Add a Custom SoundCloudWave Component function percentPlayed (time, totalDuration) { return Number(time) / (Number(totalDuration) / 1000) } <SoundCloudWave waveformUrl={waveform_url} height={50} width={width} percentPlayable={percentPlayed(bufferedTime, totalDuration)} percentPlayed={percentPlayed(currentTime, totalDuration)} setTime={this.setTime} /> It would be better to create a custom SoundCloudWave component that can be used in multiple places as required. Here are the required props: - waveformUrl — The URL object to the waveform (accessible through the Tracks API) - height — Height of the waveform - width — Width of the waveform component - percentPlayable — The duration of the track buffered in seconds - percentPlayed — The duration of the track played in seconds - setTime — The callback handler to change the current track time. Get the samples fetch(waveformUrl.replace('png', 'json')) .then(res => res.json()) .then(json => { this.setState({ waveform: json, waveformUrl }) }); Get the samples by using a simple GET API call and store the result in the state. Create a Waveform Component import { mean } from 'd3-array'; const ACTIVE = '#FF1844', INACTIVE = '#424056', ACTIVE_PLAYABLE = '#1b1b26' const ACTIVE_INVERSE = '#4F1224', ACTIVE_PLAYABLE_INVERSE = '#131116', INACTIVE_INVERSE = '#1C1A27' function getColor( bars, bar, percentPlayed, percentPlayable, inverse ) { if(bar/bars.length < percentPlayed) { return inverse ? ACTIVE : ACTIVE_INVERSE } else if(bar/bars.length < percentPlayable) { return inverse ? ACTIVE_PLAYABLE : ACTIVE_PLAYABLE_INVERSE } else { return inverse ? INACTIVE : INACTIVE_INVERSE } } const Waveform = ( { waveform, height, width, setTime, percentPlayed, percentPlayable, inverse } ) => { const scaleLinearHeight = scaleLinear().domain([0, waveform.height]).range([0, height]); const chunks = _.chunk(waveform.samples, waveform.width/((width - 60)/3)) return ( <View style={[{ height, width, justifyContent: 'center', flexDirection: 'row', }, inverse && { transform: [ { rotateX: '180deg' }, { rotateY: '0deg'}, ] } ]}> {chunks.map((chunk, i) => ( <TouchableOpacity key={i} onPress={() => { setTime(i) }}> <View style={{ backgroundColor: getColor ( chunks, i, percentPlayed, percentPlayable, inverse ), width: 2, marginRight: 1, height: scaleLinearHeight(mean(chunk)) }} /> </TouchableOpacity> ))} </View> ) } The Waveform Component works as follows: - The Chunks split the samplesobject based on the widththat the user wants to render on the screen. - The Chunks are then mapped into a Touchableevent. The styles as width:2and height: scaleLinearHeight(mean(chunk)). This generates the meanfrom the d3-array. - The backgroundColoris being passed as a method with different parameters to the getColormethod. This will then determine the color to return based on the conditions set. - The Touchable onPressevent will call the custom handler passed into it, to set the new seek time of the track. Now this stateless component can be rendered to your child component as: render() { const {height, width} = this.props const { waveform } = this.state if (!waveform) return null; return ( <View style={{flex: 1, justifyContent: 'center'}}> <Waveform waveform={waveform} height={height} width={width} setTime={this.setTime} percentPlayed={this.props.percent} percentPlayable={this.props.percentPlayable} inverse /> <Waveform waveform={waveform} height={height} width={width} setTime={this.setTime} percentPlayed={this.props.percent} percentPlayable={this.props.percentPlayable} inverse={false} /> </View> ) } Here one of the waveform component is original and one inverted as in the SoundCloud’s player. Conclusion Here are the links to the react-native-soundcloud-waveform I’ve also made an app in react-native — MetalCloud for Metal Music fans where you can see the above component at work. Here are the links: Thanks for reading. If you liked this article, show your support by clapping to share with other people on Medium. More of the cool stuff can be found on my StackOverflow and GitHub profiles. Follow me on LinkedIn, Medium, Twitter for further update new articles.
https://www.freecodecamp.org/news/how-to-make-realtime-soundcloud-waveforms-in-react-native-4df0f4c6b3cc/
CC-MAIN-2022-05
refinedweb
893
55.44
the searchstring to the database should look like "jdbc:mysql:_databasename_", "username","password"); where the _databasename_ is the name of the group of tables in mysql. the default _databasename_ that comes with mysql is "mysql" and "test". Just write these names, not the whole path. webmaster import java.sql.*; import javax.swing.*; public class DataB { protected Connection connection; protected Statement statement; public DataB( String naamDatabank ) //naamDatabank = name of your Database { try { Class.forName("sun.jdbc.odbc.JdbcOdbcDriver"); connection = DriverManager.getConnection("jdbcdbc:"+naamDatabank); } catch(ClassNotFoundException cnfex) { JOptionPane.showMessageDialog(null, "jdbc odbc driver niet gevonden", "Fout", JOptionPane.ERROR_MESSAGE); } catch(SQLException sqlex) { JOptionPane.showMessageDialog(null, sqlex, "Fout", JOptionPane.ERROR_MESSAGE); } } twofuncky: the OP is using mysql, not the JDBC/ODBC bridge. the code you posted is not applicable ancient: im sure that the mysql documentation reads something entirely different timmytock: why are you writing your own database driver? just read the documentation that came with the mysql driver. it tells you how to install the jar, how to add it to the classpath, how to connect to the database.. it even gives sample java code, that works. (yor connection string looks like it is using a local path.. maybe you havent realised that mySql is a tcp/ip based database system, it doesnt accept local paths)
http://forums.devx.com/showthread.php?139357-help-help.-database-connection-problem&p=412004
CC-MAIN-2013-48
refinedweb
212
51.75
Wiki CherryPy / CherryPySpec The CherryPy HTTP framework Abstract -------- CherryPy is a framework for developing and deploying HTTP applications. CONTENTS ======== 1 Introduction 1 Purpose 2 Requirements 3 Terminology 4 Overview 2 Core 1 Applications 2 Requests and Responses 1 The Request object 2 The Response Object 3 Serving the Request and Response 4 Request Execution 5 Cleanup 3 Dispatchers 1 Invocation 2 request.handler 3 request.config 4 HTTP Servers 5 WSGI 6 Engines 3 Extensions 1 Hooks 1 Hook points 2 Hook objects 2 Tools 1 Decorators 2 Callables 3 Handlers 3 Toolboxes 4 Configuration 1 Scopes 2 Namespaces 1 Namespace handlers 3 Handler Attributes 4. Footnotes and References 1 Introduction CherryPy is a framework for developing and deploying HTTP applications. 1.1 Purpose This specification defines the composition and interaction of CherryPy components. 1.2 Requirements The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. See. Unless otherwise specified, all terminology used in this specification should be interpreted as that of "Hypertext Transfer Protocol -- HTTP/1.1" (RFC 2616) and "Uniform Resource Identifiers (URI): Generic Syntax and Semantics" (RFC 2396). Additional terms: handler (page handler) A callable which responds to a request, usually by returning an HTTP response body. handler (namespace handler) A callable which parses and applies a configuration entry based on a hierarchy of entry names. unexpected exception In the normal course of responding to requests, CherryPy raises known exceptions such as HTTPError, HTTPRedirect, and InternalRedirect in order to skip various parts of the request process. In addition, the exceptions SystemExit and KeyboardInterrupt are never handled by request objects, but are always passed outward to the caller. These are all "expected exceptions", and any other exception, therefore, is defined as an "unexpected exception". 1.4 Overview CherryPy consists of not one, but four separate API layers. The APPLICATION LAYER is the simplest. CherryPy applications are written as a tree of classes and methods, where each branch in the tree corresponds to a branch in the URL path. Each method is a 'page handler', which receives GET and POST params as keyword arguments, and returns or yields the (HTML) body of the response. The special method name 'index' is used for paths that end in a slash, and the special method name 'default' is used to handle multiple paths via a single handler. This layer also includes: * the 'exposed'. See Section 3. Finally, there is the CORE LAYER, which uses the core API's to construct the default components which are available at higher layers. You can think of the default components as the 'reference implementation' for CherryPy. Megaframeworks (and advanced users) may replace the default components with customized or extended components. The core API's are discussed in Section 2. 2 Core 2.1 Applications CherryPy uses an application object to implement a collection of URI's which maps to a collection of page handlers. This terminology is taken directly from Fielding, "..." The exact implementation of that mapping is dependent on the dispatcher(s) (section 2.3) which the application employs internally; by default, the external application interface only exposes a "script name" (root URI) for the entire collection. An application object MUST contain the following three attributes: * script_name: a string, containing the "mount point" for this object. A mount point is that portion of the URI which is constant for all URIs that are serviced by this application; it does not include scheme, host, or proxy ("virtual host") portions of the URI. It MUST NOT end in a slash. If the script_name refers to the root of the URI, it MUST be an empty string (not "/"). * config: a nested dict, containing configuration entries which apply to this application, of the form: {section: {entry name: value}}. The 'section' keys MUST be strings. If they represent URI paths, they MUST begin with a slash, and MUST be relative to this object's script_name. If they do not begin with a slash, they SHOULD be treated as arbitrary section names, which applications MAY use as they see fit. The 'entry name' keys MUST be strings, and in the case of path sections, SHOULD be namespaced (section 3.4). The values may be arbitrary Python values. * namespaces: a dict of configuration namespace names and handlers. See section 3.4. Application objects also MUST possess a "merge" method, that takes a single "config" argument, which MUST be a dict, nested in the same manner as the application object's config. The "merge" method MUST combine the supplied config with the application object's existing config dict in such a way that the supplied config overrides (overwrites) entries in the existing config. The "merge" method MUST NOT remove any values in the existing config unless replacing them with a new value, or performing the removal via a namespace handler. The "merge" method MUST pass all entries in the supplied config to the proper namespace handler (if any). It MUST NOT pass any entries from the existing config to namespace handlers, since these entries will have already been handled when they were first merged. Callers SHOULD NOT attempt to add config entries to the application object via any means other than passing a new config dict to the "merge" method. The specification of application objects excludes calling syntax by design; their implementation, however, MAY include additional methods which are used to associate them with an HTTP request, and even initiate the handling of each request. For example, the reference implementation extends the spec by adding a __call__ method which acts as a "WSGI application interface"; WSGI servers and middleware may then hand off request processing to such an application object by calling it. In addition, application objects MAY possess other attributes and methods which consumers can use to differentiate them. For example, a consumer might wish to use different application objects based on the "Accept" HTTP request header, in which case a cooperating creator of application objects could give each object an additional "accept" attribute. 2.2 Requests and Responses The CherryPy Request API involves the creation and handling of Request and Response objects, and also a caller. The caller is usually an HTTP server (section 2.4), although it may act through intermediaries such as a WSGI adapter (section 2.5) and/or an Engine (section 2.6). The rest of this section uses "HTTP server" to mean any combination of calling code, regardless of its architecture. The API is quite simple, and consists of five steps: 2.2.1 The Request Object An HTTP server obtains a request object by instantiating it directly. Each HTTP request MUST result in a separate request object. The constructor arguments for the request object are: local_host: an instance of http.Host corresponding to the server socket. remote_host: an instance of http.Host corresponding to the client socket. scheme: a string containing the protocol actually used for the HTTP conversation, lowercased. Usually, this will be either "http" or "https", but is open to extension. This should be provided by the server based on its own awareness of the conversation details; that is, it should not be obtained from any part of the request message itself. server_protocol: a string containing the HTTP-Version for which the server is at least conditionally compliant. Servers which meet all of the MUSTs in RFC 2616 should set this to "HTTP/1.1"; all others should use "HTTP/1.0" (lower versions are not explicitly supported). Once the HTTP server obtains the request object, it is free to modify it in any way it sees fit. Generally, this involves adding new server environment attributes such as 'login', 'multithread', 'app', 'prev' and so on. Some such additional attributes MAY be required by individual request implementations. Request objects SHOULD use hooks (section 3.1) and tools (section 3.2) to implement extensions. 2.2.2 The Response Object The HTTP server obtains a response object by instantiating it; there are no arguments. Each HTTP request MUST result in a separate response object. Once the HTTP server obtains the response object, it is free to modify it in any way it sees fit. Some additional attributes MAY be required by individual response implementations. 2.2.3 Serving the Request and Response Once the HTTP server has obtained a request and response object (and before executing the request object, section 2.2.4), it MUST register them both via: cherrypy.serving.load(req, resp) This makes the request and response objects available via cherrypy.request and cherrypy.response, respectively. 2.2.4 Request Execution When ready, the HTTP server calls the 'run' method of the Request. It takes the following arguments; the first four SHOULD be obtained directly from the HTTP Request-Line. * method: a string containing the HTTP request method token. Methods are case-sensitive. * path: a string containing the Request-URI, minus any query string. This string MUST be "% HEX HEX" decoded. * query_string: a string containing the query string from the URI. This string SHOULD NOT be "% HEX HEX" decoded. * req_protocol: a string containing the HTTP-Version of the request message; for example, "HTTP/1.1". * headers: a list of (name, value) tuples containing the request headers. * rfile: a file-like object containing the HTTP request entity. The 'run' method handles the request in any way it sees fit. The only constraint is that it MUST return the cherrypy.response object, which MUST be the same object that the HTTP server created, and which MUST have the following three attributes upon return: * status: a valid HTTP Status-Code and Reason-Phrase, e.g. "200 OK". * header_list: a list of (name, value) tuples of the response headers. * body: an iterable yielding strings. The HTTP server SHOULD then use these response attributes to build the outbound stream. Due to the vagaries of socket communications, and to reduce the burden on server authors, the HTTP server MAY iterate over the entire response body, or it may not. CherryPy application authors should not assume that page handlers which are generators will run to completion. 2.2.5 Cleanup Regardless of whether the HTTP server iterates over the entire response body or not, it MUST call the 'close' method of the request object once it has finished with the body. The 'close' method takes no args, and MUST be idempotent. Once an HTTP server obtains a request object, it MUST call the 'close' method, even if exceptions occur during the remainder of the process. Once the 'close' method returns (or errors), the HTTP server SHOULD delete all references to the request and response objects. In addition, the HTTP server MUST clear the serving object as follows: cherrypy.serving.clear() 2.3 Dispatchers A 'dispatcher' is the function or callable object which looks up the 'page handler' callable and collects config for the current request based on the path_info, other request attributes, and the application architecture. The default dispatcher discovers the page handler by matching path_info to a hierarchical arrangement of objects, starting at request.app.root. Other dispatchers MAY use other techniques to map the given URI (and other message parameters) to the proper handler. 2.3.1 Invocation Request objects MUST look up and call a dispatcher as early as possible after the headers are read and parsed, and MUST pass a single 'path_info' argument to the dispatcher. Dispatchers MUST be callable, and MUST take a single 'path_info' argument (a string). When called, they MUST set request.handler and request.config. In addition, if the handler is an "index" handler (designed to map to URI's which end in a slash ("/")), the dispatcher SHOULD set request.is_index to True. 2.3.2 request.handler The value bound to request.handler MUST be a callable object that takes no arguments. Note that instances of the builtin exceptions HTTPError, NotFound, and HTTPRedirect may be set as handlers, if appropriate. Because request.handler MUST take no arguments, it MAY be wrapped in an intermediary object which calls the "real" handler, allowing the "real" handler to be passed arguments which have been stored in the intermediary. For example, the LateParamPageHandler in the reference implementation wraps the "real" handler so that it can decide which arguments to pass to the handler (and can decide as late as possible). Such intermediaries SHOULD provide read-write access to the wrapped handler and SHOULD provide read/write access to the positional and keyword arguments which they will eventually pass to the wrapped handler. 2.3.3 request.config The value bound to request.config MUST be a new dict object (that is, not shared between requests) and MUST contain all entries found in cherrypy.config, and any entries found in cherrypy.request.app.config which apply to the current path_info or one of its hierarchical ancestors. Entries from app.config MUST override entries from cherrypy.config, and multiple entries in app.config MUST be collapsed into a single entry by retaining the value with the longest URI path. The request.config dict SHOULD also contain _cp_config entries from handler methods and their containers (such as controller classes) and merge those values into request.config. However, since the very nature of different dispatchers is to enable different controller architectures, the decision of where to attach and collect _cp_config entries is dispatcher-specific. Also, dispatchers SHOULD allow app.config entries to override _cp_config entries; this allows deployers to more easily override developer defaults. Dispatchers may be nested, and therefore a given dispatcher MAY call another and pass it a different 'path_info' argument (for example, the builtin VirtualHost dispatcher adds a prefix to the path_info value it receives before calling the next dispatcher). Some consumers may even wish to attach dispatchers as methods on their controller classes (which would then presumably set request.handler to a found method of that controller). 2.4 HTTP Servers An "HTTP server" is a component "that accepts connections in order to service [HTTP] requests by sending back [HTTP] responses." "HTTP communication usually takes place over TCP/IP connections." Server objects MUST possess the following attributes: * protocol_version: a string containing the HTTP-Version for which the server is at least conditionally compliant. * start: a method which starts the HTTP server. In order to make servers easier to write, this method MAY block until the server is stopped or interrupted. * ready: a boolean state flag, which the server MUST set internally to signal whether or not it is ready to receive requests from clients. * stop: a method which stops the HTTP server. This method MUST block until the server is truly stopped (all threads idle or shutdown and all sockets closed, including the listening socket). * restart: a method which calls stop, then start. * max_request_body_size: * max_request_header_size: * thread_pool: Servers which communicate over TCP SHOULD possess these additional attributes: * reverse_dns: * socket_file: * socket_host: * socket_port: * socket_queue_size: * socket_timeout: Servers which use SSL SHOULD possess these additional attributes: * ssl_certificate: * ssl_private_key: 2.5 WSGI See PEP 333. 2.6 Engines Engine objects MUST possess the following attributes: * state: a state flag, one of: * STOPPED = 0 * STARTING = None * STARTED = 1 * block: a method which MUST block until the 'state' is STOPPED or an exception is raised. This allows a main thread to wait while child threads respond to HTTP requests. If any exception is raised, the method SHOULD call its own 'stop' method. If KeyboardInterrupt or SystemExit is raised, the method MUST call server.stop. * restart: a method which MUST call the 'stop' method, and then the 'start' method. * start: a method which takes a single optional 'blocking' argument. If True, the 'start' method MUST call the 'block' method. The 'start' method MAY temporarily set 'state' to STARTING, but MUST set it to STARTED before either returning or blocking. * stop: a method which MUST set 'state' to STOPPED. Note that this will signal any thread which has called 'block' to stop blocking. * wait: a method which must block until the 'state' is STARTED. This allows a main thread to wait until the engine has started without having to block after that point. 3 Extensions 3.1 Hooks Hooks are optional callables which are invoked at various points in the request-handling process. They MAY be declared (attached) by the core, by application developers, and by deployers. 3.1.1 Hook points Each hook callable is bound to a "hook point", a named calling point inside the request-handling process. The exact list of available hook points is flexible, and SHOULD be specified by the request object (section 2.2.1). Request objects SHOULD implement the following hook points, and SHOULD call them according to the corresponding descriptions: * on_start_resource: called after the headers are read and parsed, and a page handler is located. * before_request_body: called just before the request entity body is read from the incoming stream. * before_handler: called just before the page handler is called. * before_finalize: called just before the response entity is checked for validity. For page handlers which buffer their output, this should be called after the entire response body has been buffered. For page handlers which stream their output, this should be called after the generator has been returned, but before it has been iterated over. This may be called more than once if errors occur. * on_end_resource: called just before the "run" method of the request object returns. * on_end_request: called after the entire response message has been written out to the client. This allows hook callables to run after unbuffered page handlers have terminated. In general, this should be run inside the request object's "close" method. * before_error_response: called just before generating a response due to an unexpected exception. * after_error_response: called just after generating a response due to an unexpected exception. 3.1.2 Hook objects In order to facilitate the declaration, inspection, and invocation of hook callables, each one MUST be wrapped in a Hook object. Each Hook object MUST possess the following attributes: * callback: The hook callable that this Hook object is wrapping, which will be called when the Hook is called. * failsafe: If True, the callback MUST be guaranteed to run even if other callbacks from the same call point raise any exceptions (other than KeyboardInterrupt and SystemExit). Because errors may be silenced by failsafe hooks, unexpected exceptions which occur during the execution of a hook MUST be logged. * priority: Defines the order of execution for a list of Hooks at the same hook point. Priority numbers SHOULD be limited to the closed interval [0, 100], but values outside this range are acceptable, as are fractional values. * kwargs: A set of keyword arguments that will be passed to the callable on each call. 3.2 Tools The Tool interface allows pluggable extensions, both simple and complex, to be declared by a uniform API. It also allows request objects to run code between the page handler lookup (section 2.3.2) and the first hook (section 3.1). This is essential to provide dynamic hook declarations based on the configuration in effect for each request. Tool objects MUST possess a single "_setup" method which takes no arguments. This method MUST be called after the request.handler has been obtained, and before the first hook point is reached. The reference implementation uses toolboxes (section 3.3), each with its own configuration namespace (section 3.4.2), to accomplish this. Tools SHOULD belong to a toolbox. The "_setup" method SHOULD attach hooks in order to invoke functionality at appropriate points in the request process. 3.2.1 Decorators Tool objects SHOULD be callable, and this feature SHOULD be used as a decorator to declare that a given tool applies to a given handler. For example, given a Tool object called "tools.proxy", the following code snippet would enable the tool for the given handler: @tools.proxy(base="") def whats_my_base(self): return cherrypy.request.base whats_my_base.exposed = True Note in particular that the Tool object must be called to be used in this fashion. This allows application developers to supply keyword arguments to the decorator that will then be used by the tool when its "_setup" method is called. That is, the following code is not expected to work (its behavior is undefined by this specification), since tools.proxy is used as a decorator itself, rather than the result of tools.proxy(): @tools.proxy def whats_my_base(self): return cherrypy.request.base whats_my_base.exposed = True Note also that the reference implementation does not wrap the original function; instead, it asserts that the decorated handler function has a configuration attribute (section 3.4.3) which enables the tool. Tool implementations SHOULD do likewise. 3.2.2 Callables Tool objects SHOULD expose an attribute named "callable", which allows the functionality of the tool to be invoked anywhere, most likely from within a page handler. If the tool object does not have invokable functionality, or if it uses cooperating hooks that are not useful in isolation, it SHOULD NOT expose the "callable" attribute. 3.2.3 Handlers Some tools are designed to circumvent the normal calling of a page handler; for example, a tool which finds static files and serves them as the response does not need to then call a separate handler. Such tools SHOULD expose a "handler" method, which allows the tool to be declared in place of a "normal" page handler method: from cherrypy.tools import staticdir class Root: nav = staticdir.handler(section="/nav", dir="nav", root=absDir) The "handler" method, if provided, MUST return a callable which can be used as a request.handler callable. That callable SHOULD have its "exposed" attribute set to True before being returned from the "handler" method. The reference implementation includes a HandlerTool class which implements these recommendations. 3.3 Toolboxes A toolbox is a set of tools sharing a single namespace. CherryPy uses the "tools" namespace for the built-in tools. Distinct toolboxes should be unaware of each other. 3.4 Configuration In CherryPy, "configuration" refers to the (declarative) values and attributes which affect the (imperative) behavior of a running program. Implementations MUST provide a means of declaring configuration values (indeed, they can hardly prevent normal code from being one); they MAY do so in formats other than Python code (such as INI-style config files). 3.4.1 Scopes CherryPy configuration is separated in several ways, each set of boundaries mapping directly to some user need. Configuration data MUST allow for two independent layers: that which applies to a single application and that which applies to ALL applications. The former is called "(per-)application" config, and the latter is called "global" (or "site-wide") config. Application config is further separated by URI in a hierarchical fashion. That is, each configuration entry for a given URI MUST apply to that URI and all its child URI's (all URI's that begin with the given URI), unless explicitly counteracted by an opposing entry for a child URI. In some cases, two different applications may share a common URI. For example, a WSGI dispatcher may choose one over another based on the contents of the "Accept" header. A more common example occurs when one application is "mounted" at "/" and another mounted at "/foo". When this occurs, the configuration of each application MUST be isolated to that application; that is, configuration entries from one application MUST NOT "leak" into another, even if they share the same URI-space. 3.4.2 Namespaces CherryPy config entries, whether global- or application-scoped, SHOULD be "namespaced"; that is, they should use a hierarchical naming scheme for the keys. The reference implementation, for example, adopts the Python "dotted attribute" notation, so that e.g. "tools.sessions.name" refers to a "tools" container (object) with a "sessions" attribute, and a "name" subattribute. This allows the parsing and activation of configuration data to be controlled by smaller "handler" components (at the least, one for each top-level namespace), rather than by a monolithic parser. 3.4.2.1 Namespace handlers Namespace handlers are objects which parse and activate configuration entries based on a hierarchy. In order to reduce confusion and allow for easy extension, CherryPy implementations SHOULD use sets of namespace handlers exclusively for this task. Each handler in a set MUST be either a callable which takes a key and a value argument, or a Python 2.5-style context manager [1] whose __enter__ method returns such a callable. The "key" argument MUST by a string, and that key MAY include further hierarchical delimiters (which the callable will parse on its own). The value's type and range are variable for each entry. 3.4.3 Page Handler Attributes In addition to allowing application developers and deployers to associate configuration with specific URI's, the implementation SHOULD allow them to associate configuration entries with specific page handlers. Because the mapping of URI's to page handlers is not 1:1, this allows maximum developer flexibility. 4. Footnotes and References [1] For a complete discussion of the use and requirements of context managers, see Updated
https://bitbucket.org/cherrypy/cherrypy/wiki/CherryPySpec
CC-MAIN-2018-26
refinedweb
4,162
55.03
There are various compound assignment operators, however it is only necessary to know the 4 basic compound assignment operators for the exam, being as follows: - += (Addition compound) - -= (Subtraction compound) - *= (Multiplication compound) - /= (Division compound) Essentially, they’re just a lazy way for developers to cut down on a few key strokes when typing their assignments. Consider the following examples that I’ve created to demonstrate this: package operators; public class AssignmentsOperator { public static void main(String[] args) { int x = 5; // This is the "longhand" way of adding 5 to x x = x + 5; // You could use the compound assingment as follows, to achieve the same outcome x += 5; // Lets look at the other 3... // This... x = x - 1; // ...is the same as x -= 1; // This... x = x * 10; // ...is the same as x *= 10; // This... x = x / 10; // ...is the same as x /= 10; } } That is it! Any suggestions, please let me know! Happy coding.
http://www.jameselsey.co.uk/blogs/techblog/compound-assignment-operators/
CC-MAIN-2016-50
refinedweb
154
62.88
I really really really want to have debugger integration with my Vim setup and while the plugins for old Vim were a little wacky, the new architecture of NeoVim seems promising, so I decided to give lldb.nvim a go. It didn't work. This is a ( Step 1: update Update your neovim to the latest release to avoid fighting issues that have already been solved. At the time of writing, I used: - nvim 0.1.2 from Homebrew - OS X 10.10.5 - XCode 7.0 - lldb-340.4.70 Step 2: Diagnose PyThreadState_get errorIf you're on OS X, chances are you have more than one Python version installed and that's where the trouble comes from. If you get this error message >>> import lldb Fatal Python error: PyThreadState_Get: no current thread it's most likely because you're trying to import a module that has been linked with a different version of Python. The lldb module comes with the XCode developer tools and was linked with the default system version of Python which lives in (remember this) /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python so this is the python version you should use to run the lldb.nvim remote plugin. On my system, BTW, lldb module lives inso this is the python version you should use to run the lldb.nvim remote plugin. On my system, BTW, lldb module lives in /Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python Step 3: Install neovim Python module The neovim module has probably already been installed with neovim but perhaps not in the correct Python version. You can try to import neovimin the system Python. If it fails, you'll need to install it using easy_installor pip: sudo /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python -m easy_install neovim This will install the neovim package into the system Python distribution (needs sudo) using the easy_install tool. Step 4: Configure neovim to use the system Python In your neovim config file, add this line: let g:python_host_prog = '/System/Library/Frameworks/Python.framework/Versions/2.7/bin/python' This ensures that neovim will start the system Python (which has access to lldb and neovim modules) to host the plugin. After this, you should be all set! Step 5: Using $PYTHONPATH? If you do use $PYTHONPATH with your non-system Python, you'll have trouble as well. Before launching the system Python from nvim, you'll need to clean this variable otherwise the packages will interfere with the system Python's packages. I do that using a small wrapper script I do that using a small wrapper script ~/syspython2which gets invoked from nvim as the g:python_host_prog #!/bin/sh # running the OS X system python. Required to import the lldb module. export> ~/syspython2.log #echo "$@" >> ~/syspython2.log /System/Library/Frameworks/Python.framework/Versions/2.7/bin/python "$@" Step 6: Diagnose Still having trouble? - Don't forget to run :UpdateRemotePlugins - Enable logging in the ~/syspython2script - Check using pstree | lessif neovim is launching the correct Python binary - Double-check you can import neovim and lldb modules from the system Python - Make sure lldb.neovim is installed correctly - the file lldb.nvim/rplugin/python/lldb_nvim.pymust exist - NeoVim also tries to load Python 3 plugins, you may need to do the same for Python 3 - Try to debug /usr/local/Cellar/neovim/0.1.2/share/nvim/runtime/autoload/remote/host.vimusing debugging vim methods - More info about lldb Python module here on StackOverflow
https://blog.rplasil.name/2016/03/
CC-MAIN-2018-26
refinedweb
584
65.22
The .NET Stacks #10: .NET 5 taking shape, how approachable is .NET, a talk with Jeremy Likness, more! We discuss the latest on .NET 5, the approachability of .NET, a talk with Jeremy Likness, and more! We had a tremendously busy week in .NET and it shows in this week’s edition. This week, we: - Discuss the latest in .NET 5 - Think about what it’s like to be a beginner in the .NET ecosystem - Kick off an interview with Jeremy Likness, Microsoft’s Sr. PM of .NET Data - Take a trip around the community Also, say 👋🏻 to my parents! They just subscribed and this is the only line they’ll understand. 🤓 .NET 5 is close, so close This week, we hit the preview 7 release for .NET 5 (see the notes for .NET 5 Preview 7, EF Core 5 Preview 7, and ASP.NET Core updates in the new preview, as well as Steven Toub’s post on .NET 5 performance improvements). The next release for .NET 5 will be Release 8, and then there will the two RCs (each with “go live” licenses). .NET 5 is slated for official release in early November. What’s new in these previews? A few items of note: - Blazor WASM apps now target .NET 5 - Blazor improvements in debugging, accessibility, and performance - In EF, a DbContextFactory, ability to clear the DbContextstate, and transaction savepoints It looks like single-file apps, the ability for .NET Core apps to be published and distributed as a single executable file, is coming in the next release (release 8). How would you help a beginner get started on .NET? Dustin Gorski published a great post last week called .NET for Beginners. It certainly isn’t a quick read, but I would recommend giving it a read when you get a few minutes. He discusses why many feel that .NET isn’t very approachable for newcomers. It opened my eyes: I was aware of most of his criticisms, but I’ve been dealing with them for so long I haven’t thought about how difficult it can be for newcomers coming into .NET for the first time. From my perspective, what I agreed most with was: - How you have to know a lot to get started - How feature bloat really prevents beginners from knowing the best way to do something, when there are so many ways to do it (with varying performance impacts) - How the constant change and re-architecting of the platform makes it almost impossible to keep up Think about it: if someone can to you and asked how to get started in .NET using a sample application, what would you say? Would you start by explaining .NET Core and how it’s different than .NET Framework? And mention .NET Standard as a bridge between them? Or get them excited about .NET 5? Would you also bring up C# and F# and VB and the differences between them? What about Xamarin and Mono? What about a simple web site? Should they get started with Blazor? Traditional MVC? Razor Pages? One of my biggest takeaways from the piece: you have to know a lot of trivia to start developing in .NET. Feature bloat is real, especially when it comes to C#. Take, for example, the promise of immutability in C# 9 with records (which I’ve written about). Records are similar to—but have immutable features over—structs. The exact reasons aren’t important here (that records are meant for immutability and prevent boilerplate code) but a valid complaint shows: why are we introducing a new construct into the language where we could have improved on what already exists? If someone wanted to use immutability in .NET, what would you say now? Use records in C# 9? Use readonly structs? What about a tuple class? Or use F#? This is not to say Microsoft isn’t aware of all these challenges and isn’t trying to improve. They definitely are! You can head over to try.dot.net to run C# in the browser (with an in-browser tutorial). You can run code in the docs and in Microsoft Learn modules. C# 9 top-level statements take away that pesky Main method in console apps (yet another shameless plug). Coming from the slow-moving days from the bloated System.Web namespace, I never thought I would be hearing (and agreeing) about gripes of .NET moving too fast. This is a testament to the hard work by the folks at Microsoft. Taking hints from .NET 5, I hope .NET allows developers to focus on what’s important, not try to be everything to everyone, and avoid feature bloat in C#. I think a continued focus on approachability positively impacts all .NET developers. Dev Discussions: Jeremy Likness, Sr. PM of Data at Microsoft<< As a developer at heart, what kind of projects have you been tinkering with? My first consulting projects … involved XAML via WPF then Silverlight. I was a strong advocate of Silverlight because I had built some very complex web apps using JavaScript and managing them across browsers was a nightmare. …?” This is just a small portion of my interview with Jeremy. There is so much more at my site, including Jeremy’s unusual path and a crazy story about his interview! 🌎 Last week in the .NET world 🔥 The Top 3 - We’re getting so close to .NET 5: Jeremy Likness announces EF Core 5 Preview 7, Richard Lander announces .NET 5 Preview 7, and Microsoft also announces ASP.NET Core updates in .NET 5 Preview 7. - The .NET Foundation has begun the voting process for board elections. - Jon Hilton asks if Blazor EditForms is essential or too much magic. 📢 Announcements - Kayla Cinnamon announces the release of Windows Terminal Preview 1.2. - Tara Overfield announces the .NET Framework July 2020 update preview. - It looks like many-to-many is now working in the EF daily builds. - The Azure Service Fabric 7.1 announced a second refresh release. - The Windows team announced the first update to the Windows Package Manager. - Daniel Jurek talks about the July release of the Azure SDK. 📅 Community and events - SciFiDevCon is coming next week. - We had three community standups this week: Desktop discusses EF Core 5 updates, Entity Framework discusses scaffolding with Handlebars, and ASP .NET discusses web tools with Sayed Hashimi. - The DotNet Docs Show talks .NET desktop app development with Olia Gavrysh. 😎 Blazor - Lee Richardson compares Blazor WASM with Angular. - Ramkumar Shanmugam discusses how to use bUnit for Blazor and integrating it into your Azure pipeline. - Eilon Lipton provides a monthly update for hybrid Blazor apps in Mobile Blazor Bindings. - Marinko Spasojevic discusses Blazor WebAssembly forms, form validation, and @ref directives. - Andrea Chiarelli secures Blazor WebAssembly apps. - Marinko Spasojevic sorts in Blazor WebAssembly and ASP.NET Core Web API. - Matthew Jones builds Conway’s Game of Life in C# and Blazor WASM. 🚀 .NET Core - Michal Bialecki merges migrations in EF 5. - David Grace discusses creating your own logging provider to log to text files in .NET Core. - Thomas Levesque works through ASP.NET Core 3, IIS, and empty HTTP headers. - Carl Rippon does integration testing on ASP.NET Core Web API controllers with SQL. - Michal Bialecki adds EF Core 5 migrations to a .NET 5 project. - Jon P. Smith talks tips and techniques for configuring EF Core. - Jason Gaylord talks about adding Newtonsoft.JSON back to .NET Core 3.1 and later. - David Grace uses hosted services in ASP.NET Core to create a “most viewed” background service. ⛅ The cloud - Damien Bowden waits for Azure Durable Functions to complete. - Steve Fenton talks about how to start and stop an Azure App Service on a schedule with Azure Logic Apps. - Angie Doyle provides a monthly update on the Azure Portal. - Chris Noring learns durable functions with .NET Core and C#. - Jason Gaylord creates an Alexa skill using .NET Core and Azure. - Daniel Krzyczkowski talks event sourcing with Azure SQL and EF Core. - Aaron Powell continues working with GraphQL on Azure. - Microsoft appears to be working on an Azure-powered cloud PC service. - Steve Gordon introduces Docker ECS integration for AWS. - Itay Podhajcer deploys a serverless RabbitMQ cluster on Azure with .NET. - Damien Bowden uses Azure Key Vault and managed identities with Azure Functions. - Azure Tips and Tricks has a new article about Azure Functions and secure configuration with Azure Key Vault. - Over at AWS, Steve Roberts talks about AWS X-Ray for tracing distributed .NET apps. 📔 Languages - Josef Ottosson talks about how C# 9 records will change his life. - Derek Comartin wants you to stop throwing exceptions and start being explicit. - Khalid Abuhakmeh has fun with LINQ expression visitors. - Dave Brock goes on a C# 9 scavenger hunt. - Jeremy Likness looks behind the IQueryable curtain. - Jonathan Allen talks about lambda improvements with C# 9. - Anand Gupta dives deep on .NET garbage collection. - Jonathan Allen talks about range operators and pattern-matching in C# 9. - Ian Griffiths talks about supporting older runtimes with C# 8 nullable references. - Jonathan Channon talks about applicatives and custom operators in F#. 🔧 Tools - Khalid Abuhakmeh writes Xamarin.Mac Apps With JetBrains Rider. - Michael Shpilt offers some tricks on working with the Immediate Window in Visual Studio. - Adam Storr loves Resharper 2. - Joseph Guadagno debugs Azure Function Event Grid triggers locally with JetBrains Rider. - Over at the NDepend blog, some tips for working with Visual Studio files and layouts. - j2i.net does a review of the Windows Terminal Preview. - Scott Hanselman explores RepoDB, a .NET open source hybrid ORM library. 📱 Xamarin - David Ortinau draws UI with Xamarin.Forms shapes and paths. - Stefan Nenchev builds a sample app with Xamarin.Forms, Telerik UI, MVVMFresh, and EF Core. - Charlin Agramonte works through multilingual support in Xamarin.Forms. - Rendy Del Rosario creates a Xamarin Binding Library for iOS And Android. - Nick Randolph says: Xamarin.Forms is not just a XAML platform. - The Xamarin Podcast talks about Xamarin.Forms 4.7. 🎤 Podcasts - The .NET Rocks podcast has a lively discussion with Sebastien Lambla about Microsoft’s role in the OSS ecosystem. - The Microsoft 365 Developer Podcast discusses using Blazor to create Microsoft Teams tabs. - The 6-Figure Developer podcast talks with Edward Thomson about GitHub Actions. - The Coding Blocks Podcast talks about architecting for low-risk releases. - At Technology and Friends, David Giard talks with Dave Pine about .NET 5 and the Docs team. - The Azure DevOps Podcast talks with Jimmy Bogard about AutoMapper and MediatR. 🎥 Videos - The Xamarin Show discusses shapes, paths, and app themes in 4.7. - Data Exposed talks about automated backups for Azure SQL.
https://www.daveabrock.com/2020/08/01/dotnet-stacks-10/
CC-MAIN-2021-39
refinedweb
1,765
69.79
- Type: Bug - Status: Resolved - Priority: Blocker - Resolution: Fixed - Affects Version/s: 0.9.3 - - Component/s: storm-multilang - Labels:None - Environment:storm 0.9.3-incubating 1. steps to reproduce 1) write a topology with a python bolt, run the topology on storm; then there will be two process for the bolt: the worker(java process for ShellBolt), python process. 2)kill -9 the worker(java process for ShellBolt); 2. expected behavior the worker exit and the python process exist 3. actual, incorrect behavior the worker exit, but the python process never exist and fall into endless loop 4. analyse in storm.py,read tuple from stdin with follow function: def readMsg(): msg = "" while True: line = sys.stdin.readline()[0:-1] if line == "end": break msg = msg + line + "\n" return json_decode(msg[0:-1]) when sys.stdin is closed, EOF is encountered, readline() return None, so readMsg fall into endless loop. - links to -
https://issues.apache.org/jira/browse/STORM-351
CC-MAIN-2020-16
refinedweb
153
58.28
- NAME - SYNOPSIS - The Class::MethodMaker Method Installation Engine - Non-data-structure components - AUTHOR NAME Class::MethodMaker::Engine - The parameter passing, method installation & non-data-structure methods of Class::MethodMaker. SYNOPSIS This class is for internal implementation only. It is not a public API. The non-data-structure methods do form part of the public API, but not called directly: rather, called through the use/ import interface, as for data-structure methods. The Class::MethodMaker Method Installation Engine import. - SYNOPSIS Class::MethodMaker->import([scalar => [+{ -type => 'File::Stat', -forward => [qw/ mode size /], '*_foo' => '*_fig', '*_gop' => undef, '*_bar' => '*_bar', '*_hal' => '*_sal', }, qw/ -static bob /, ] ]); parse_options Parse the arguments given to import and call create_methods appropriately. See main text for options syntax. - SYNOPSIS,}, )}, - ARGUMENTS - target_class The class into which to install components - args The arguments to parse, as a single arrayref. - options A hashref of options to apply to all components created by this call (subject to overriding by explicit option calls). - renames A hashref of renames to apply to all components created by this call (subject to overriding by explicit rename calls). create_methods Add methods to a class. Methods for multiple components may be added this way, but create_methods handles only one set of options. parse_options is responsible for sorting which options to apply to which components, and calling create_methods appropriately. - SYNOPSIS Class::MethodMaker->create_methods($target_class, scalar => bob, +{ static => 1, type => 'File::Stat', forward => [qw/ mode size /], }, +{ '*_foo' => '*_fig', '*_gop' => undef, '*_bar' => '*_bar', '*_hal' => '*_sal', } ); - ARGUMENTS - targetclass The class to add methods to. - type The basic data structure to use for the component, e.g., scalar. - compname Component name. The name must be a valid identifier, i.e., a continguous non-empty string of word ( \w) characters, of which the first may not be a digit. - options}). - renamesis installed as xx_fig, *_baris installed as xx_bar, *_wizis not installed, *_halis installed as xx_sal, *_gopis not installed, and *_tomis installed as xx_tom. The value may actually be an arrayref, in which case the function may be called by any of the multiple names specified. install_methods - SYNOPSIS Class::MethodMaker->install_methods ($classname, { incr => sub { $i++ }, decr => sub { $i-- }, } ); - ARGUMENTS - target The class into which the methods are to be installed - methods The methods to install, as a hashref. Keys are the method names; values are the methods themselves, as code refs. Non-data-structure components new. Options - -hash The contructor. - -init This option causes the new method to call an initializor responsiblity of the user to ensure that an initmethod (or whatever name) is defined. - -singleton Creates a basic constructor which only ever returns a single instance of the class: i.e., after the first call, repeated calls to this constructor return the same instance. Note that the instance is instantiated at the time of the first call, not before. abstract use Class::MethodMaker [ abstract => [ qw / foo bar baz / ] ]; This creates a number of methods that will die if called. This is intended to support the use of abstract methods, that must be overidden in a useful subclass. copy. AUTHOR Martyn J. Pearce <[email protected]>
https://metacpan.org/pod/release/SCHWIGON/Class-MethodMaker-2.17/lib/Class/MethodMaker/Engine.pm
CC-MAIN-2016-07
refinedweb
509
55.84
In a previous blog post, assuming a node running and waiting for RPCs on address 127.0.0.1:9731. Since we will ask this node to forge a request, we really need to trust it, as a malicious node could send a different binary transaction from the one we sent him. Let’s take back our first operation: { " } ] } So, we need to translate this operation into a binary format, more amenable for signature. For that, we use a new RPC to forge operations. Under Linux, we can use the tool curl to send the request to the node: curl -v -X POST -H "Content-type: application/json" --data '{ " } ] }' Note that we use a POST request (request with content), with a Content-type header indicating that the content is in JSON format. We get the following body in the reply : "ce69c5713dac3537254e7be59759cf59c15abd530d10501ccf9028a5786314cf08000002298c03ed7d454a101eb7022bc95f7e5f41ac78d0860303c8010080c2d72f0000e7670f32038107a59a2b9cfefae36ea21f5aa63c00" This is the binary representation of our operation, in hexadecimal format, exactly what we were looking for to be able to include operations on the blockchain. However, this representation is not yet complete, since we also need the operation to be signed by the manager. To sign this operation, we will first use tezos-client. That’s something that we can do if we want, for example, to sign an operation offline, for better security. Let’s assume that we have saved the content of the string ( ce69...3c00 without the quotes) in a file operation.hex, we can ask tezos-client to sign it with: tezos-client --addr 127.0.0.1 --port 9731 sign bytes 0x03$(cat operation.hex) for bootstrap1 The 0x03$(cat operation.hex) is the concatenation of the 0x03 prefix and the hexa content of the operation.hex, which is equivalent to 0x03ce69...3c00. The prefix is used (1) to indicate that the representation is hexadecimal ( 0x), and (2) that it should start with 03, which is a watermark for operations in Tezos. We get the following reply in the console: Signature: Wonderful, we have a signature, in base58check format ! We can use this signature in the run_operation and preapply RPCs… but not in the injection RPC, which requires a binary format. So, to inject the operation, we need to convert to the hexadecimal version of the signature. For that, we will use the base58check package of Python (we could do it in OCaml, but then, we could just use tezos-client all along, no ?): pip3 install base58check python >>>import base58check >>>base58check.b58decode(b').hex() '09f5cd86126de436e3a7' All signatures in Tezos start with 09f5cd8612, which is used to generate the edsig prefix. Also, the last 4 bytes are used as a checksum ( e436e3a7). Thus, the signature itself is after this prefix and before the checksum: 637e08251cae64...174a8ff0d. Finally, we just need to append the binary operation with the binary signature for the injection, and put them into a string, and send that to the server for injection. If we have stored the hexadecimal representation of the signature in a file signature.hex, then we can use : curl -v -H "Content-type: application/json" '' --data '"'$(cat operation.hex)$(cat signature.hex)'"' and we receive the hash of this new operation: "oo1iWZDczV8vw3XLunBPW6A4cjmdekYTVpRxRh77Fd1BVv4HV2R" Again, we cheated a little, by using tezos-client to generate the signature. Let’s try to do it in Python, too ! First, we will need the secret key of bootstrap1. We can export from tezos-client to use it directly: $ tezos-client show address bootstrap1 -S Hash: tz1KqTpEZ7Yob7QbPE4Hy4Wo8fHG8LhKxZSx Public Key: edpkuBknW28nW72KG6RoHtYW7p12T6GKc7nAbwYX5m8Wd9sDVC9yav Secret Key: unencrypted:edsk3gUfUPyBSfrS9CCgmCiQsTCHGkviBDusMxDJstFtojtc1zcpsh The secret key is exported on the last line by using the -S argument, and it usually starts with edsk. Again, it is in base58check, so we can use the same trick to extract its binary value: $ python3 >>> import base58check >>> base58check.b58decode(b'edsk3gUfUPyBSfrS9CCgmCiQsTCHGkviBDusMxDJstFtojtc1zcpsh').hex()[8:72] '8500c86780141917fcd8ac6a54a43a9eeda1aba9d263ce5dec5a1d0e5df1e598' This time, we directly extracted the key, by removing the first 8 hexa chars, and keeping only 64 hexa chars (using [8:72]), since the key is 32-bytes long. Let’s suppose that we save this value in a file bootstrap1.hex. Now, we will use the following script to compute the signature: import binascii operation=binascii.unhexlify(open("operation.hex","rb").readline()[:-1]) seed = binascii.unhexlify(open("bootstrap1.hex","rb").readline()[:-1]) from pyblake2 import blake2b h = blake2b(digest_size=32) h.update(b'\x03' + operation) digest = h.digest() import ed25519 sk = ed25519.SigningKey(seed) sig = sk.sign(digest) print(sig.hex()) The binascii module is used to read the files in hexadecimal (after removing the newlines), to get the binary representation of the operation and of the Ed25519 seed. Ed25519 is an elliptive curve used in Tezos to manage tz1 addresses, i.e. to sign data and check signatures. The blake2b module is used to hash the message, before signature. Again, we add a watermark to the operation, i.e. \x03, before hashing. We also have to specify the size of the hash, i.e. digest_size=32, becase the Blake2b hashing function can generate hashes with different sizes. Finally, we use the ed25519 module to transform the seed (private/secret key) into a signing key, and use it to sign the hash, that we print in hexadecimal. We obtain: This result is exactly the same as what we got using tezos-client ! We now have a complete wallet, i.e. the ability to create transactions and sign them without tezos-client. Of course, there are several limitations to this work: first, we have exposed the private key in clear, which is usually not a very good idea for security; also, Tezos supports three types of keys, tz1 for Ed25519 keys, tz2 for Secp256k1 keys (same as Bitcoin/Ethereum) and tz3 for P256 keys; finally, a realistic wallet would probably use cryptographic chips, on a mobile phone or an external device (Ledger, etc.).. 9 thoughts on “An Introduction to Tezos RPCs: Signing Operations” Fabrice, you talk about signing the operation using tezos-client, which can then be used with the run_operation, however when . you talk about doing it in a script, it doesn’t include the edsig or checksum or converted back into a usable form for run_operations. Can you explain how this is done in a script? Thanks Anthony You are right, `run_operation` needs an `edsig` signature, not the hexadecimal encoding. To generate the `edsig`, you just need to use the reverse operation of `base58check.b58decode`, i.e. `base58check.b58encode`, on the concatenation of 3 byte arrays: 1/ the 5-bytes prefix that will generate the initial `edsig` characters, i.e. `0x09f5cd8612` in hexadecimal 2/ the raw signature `s` 3/ the 4 initial bytes of a checksum: the checksum is computed as `sha256(sha256(s))` Hi Fabrice. I will aprreciate if you show the coding of step 3. My checksum is always wrong. Thanks The checksum is on prefix + s. Here is a python3 script to do it: ./hex2edsig.py Fabrice, Thanks for the information would you be able to show the coding as you have done in your blog? Thanks Anthony Great information, but can the article be updated to include the things discussed in the comments? As I can’t see the private key of bootstrap1 I can’t replicate locally. Been going around in circles on that point
http://www.ocamlpro.com/2018/11/21/an-introduction-to-tezos-rpcs-signing-operations/
CC-MAIN-2020-16
refinedweb
1,202
55.24
Credit Card Fraud Detection using Random Forests Introduction Before proceeding towards the model, let us understand what credit card fraud is and how it happens. Credit card fraud is an unauthorized transaction done without your credit card. Thieves access the credit card credentials by phishing or credit card skimming. So by detecting such fraud, customers will not be charged for the items they did not purchase. Challenges Involved - A lot of transactions are processed throughout the day, so an enormous amount of data will be generated. Our model must be fast enough to detect the fraud within no time. - The number of fraud transactions throughout the day will be less, so the data will be imbalanced. - Availability of the data is very difficult because most of the transactions are private. - Incorrect classification of the data is another major issue, as every fraud transaction is not reported so they will be marked as valid transactions. - The scammers use robust methods against the models. Tackling the challenges - The model must be simple and fast so that fraudulent transactions are identified in less span of time. - Imbalance datasets can be handled using many techniques like downsampling and upsampling. - We can apply principal component analysis to the data to maintain the user's privacy. - It is necessary to have a trustworthy source for the data extraction in order to get a better model. - Make a simple and interpretable model so that when the scammer gets adapted to it, you can immediately change the model with some tweaks. So let's start the implementation of the model. Before implementing the model download the data from this link. We will be making our model in the jupyter notebook. Implementation Importing all the necessary libraries. import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt from matplotlib import gridspec Load the downloaded dataset using pandas. A correct path should be provided in order to load the dataset properly. It's better to keep the dataset and the model in the same directory. Let us analyze the data. data.head() Here Time, V1, V2, … are independent features of the dataset. There is no name, card number, or any other information in the dataset because the original was formatted using principal component analysis in order to protect the user's privacy. # getting shape and other details of the data print(data.shape) data.describe() Our dataset has 284807 data points and 31 columns. In 31 columns. there are 30 independent features and 1 dependent feature. Now, let us check how imbalanced our dataset is. fraud_ = data[data["Class"] == 1] valid_ = data[data["Class"] == 0] print("Total fraud transactions: ",len(fraud_)) print("Total valid transactions: ",len(valid_)) Total fraud transactions: 492 Total valid transactions: 284315 As we can see that there are only 492 fraudulent transactions and 284315 valid transactions. It means that the dataset is highly imbalanced. Not even one percent of the data has fraud transactions. So it indicates that the dataset is highly imbalanced. (A perfectly balanced dataset has an equal number of observations for all possible level combinations) Let us train a model using this unbalanced dataset. If the results are not good, we can change the imbalanced dataset to a balanced one by downsampling or upsampling the dataset. Checking the amount details of fraudulent and valid transactions. fraud_.Amount.describe() count 492.000000 mean 122.211321 std 256.683288 min 0.000000 25% 1.000000 50% 9.250000 75% 105.890000 max 2125.870000 Name: Amount, dtype: float64 valid_.Amount.describe() count 284315.000000 mean 88.291022 std 250.105092 min 0.000000 25% 5.650000 50% 22.000000 75% 77.050000 max 25691.160000 Name: Amount, dtype: float64 Observe that the average amount of fraudulent transactions is more than that of valid ones. Due to this, it is necessary to predict whether a transaction is fraudulent or valid. Plotting a correlation matrix. The correlation matrix gives us an idea about how features can correlate with each other. They highlight the features that are necessary for the prediction. Correlation: If the value of correlation is positive then the two variables move with coordination in the same direction, and if the value is negative then two variables move with coordination in the opposite direction. The value of the correlation coefficient lies between -1 to 1. 1 indicates that two variables have a perfect positive relationship and -1 indicates that the two variables have a perfect negative relationship. corrMatrix = data.corr() fig = plt.figure(figsize=(10,10)) sns.heatmap(corrMatrix, square = True, vmax = .5, vmin = -.5) plt.show() From the following heatmap, we can say that many of the features do not correlate which each other. V7 and V20 are features that are positively related to the Amount feature. At the same time, V2 and V5 are the features that are negatively related to the Amount feature. Now separate the dependent and independent features of the dataset for training, i.e., separate the data into X and Y where X is input data and Y is output data. y = data["Class"] x = data.drop(["Class"], axis = 1) print(x.shape, y.shape) # converting it to numpy array with no column Ydata = y.values Xdata = x.values (284807, 30) (284807,) Splitting the dataset for training and testing using sklearn’s train_test_split. # importing train_test_split from sklearn.model_selection import train_test_split xtrain, xtest, ytrain, ytest = train_test_split(Xdata, Ydata, random_state = 42, test_size = 0.15) Build a model by random forest classifier algorithm. We will import a random forest classifier from the sklearn library. # importing rfc classifier from sklearn.ensemble import RandomForestClassifier RFC = RandomForestClassifier() RFC.fit(xtrain, ytrain) ypred = RFC.predict(xtest) Summary of the model. # calculating every score of the model from sklearn.metrics import classification_report, accuracy_score, precision_score, recall_score, f1_score, matthews_corrcoef, confusion_matrix n_outliers = len(fraud_) n_errors = (ypred != ytest).sum() print("Random Forest Classifier") # calculating accuray of the model accuracy = accuracy_score(ytest, ypred) print("Accuracy : ",accuracy) # calculating precision of the model precision = precision_score(ytest, ypred) print("Precision: ",precision) # calculating recall score of the model recall = recall_score(ytest, ypred) print("Recall: ",recall) # calculating f1-score of the model f1 = f1_score(ytest, ypred) print("F1-Score: ",f1) # calculating matthews correlation coefficient of the model MCC = matthews_corrcoef(ytest, ypred) print("Matthews correlation coefficient: ",MCC) Random Forest Classifier Accuracy : 0.9994850428350732 Precision: 0.9482758620689655 Recall: 0.7432432432432432 F1-Score: 0.8333333333333333 Matthews correlation coefficient: 0.839286576513758 Visualizing the confusion matrix for the above predictions. # printing the confusion matrix conf_matrix = confusion_matrix(ytest, ypred) plt.figure(figsize = (7, 6)) sns.heatmap(conf_matrix, annot=True, fmt ="d"); plt.title("Confusion Matrix") plt.ylabel('True class') plt.xlabel('Predicted class') plt.show() FAQs 1. What is the random forest with an example? A: Random forest is an algorithm consisting of many decision trees. Suppose there are 4 trees in a random forest. Three of them predicted that the transaction was fraudulent, and only one predicted that the transaction was valid. As most decision trees have said transaction is fraudulent, the random forest will give the output as fraudulent. 2. When should we random forest? A: Random Forest is suitable for situations when we have a large dataset, and interpretability is not a major concern. Decision trees are much easier to interpret and understand. It becomes more difficult to interpret since a random forest combines multiple decision trees. 3. Why is the random forest algorithm so good? A: Random forests are great with high-dimensional data since we work with subsets of data. It is faster to train than decision trees because we work only on a subset of features in this model, so we can easily work with hundreds of features. 4. Does random forest reduce overfitting? A: Random Forests do not overfit. The testing performance of Random Forests does not decrease (due to overfitting) as the number of trees increases. Hence after a certain number of trees, the performance stays at a certain value. Key Takeaways In this article, we have discussed the following topics: - Problem faced during making credit fraud detection model. - How to tackle the problems faced while training the model? - Credit card fraud detection model using random forest algorithm. Want to learn more about Machine Learning? Here is an excellent course that can guide you in learning. Happy Coding!
https://www.codingninjas.com/codestudio/library/credit-card-fraud-detection-using-random-forests
CC-MAIN-2022-27
refinedweb
1,379
52.05
WMI Enhancements in Windows PowerShell 2.0 CTP The information in this article was written against the Community Technology Preview (CTP) of Windows PowerShell 2.0. This information is subject to change in future releases of Windows PowerShell 2.0. On This Page WMI Enhancements in Windows PowerShell 2.0 But What About the New WMI Cmdlets? Set-WMIInstance Invoke-WMIMethod Remove-WMIObject WMI Enhancements in Windows PowerShell 2.0 Truth be told, the Windows Management Interface (WMI) capabilities built into Windows PowerShell 1.0 are pretty good; after all, PowerShell makes it very easy to retrieve WMI data. Windows PowerShell enables you to run methods and change property values, and PowerShell cmdlets such as Sort-Object and Group-Object make it a breeze to sort and group data returned by a WQL query, tasks that are difficult, at best, using VBScript. In all fairness, however, it’s also true that the PowerShell 1.0 Get-WMIObject cmdlet has its weaknesses. For example, suppose you want to retrieve some simple information from an IIS 6.0 server. No problem; all you have to do is issue a command like this, right? Right. No, wait: not right. Look what happens when you issue the preceding command: Access denied? What the – Yes, we know: irritating, isn’t it? And don’t bother verifying that you have the proper administrative credentials; you can be System Administrator to the World and you’ll get the same message. That’s because, for security reasons, you can’t connect to any of the IIS WMI classes without first setting the AuthenticationLevel to PacketPrivacy (a DCOM setting that, among other things, encrypts all the data traveling between your computer and the IIS server). That’s a hard-and-fast rule, and the only thing you can do about it is, well, go along with it and set the AuthenticationLevel to PacketPrivacy. In VBScript that’s a trivial task; for example, here’s a sample VBScript WMI connection string that runs under the proper AuthenticationLevel: And, of course, in Windows PowerShell 1.0 you … let’s see here, you just … all you have to do is … hmmm …. As it turns out, you can’t change the WMI AuthenticationLevel in PowerShell 1.0; you also can’t change the ImpersonationLevel or enable all privileges, two other tasks occasionally required when working with WMI. Is that a problem? Well, if you need to manage IIS 6.0, it’s a big problem. But guess what? In Windows PowerShell 2.0 all those problems go away. In PowerShell 2.0, the Get-WMIObject cmdlet (along with its peer cmdlets Invoke-WMIMethod; Remove-WMIObject; and Set-WMIInstance) includes new parameters for AuthenticationLevel and ImpersonationLevel, as well as a parameter (EnableAllPrivileges) that enables all privileges. Need to use Windows PowerShell 2.0 to get information from an IIS 6.0 server? Well, why didn’t you say so: If that command looks familiar, it should: except for one addition, it’s exactly like the first WMI command we showed you. The difference is the -authentication parameter tacked onto the end: In case you’re wondering, the 6 means “PacketPrivacy.” Here are the other values available to you: And what do we get back when we run this modified command? This: __GENUS : 2 __CLASS : IIsComputer __SUPERCLASS : CIM_ApplicationSystem __DYNASTY : CIM_ManagedSystemElement __RELPATH : IIsComputer.Name="LM" __PROPERTY_COUNT : 10 __DERIVATION : {CIM_ApplicationSystem, CIM_System, CIM_LogicalElement, CIM_ManagedSystemElement} __SERVER : ATL-IIS-001 __NAMESPACE : root\microsoftiisv2 __PATH : \\ATL-IIS-001\root\microsoftiisv2:IIsComputer.Name="LM" Caption : CreationClassName : Description : InstallDate : Name : LM NameFormat : PrimaryOwnerContact : PrimaryOwnerName : Roles : Status : Maybe not the most impressive or useful set of data, but that’s not the point. The point is that now that we can set the AuthenticationLevel, we can manage IIS 6.0 using Windows PowerShell 2.0. But What About the New WMI Cmdlets? Good question: what about the new WMI cmdlets, or, more specifically, what about these new cmdlets: Invoke-WMIMethod Remove-WMIObject Set-WMIInstance The truth is, these cmdlets don’t really add any new capabilities; what they do, however, is make it easier to carry out common WMI scripting tasks. As we’re about to find out. Set-WMIInstance The Set-WMIInstance cmdlet makes it easier and more straightforward to change the value of a read-write property in WMI. Suppose you wanted to change the WMI logging level. In Windows PowerShell 1.0, you’d use code similar to this: That’s not too bad, although the Put method (which actually writes the changes to the object) can be a little tricky. That’s due to the fact that WMI’s scripting API actually uses a method named Put_; as a PowerShell script writer it wouldn’t be unreasonable to believe that you’re also using the WMI scripting API. As it turns out, however, you’re not; instead, PowerShell uses the .NET Framework class System.Management. And, in the .NET Framework, the method is named Put rather than Put_. So how does Set-WMIInstance help? Well, for one thing, you don’t have to mess around with the Put method (or even the Put_ method). Instead, you can change the LoggingLevel using a single command: See how easy that is? We simply call the Set-WMIInstance cmdlet, followed by three parameters: -class. The –class parameter specifies the WMI class containing the property value to be changed. Because the Win32_WMISetting class is in the root\cimv2 namespace (the default namespace) we can get away with specifying only the class name. If the class was in a different namespace (e.g., root\MicrosoftIISv2) then we’d need to include the –namespace parameter, just like we did when we retrieved information from an IIS server earlier in this article. And yes, now that you mention it, all the new WMI cmdlets also include the –Authentication, -Impersonation, and –EnableAllPrivileges parameters. -argument. The –argument parameter holds two values: the name of the property to be changed, and the new value for that property. These name-value pairs must be passed to –argument in the form of a “hash table” (similar to a Dictionary object). That’s the reason for the syntax @{name=value}; the @ sign followed by a pair of curly braces tells PowerShell to construct a hash table. And, needless to say, the name-value pair LoggingLevel=2 simply says, “Change the value of the LoggingLevel property to 2. Now, here’s something really cool: by specifying multiple name-value pairs you can change more than one property value with a single command. For example, suppose you want to change the LoggingLevel to 2 and the BackupInterval to 60 minutes. How would you do that? Like this: As you can see, all we had to do was include both name-value pairs (LoggingLevel=2 and BackupInterval=60) within our hash table, separating the two pairs by a semicolon. Want to change the default namespace to root\default as well? Then put all three name-value pairs in the hash table: Etc., etc. And, again, no need to call the Put method to write the changes to the object; Set-WMIInstance takes care of that for you. -computername. Last, but far from least, we have the –computername parameter. This is the spot where you specify the name of the computer on which the change (or changes) should be made. In our sample command, we’re making the change on a computer with the name atl-fs-001; alternatively, we could have specified the computer by IP address (192.1681.1.1) or by fully-qualified domain name (atl-fs-001.fabrikam.com). And sure, we can use a single command to change the LoggingLevel on more than one computer; all we have to do is specify multiple computer names as part of the –computername parameter, taking care to separate those names using commas: Here’s a cool little trick. Suppose we have a text file (C:\Scripts\Computers.txt) that lists the names of all our computers. Need to change the LoggingLevel on all your machines? Well, you could type all those computer names as part of the –computername parameter. Or, you could run this command instead: What’s so different about this command? Just one thing, really: instead of hard-coding computer names into the command we’re using the Get-Content cmdlet to read those names in from the file Computers.txt. Set-WMIInstance (like its fellow WMI cmdlets) will then operate against each of the computer names retrieved from that file. Oh, and if you want to run the command against the local machine as well, you can use this syntax: As you probably know, the dot (.) is WMI shorthand for the local computer. If you want to run this command only against the local computer then you can simply leave out the –computername parameter altogether. If no computer name is specified, then the WMI cmdlets automatically run against the local computer (and only the local computer). In other words, this command changes the logging level only on the local computer: That’s pretty much all there is to Set-WMIInstance. And that’s the whole point: the idea is to make it a no-brainer to change read-write properties. Invoke-WMIMethod The Invoke-WMIMethod cmdlet is built upon a similar philosophy: make it as easy as possible to execute a WMI method. Let’s take a peek at a sample command that takes a printer named TestPrinter and renames it NewPrinterName: Again, there’s not much to this command; all we do is call Invoke-WMIMethod followed by four parameters: -path. In general, WMI supports two different types of methods: instance methods (methods that are called on specific instances of a class) and static methods (methods that are called on the class itself). In this case we’re calling an instance method: we want to rename a specific instance of the Win32_Printer class. (That is, we only want to rename the printer TestPrinter). Because of that, we use the –path parameter, specifying: 1) the WMI class name (Win32_Printer), and 2) a property name and value (DeviceID='TestPrinter') that enables the script to pinpoint the instance (or instances) we’re interested in. What if we wanted to run a static method, a method that operates on a class as a whole? In that case we’d still use the –path parameter; however, all we’d specify is the class name, without targeting any specific instances of that class. For example: -name. This is the name of the WMI method we want to invoke. Note that we specify the name as used by WMI; don’t include parentheses after the name. Including parentheses after a method name is a standard practice in Windows PowerShell, but here we aren’t directly invoking the method; Invoke-WMIMethod will do that for us. Therefore, we only want to specify the name of the method that Invoke-WMIMethod should execute. Incidentally, the method specified here must be supported by the class in question. You can use the Delete_ method to delete a printer using Invoke-WMIMethod; that’s because the Win32_Printer class supports the Delete_ method. However, you can’t use the Delete_ method to delete a service. Why not? You got it: because the Win32_Service class doesn’t support the Delete_ method. -argumentList. This is simply the method parameters. We want to give our printer the name NewPrinterName, so we assign "NewPrinterName" to the -argumentList parameter -computer. Again, simply the name (or names) of the computers you want this command to work against. From time-to-time you will encounter a WMI method that requires multiple parameter values. For example, the Create method, a Win32_Share method for creating a new network share, requires the following values, which must be passed in order: Access Description Maximum allowed connections Share name Local path Share type So how do you deal with a method that requires multiple parameter values? Well, for starters, you assign those values, in order, to an array: And then you run this command, passing the array variable to Invoke-WMIMethod’s -argumentList parameter: Remove-WMIObject Remove-WMIObject is an interesting little cmdlet: it enables you to remove instances of WMI objects using a standard syntax. What does that mean? Well, for example, in WMI you can: Delete processes by using the Win32_Process class and the Terminate method Delete a printer connection by using the Delete_ method. Delete a folder by using the Delete method. That’s three different WMI objects that can be removed, but each one uses a different method. That’s not necessarily hard, but it can be a bit tricky to keep track of which method deletes which type of object. Now, let’s take a look at how we can accomplish those same tasks using Remove-WMIObject: As you can see, in all three cases we took the exact same approach: we simply used Get-WMIObject to retrieve an object reference ($a) to the items of interest, then we piped the variable $a to Remove-WMIObject. Remove-WMIObject took it from there. And yes, that was awfully nice of Remove-WMIObject to take care of all that for us, wasn’t it?
https://technet.microsoft.com/en-us/library/ff730973(d=printer).aspx
CC-MAIN-2015-40
refinedweb
2,201
62.88
all - Java Beginners hi all hi, i need interview questions of the java asap can u please sendme to my mail Hi, Hope you didnt have this eBook. You... friend, I am sending you a link. This link will help you. Please read Student - Java Beginners []=new Student[2]; for (int i=0; i65&&tm<75){ data[i].setGrade("C...].setGrade("A"); } } for(int i=0;i<2;i++){ Student show = data[i...Student Create a class named Student which has data fields hi online multiple choice examination hi i am developing online multiple choice examination for that i want to store questions,four options,correct answer in a xml file using jsp or java?can any one help me? Please struts struts Hi, i am writing a struts application first of all i am takeing one registration form enter the data into fileld that will be saved perfectly. but now i can apply some validations to that entry details but with out genarating student weekly progress report genarating student weekly progress report Hi every one iam using struts frame work backend as oracle 10g . i want code for how to genarate student weekly and monthly marks . thanks in advance Hi Hi I have got this code but am not totally understanding what the errors. Could someone Please help. Thanks in advance! import java.util.Random; import java.util.Scanner; private static int nextInt() { public class student details student details hi sir/madam i have a doubt in PHP how to insert student details using mysql with php.. Have a look at the following link: PHP Mysql insert Student Marks Student Marks Hi everyone I have to do this java assignment... programming grades of 8 IT students. Randomly create student numbers for each... in an array. Addresses of Students and store them in array. Previous Grade for student points being displayed in the world points being displayed in the world how do i display points in my world Hi Friend, Please clarify your problem. Thanks Struts - Struts Struts Hello I like to make a registration form in struts inwhich students course also mention in a combo box. If student choose particular course then page is redirected to that course's subjects. Also all subject struts struts <p>hi here is my code in struts i want to validate my form fields but it couldn't work can you fix what mistakes i have done</p>...; Student Name:<html:text Fathers Name:& Struts Struts Hi i am new to struts. I don't know how to create struts please in eclipse help school student attendence report school student attendence report Hi i want school student attendence genaration source code please urgent help- login problem - Struts struts- login problem Hi all, I am a java developer, I am facing problems with the login application. The application's login page contains fields like username, password and a login button. With this functionality only struts technologies like servlets, jsp,and struts. i am doing one struts application where i...struts hi Before asking question, i would like to thank you... into the database could you please give me one example on this where i i have Not sure what I am missing ? Any ideas? Not sure what I am missing ? Any ideas? import java.util.*; public...) { if(str.length()==0) return false; for(int i=0;i<str.length();i++) if (c==str.charAt(i)); return true   Student Admission Form in Java - Java Beginners Student Admission Form in Java I want to store following Information into MS Access 2007 with JDBC. 1)Student PRN Number 2)Student Name 3)Date...; Hi i produce sample application with access 2007 and i haven't done the Struts if being used in commercial purpose. the Struts if being used in commercial purpose. Do we need to pay the Struts if being used in commercial purpose - Struts Struts Code Hi I executed "select * from example" query and stored all the values using bean . I displayed all the records stored in the jsp using struts . I am placing two links Update and Delete beside each record . Now I new in struts concept.so, please explain example login application in struts web based application with source code...:// I hope that, this link will help you - Struts Struts How to display single validation error message, in stude of ? Hi friend, I am sending you a link. This link will help you. Please visit for more information. struts struts <p>hi here is my code can you please help me to solve... { public ActionForward execute(ActionMapping am,ActionForm af...(ActionMapping am,HttpServletRequest req) { ActionErrors ae=new ActionErrors..., the controller is sending request to MultipleRemote.jsp. If whenever i type message Struts file uploading - Struts Struts file uploading Hi all, My application I am uploading... = newDocumentForm.getMyDocument(); byte[] fileData = file.getFileData(); I am... when required. I could not use the Struts API FormFile since struts - Struts struts Hi, I am new to struts.Please send the sample code for login... the code immediately. Please its urgent. Regards, Valarmathi Hi Friend....shtml http Struts 2.0 - Struts Struts 2.0 Hi ALL, I am getting following error when I am trying to use tag. tag 'select', field 'list': The requested list key 'day' could.../SelectTag.jsp" please let me know If I am doing any Hi Hi Hi How to implement I18N concept in struts 1.3? Please reply to me, i have formbean class,action class,java classes and i configured all in struts-config.xml then i dont know how to deploy and test... and do run on server. whole project i have run?or any particular Hi Hi I want import txt fayl java.please say me... Hi, Please clarify your problem! Thanks Struts - Struts Struts Dear Sir , I am very new in Struts and want... validation and one of custom validation program, may be i can understand.Plz provide the that examples zip. Thanks and regards Sanjeev. Hi friend creating an applet for student management system creating an applet for student management system Write an applet/awt... a student management system having the following characteristics: The interface... it to act), and reasonably realistic. It must accept the student id,name,age,address hi * * * * * * * * * *print("code sample"); Hi Friend, Try... num=4; int p = num; int q = 0; for (int i = 0; i <= num; i++) { for (int j = p; j>= 1; j-- ) System.out.print(" "); p-=1; for (int k = 1; k <= i;.. what are the steps mandatory to develop a simple java program? To develop a Java program following steps must be followed by a Java developer : First of all the JDK (Java Development Kit) must be available hi! hi! how, the total cost of the two items;otherwise,display the total for all three
http://www.roseindia.net/tutorialhelp/comment/11974
CC-MAIN-2013-48
refinedweb
1,135
67.25
10.5: Function Return Types - Page ID - 29093 Return Statement in C++ The return statement returns the flow of the execution to the location from where it is called. As soon as the return statement is executed, the flow of the program stops immediately and return the control from where it was called. The return statement may or may not return anything for a void function, but for a non-void type of function, a return value must be returned. There are various ways to use return statements. Few are mentioned below: - Methods not returning a value: In C/C++ one cannot skip the return statement, when the methods are of return type. The return statement can be skipped only for void types - Not using return statement in void return type function: When a function does not return anything, the void return type is used. So if there is a void return type in the function definition, then there will be no return statement inside that function (generally). - Example: // C++ code to show not using return // statement in void return type function #include <iostream> using namespace std; // void method void Print() { cout <<"Welcome to CSP 31A") << endl; } int main() { // Calling print Print(); return 0; } The function is called, and processes, when done the control of the program returns to where the function was called. The output of the above code example is as follows: Welcome to CSP 31A Void function with a return statement Now the question arises, what if there is a return statement inside a void return type function? Since we know that, if there is a void return type in the function definition, then there will be no return statement inside that function. But if there is a return statement inside it, then also there will be no problem if the syntax of it will be: Correct Syntax: void func() { return; } This syntax is used in function to signify that the code is indeed to return to the calling location, and this is what the programmer intended. // C++ code to show using return // statement in void return type function #include <iostream> using namespace std; // void method void Print() { printf("Welcome to CSP 31A"); // void method using the return statement return; } // Driver method int main() { // Calling print Print(); return 0; } There is no difference here, the return statement adds nothing to this code. Many organizations require a return statement, just for clarification that it is indeed the desire of the programmer to return at this point. Output is the same. Welcome to CSP 31A Invalid return from a void function But if the return statement tries to return a value in a void return type function, that will lead to errors. So if we have a void function, and attempt to return a value like the example below void func() { return value; } When you attempt to compile this code you would receive a message stating that you can not do that. Some compilers give you a warning, others give you an error, which is correct, in that this is against the rules of C++. warning: 'return' with a value, in function returning void OR error: return-statement with a value, in function returning 'void' - Methods returning a value: For methods that define a return type, the return statement must be immediately followed by the return value of that specified return type. Syntax: return-type func() { return value; } When a function has a return type, the value being returned should be that type. Any value that is of a different type will be forced into that type before it is returned. The easiest thing is to make sure you are returning the proper type of value. // C++ code to illustrate Methods returning // a value using return statement #include <iostream> using namespace std; // integer return type - the return value MUST be an int // function to calculate sum int SUM(int inputV1, int inputV2) { int s1 = inputV1 + inputV2; // method using the return statement to return a value // It will force any value to an integer before it is returned return s1; } // Driver method int main() { int num1 = 10; int num2 = 10; int sum_of; // The SUM() return value is being assigned to an int sum_of = SUM(num1, num2); cout << "The sum is " << sum_of; return 0; } Output: The sum is 20 Adapted from: "return statement in C/C++ with Examples" by Chinmoy Lenka, Geeks for Geeks is licensed under CC BY-SA 4.0
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/10%3A_Functions/10.05%3A_Function_Return_Types
CC-MAIN-2021-31
refinedweb
741
54.7
Patches item #2048464, was opened at 2008-08-12 22:13 Message generated for change (Comment added) made by borutr You can respond by visiting: Please note that this message will contain a full copy of the comment thread, including the initial issue submission, for this request, not just the latest update. Category: None Group: None >Status: Closed >Resolution: Accepted Priority: 5 Private: No Submitted By: Mauro Giachero (maurogiachero) >Assigned to: Borut Raem (borutr) Summary: PIC16: fix genUminus - addresses not.c regression test Initial Comment: Hello everybody, the attached patch fixes the PIC16's genUminus bug that causes the not.c regression test failures. The not.c tests fail to compile (compiler assertion) when {attr}==volatile. The problem resulted to be neither related to the "!" operator nor to volatile itself. The smallest failing testcase I could write down is: #include <testfwk.h> void testUMinus2(void) { unsigned char volatile ucv; unsigned char uc; ucv = 0; uc = ucv; ucv = (uc * -1 < 0); } The problem is that CSE replaces "uc * -1" with "-uc", so that the tree contains an UNARYMINUS node with the operand and the result differently sized. PIC16's genUminus couldn't cope with that and the assertion failed. >From my POV there are 2 ways to fix this: in SDCCcse.c adding a cast node in the tree, or in genUminus adding the code to handle that case. Since the problem appeared PIC16-specific and adding a cast could throw away possible optimizations, I went for the second and fixed genUminus. The patch effect is almost a plain NOP for the case where the operand and the result are of the same size, so I don't expect this to cause regressions. With best regards Mauro PS: I'll be on vacation/away till about the first week of September, and I most probably won't reply to mails in that timeframe. Sorry about that. I'll read mail and reply as soon as I'm back. ---------------------------------------------------------------------- >Comment By: Borut Raem (borutr) Date: 2008-08-23 13:46 Logged In: YES user_id=568035 Originator: NO Patch applied in svn revision #5218. Borut ---------------------------------------------------------------------- You can respond by visiting: I was wrong about pic16 regression tests: they seem to be OK. Sorry, Borut Borut Razem wrote: > > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > > _______________________________________________ > sdcc-devel mailing list > sdcc-devel@... > > > I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/sdcc/mailman/sdcc-devel/?viewmonth=200808&viewday=23
CC-MAIN-2017-09
refinedweb
466
62.17
Often one will want to monitor an input and then perform an action depending on what that input reading is. For example, we could be monitoring the temperature of a greenhouse and want to turn a heater on when it gets cold, and off when it gets hot. This is a basic thermostat. Assuming we had two channels, "Temperature", an analog input which has been converted to degrees C, and "HeaterPower" which is a digital output, to create a simple thermostat that turns on the heater at 15C and off at 22C we'd do this: 1) Click on the + next to CHANNELS: in the Workspace to expand this item and show all the channels. 2) Click on the Temperature channel. 3) Select the Event tab. Here we can enter script that executes every time a new reading occurs on the Temperature channel. Since this script can delay the readings, we need to make sure its short and fast. 4) Enter the following script: if ((Temperature[0] < 15) && (Temperature[1] >= 15)) HeaterPower = 1 endif if ((Temperature[0] > 22) && (Temperature[1] <= 22)) HeaterPower = 0 endif 5) Click Apply. The thermostat will start working immediately, though it won't actually change the state of the heater until the temperature passes through one of the thresholds. This example shows a couple things that are a least important to thermostat applications: Thresholding: another way to write this script would be: if (Temperature[0] < 15) HeaterPower = 1 endif if (Temperature[0] > 22) HeaterPower = 0 endif The difference is that in this second, shorter script, the heater power digital output is set every time Temperature is read and is above or below the given threshold. This may create quite a bit of overhead on the device. Even the UE9 takes a few milliseconds to perform each action and if you are doing a lot of different things with the device, this can add up. So, in the original script, we look at the most recent point, [0], and the next most recent point, [1], and only adjust the HeaterPower output if they are on opposite sides of the threshold, meaning the temperature has just dropped below, or risen above the threshold. Once this occurs, the HeaterPower output won't be reset until the other threshold is crossed. Sample file: LJGuideSamples\Thermostat.ctl
https://labjack.com/print/book/export/html/669
CC-MAIN-2022-27
refinedweb
386
56.18
referance content PubNub Functions is built upon some key concepts – It's advised to become familiar with them before you start developing and testing with PubNub Functions. PubNub Channels A channel represents a virtual path between publishing and subscribing clients in the PubNub Data Stream Network (DSN). If you're familiar with general message queue concepts, a channel is similar to a topic. Any message will have exactly one channel associated with it. As an example, in order to receive a message published on the channel BayAreaNews, the client must be subscribed to [at least] the channel BayAreaNews (a subscribing client may be subscribed to more than one channel at a time.) A channel doesn't need to be declared, defined, or instantiated in any way, they can be considered unlimited and arbitrary in scope. You can choose as many, or as few channels as you'd like to use in your apps – consider the channel as metadata on the message itself. In PubNub Functions, a specific Function (written in Javascript) can be associated with one or more channels. Every message published on a given channel will be processed by its associated event handler (if defined). PubNub Endpoints An Endpoint is a URI path you can make an HTTP request to trigger a Function. It's the same concept as an HTTP handler but you don't have to spin up a webserver etc. All Endpoints are by default secure and only accept requests via HTTPS. Endpoints allow you to start a microservice on PubNub's Network in seconds. Functions with Endpoints are On Request types. API Keysets The first level of partitioning data between PubNub accounts is via the PubNub API keysets. Keysets are managed by PubNub and created by users from their admin portal. Each Keyset contains a Publish, Subscribe, and Secret Key. Secret keys are also unique to a keyset, and are used for management functions. Consider a publish key, subscribe key, and channel name the composite identifier for any message over the PubNub network. In other words, any channel is namespaced by the subscribe and publish keyset used to publish it. For example, anyone publishing with a PubNub instance defined against keyset1 on channel1 can be received only by subscribers defined against keyset1 on channel1. To make it more clear, consider channel1's real name in this case to be keyset1:channel1… if someone published on a different keyset, but same channel, since the channel is namespaced by the keyset, there would be no collision. Functions A Function is a block of Javascript code which runs when triggered by a given Function type, against a given keyset and channel(s) or URI path – it is the logic to perform against a message meeting certain criteria or an HTTP request. Functions can be deployed to the PubNub DSN via the GUI, CLI, or REST APIs. Function Types The way a Function is triggered and executed depends on the type of Function it is. A Function can be 1 of the 4 types below: - Before Publish or Fire - Executed in response to publishing a message on a channel and before the message has been forwarded on to awaiting subscribers, these Functions allow you to operate on, or mutate the message. - After Publish or Fire - Executed in response to publishing a message on a channel and after the message has been forwarded on to awaiting subscribers, these Functions allow you to act on the message publish event without having to worry about the latency impact of it. - After Presence - Executed in response to a Presence event and after it has been forwarded on to awaiting subscribers, these functions allow you to operate on the Presence event. - On Request - Executed in response to an HTTP request, these functions allow you to build microservices and webhooks. Modules Groups of Functions that share a scope are called a Modules.
https://www.pubnub.com/docs/blocks/key-concepts
CC-MAIN-2019-51
refinedweb
647
58.42
Windows 10, version 1803 (also known as the Windows 10 April 2018 through Windows Update on May 8th. Windows 10, version 1803 is the fifth feature update for Windows 10, offering IT pros built-in intelligent security and advanced capabilities that help simplify device management and drive IT cost savings. Here's a quick rundown of what's new and what's changed since the last update: As announced by John Cable this morning, today marks the start of the 18-month servicing timeline for this Semi-Annual Channel release. We recommend that you test the newest features and functionality now—with a targeted deployment—in preparation for broad deployment to the devices in your organization in the weeks to come. If you have not yet deployed Windows 10 and are looking to test this latest release for your organization, you can download the Windows 10 Enterprise Evaluation from the Microsoft Evaluation Center. To help you better plan for and deploy this release, we have also updated the Windows Assessment and Deployment Kit (Windows ADK) for Windows 10 and published a draft of the Security baseline for Windows 10, version 1803. Volume License customers will be able to download Windows 10, version 1803 from the Volume Licensing Service Center on May 7, 2018. Register today for a one-hour “What's new in Windows 10, version 1803 for IT pros” webcast hosted by Pieter Wigleven and Nathan Mercer, Senior Product Marketing Managers with the Windows Commercial team.. Since the 24-hour Windows 10 IT Pro AMA is a departure from our typical one-hour AMA format, here's an explanation of how it will work: To participate in the AMA, you must be a member of the Microsoft Tech Community. If you're not already a member, it only takes a minute to sign up: You can also watch a quick recap of what's new in this video: The Windows 10 Enterprise Evaluation is a free, 90-day evaluation of Windows 10 Enterprise designed for IT professionals interested in testing Windows 10 Enterprise on behalf of their organization. We do not recommend that you install this evaluation if you are not an IT professional or are not professionally managing corporate networks or devices. If you haven't yet migrated to Windows 10, you can also take advantage of Upgrade Readiness, a free Windows Analytics service that helps you streamline and accelerate the Windows upgrade process by identifying compatibility issues that can block an upgrade and proactively suggesting fixes. You can use Upgrade Readiness standalone or integrate it with System Center Configuration Manager. For more information on configuring and deploying updates, please see the following resources: For more information on the latest features for end users, see What's new in the Windows 10 April 2018 Update. To see a summary of the latest documentation updates, see What's new in Windows 10, version 1803 IT pro content on Docs. For information on what's new for developers, see What's New in Windows 10 for developers, build 17134. For a full list of new namespaces added to the Windows SDK, see New APIs in Windows 10, build 17134. And, for a list of features and functionality that have been removed from Windows 10, or might be removed in future releases, see Features removed or planned for replacement starting with Windows 10, version 1803. Continue the conversation. Find best practices. Bookmark the Windows 10 Tech Community. Looking for support? Visit the Windows 10 IT pro forums. [i] On Windows 7 Service Pack 1, Windows 8.1, and Windows 10 I am incredibly confused why 1803 was designated "Semi-Annual Channel" prior to being "Semi-Annual Channel (Targeted)": In your own post you say: "We recommend that you test the newest features and functionality now—with a targeted deployment" If you want us testing in a targeted deployment, shouldn't 1803 have been designated "Semi-Annual Channel (Targeted)" instead? Correct me if I'm wrong, but isn't this how the servicing channels are supposed to work?: SAC Targeted (formerly current branch): Ready for targeted deployment, consumer ready SAC (formerly current branch for business): Ready for broad deployment and "business ready" Can you clarify? I was under the impression that Semi-Annual channel gave us ~4 months to test new Feature Updates, but your post implies that we only have "weeks" to get ready for it. "...in preparation for broad deployment to the devices in your organization in the weeks to come..." @Christopher Gallen- There is only one Semi-Annual Channel version, which is released each 6-months. This one is this April 2018 Feature Update or 1803, and is the fifth of the Feature Updates. SAC (targeted) is how we say the customer should deploy the update to their validation devices. They will have several rings typically. The first is the preview rings (for devices enrolled in the Windows Insider Program) and this gives 6-months or so to prepare (test and validate) for the SAC release 1803. Then the next ring is to validate the features work as planned, and there are no issues in production. Then once this has been validated they deploy to the next ring (wider ring, production ring, etc) and they may have several rings if they want to scale across large companies, or different business units etc. Hope that helps. @Stephen Dillon In the blog you linked it states: "The Semi-Annual Channel replaces the Current Branch [CB] and Current Branch for Business [CBB] concepts." This implies that both CB and CBB are replaced with a single channel. However, in your own documentation for setting up deployment rings ( and) you state that CB and CBB are replaced with SAC-T and SAC respectively, i.e. they are still two separate deployment rings. If we have deployment rings set up per your documentation, with our TARGETED devices configured to SAC-Targeted (via GPO or otherwise) they will actually receive 1803 *AFTER* the SAC configured devices due to you designating 1803 as SAC BEFORE SAC-Targeted. This is the opposite result of what we expect, and the opposite of how you have release Feature Updates in the past. For example: 1709 was designated SAC-Targeted BEFORE it was designated SAC. It was ready for TARGETED deployment (SAC-T on 10/17/2017) BEFORE it was ready for broad deployment (SAC on 12/12/2017). Another example: 1703 was designated CB on 4/11/2017, then designated CBB on 7/11/2017 With 1803 you are doing the complete opposite. Do you see how this might cause confusion? So, do we configure our TARGETED devices as SAC-Targeted, or as SAC with custom deferral lengths for each deployment ring? For the enterprise, what does configuring devices as Semi-Annual (Targeted) actually accomplish now? Either way your documentation needs to be updated and clarified to match what your current intention is. Firstly, let's acknowledge that yes, the documentation is sometimes out of sync and there are documents which use older terminology still. I'd like to try and respond while avoiding the use of terms SAC-T and SAC, CB and CBB and respond to what I feel is the underlying question regarding the intent of this release: In short, I wouldn't let the terminology get in the way of the process of starting small and deploying wider. In this regard, we haven't changed anything. Typically, Microsoft envisage three (or even four) different releases of Windows being in use in the same organisation. Today, in an organisation set up to stay current with Windows as a Service, most will be on the previous release (1709 or RS3) while the new release is being deployed (1803 or RS4) and a few select devices will be enrolled on the Windows Insider Program (Slow, Fast and/or pre-release) to start planning and preparing for the next release (1809 or RS5 updates are available in the Windows Insider Program now) Hi Stephen, thanks for the detail explanation. We are still thoroughly confused unfortunately. shows: Semi-Annual Channel (Targeted) as 1709 but Semi-Annual Channel as 1803 i.e. deploy latest public feature release to all users before Targeted users?! OMS is now reporting the 391 machines we have on 1709 under 'Semi-Annual' as Not up-to date which confirms the above. Last week they were Up-to-date. Obtusely the 27 machines we have on 1709 under 'Semi-Annual (Targeted)' are reporting as up to date ! We fully understand the need to push updates in to the business via limited and then wider roll out and have adopted the MS channels pushed via WUfB and GP to do this as recommended. Please clarify. This feels like a mistake and clearly we are not the only ones to feel this. Stephen, We fully understand the concept of phased deployment. That is exactly what we are trying to accomplish using your own documentation and the configuration options built into Windows 10, GPOs/ADMX, etc. You can't tell us to not let the terminology get in the way when that terminology is literally the basis of how we configure phased deployments based on your own documentation, GPOs, and settings in Windows 10. In fact, 1803 still has the configurable option of Semi-Annual Channel and Semi-Annual Channel (Targeted). If these terms are out of date or no longer relevant, why are they still present in the latest release of Windows 10? Windows 10 is configured as SAC(T) by default. Individuals or enterprises change their update channels to SAC in order to defer or avoid unexpected Feature Updates on those systems before testing it (i.e. phased deployment). By releasing 1803 directly to SAC, those computers will actually receive 1803 BEFORE everyone else, BEFORE having the opportunity to test. EDIT: After testing on a Win10 1709 VM without GPOs applied, it seems that SAC(T) and SAC configurations work as originally intended per your documentation (as of today 5/3/2018). When set to SAC, it only downloads the latest 1709 quality update. When set to SAC(T) it downloads the 1803 feature update. Your release information document should be updated to more accurately reflect this: I think there may be confusion as to what declaring a new Feature Update as "Semi-Annual Channel" actually means to Microsoft versus everyone else. Are you only referring to the beginning of the 18-24 month support period? Or are you referring to the distribution via that update channel? Or both? Right now they don't seem to line up. Thanks for clarifying your concerns (and findings), @Christopher Gallen, and @Ian Clarke- that makes it clearer- Regarding changes in terminology or documentation, I would refer to this: Windows 10 release schedule Meanwhile, just to emphasize that I wasn't suggesting Microsoft has dropped the terminology for SAC, it hasn't changed since Windows 10 release schedule. SAC (targeted) is now in use (rather than SAC-T) to identify the validation ring prior to broad deployment. I just wanted to make sure the process is understood first, and it seems it is, which is great to hear! Hope that helps for now, and I'm pleased to see that in your testing (in both cases) on 1709, SAC(T/Targeted) and SAC. So the process appears to be working even if there has been confusion in the terminology. Thanks again for the feedback and for clarifying the concerns. If you saw this comment originally, nevermind. 1803 isn't listed in the description on VLSC, but if you continue to download, it's listed as one of the available options. So, Nevermind. :) I hate to labour this but I'm still totally confused. The Windows 10 release schedule is still reporting SAC (Targeted) as being an older feature release version (1709) than SAC (1803). Is there any possibility this is a mistake and if not what is the logic. How can we evaluate something with a subset of users that has already been broadly released. Regards Ian. That's what I'm hoping Microsoft will clear up. They promoted 1803 directly to SAC leaving 1709 in SACT. If they would have followed the pattern of the previous versions (1511, 1607, 1703, and 1709), 1803 should have landed in the SACT channel, dethroning 1709 and leaving it in SAC with 1703. My existing "rings" for a targeted Pilot and broad deployment look similar to this today: Now with 1803, my "Ring 6" gets this Feature Update in late-May/early-June 2018 (30 days after initial release), whereas with 1709, it was approximately 90 days after initial release (~January 12, 2018). So as of this AM OMS has now updated to show devices on Semi Annual (1709) as up to date. Previously they were out of date as it was expecting 1803. So clearly the cogs within MS are churning. The release schedule page is still wrong though (and now disagrees with OMS report!). Ian. So looks like as of last night MS have updated the release information page to reflect what we (thought) we knew already. It's slightly frustrating that MS could not simply have said 'yes that looks wrong we will get it updated / clarified' rather than leave a lot of admins wondering if they have got something very wrong. I have to say MS seem to still be pushing 1803 is deployed to everyone ASAP rather than following the Targeted > Semi Annual Channel methodology. It's hard when admins have put a load of process / GPs etc. in place to 'test' feature release with subset of users to then suddenly change that to roll it out to everyone whenever. This qualifier statement is a little confusing, are MS having second thoughts on the need for a 'targeted' group: "(1) Windows 10, version 1803 designation has been updated to reflect the servicing option available in the operating system and to reflect existing deferral policies. We recommend organizations broadly deploy the latest version of Windows 10 when they are ready, and not wait until the “Targeted” designation has been removed." So what determines when the “targeted” designation is removed? It seems like Microsoft engineers and bloggers are also unsure about the terminology. It would be better if the targeted reference is removed all together. According to this post semi-annual channel targeted is only valid for office 365 and not windows 10. Thanks @Caitlin Fitzgerald I will have a read of the blog post. We have however been left even more confused this AM as OMS (which was correct even before the release schedule page was updated) is now showing no machines on 1803 and suggesting we have 44 on Insider! 44 sounds about right for the number actually on 1803 and we have not even configured Insider within our org so would be very concerned were that the case. Ian. And then magically today 1803 is back and none on Insider … its disconcerting how this can just change over night I have to say.
https://techcommunity.microsoft.com/t5/windows-it-pro-blog/what-s-new-for-it-pros-in-windows-10-version-1803/ba-p/188568
CC-MAIN-2020-05
refinedweb
2,513
58.21
Originally posted by anindam bhattacharaya: G'day I have a problem getting to understand this code. public class TestClass { public static void main(String[] args) { lab : for(int i = 0; i< 10; i++) { for (int j = 0; j< 10; j++) { if ( i+ j > 10 ) break lab; } System.out.println( "hello"); } } }. All following iterations are entering the if statement and "hello" is never printed again. why do you say There are no following iterations. Execution is complete when break lab is performed because it breaks out of the outer loop. Originally posted by Michael Imhof: I thought, a labelled break is working like a "goto". So after calling "break lab" it breaks the loop and returns to the label.
http://www.coderanch.com/t/247045/java-programmer-SCJP/certification/
CC-MAIN-2014-35
refinedweb
118
63.59
In this 'very special issue' of Coderoshi, I'll go over how to create a scalable REST service in Java using CXF that can be consumed by Rails via ActiveResource. I do this based on the following assumptions: * Rails sucks at scaling, but rocks at creating quick, slick presentations. * Java sucks at writing presentation code, but scales like a mofo. If you disagree, and really do believe Rails can scale with ActiveRecord's simple abstraction of an RDBMS, or that Java is awesome for presentation, best to stop reading now. But if you know enough about these technologies to muster a knowing nod, then hold onto your face - it's about to be rocked off. What's all this, then? Let us begin with an overview: Ruby on Rails ActiveResource consuming Java REST via CXF defined by JAXB. Whaa?? ActiveResource (not to be confused with ActiveRecord, Ruby's ORM) is a simple REST consumer. That's it. It wraps structured REST actions and managed data (xml or json) in active objects. Just like ActiveRecord, executing methods on an ARes object triggers backend actions or accessed data. For example: @customer = Customer.find( 1 )will translate into executing some defined REST service to populate @customer, internally constructing a URL like "", getting the xml returned: <customer>And making the value accessible by the Ruby program <id>1</id> <name>Joe</name> </customer> #outputs: JoeIt's probably the drop-dead easiest method of consuming a remote service - assuming the remote service conforms to ActiveResource's standards (there's always a catch). puts @customer.name Happily, this isn't too difficult to do with the magic ingredient of the server-side: CXF. CXF is the evolution of Dan Diephouse's XFire (merged with IONA's Celtix). It's primarily considered to be a simple annotated WS-* impl, but also has some mean REST hooks. We'll be using the JSR 311 implementation rather than CXF's custom annotations. Getting Started Although I suggest you follow along, if you just plain hate writing code, then download both projects here. The Ruby on Rails project is called "ror" and the REST on Java project is "roj". I hear it time and time again: Ruby on Rails can't scale. This is, of course, an absurd statement. If your software gets to the point that this is really a problem, well, then take your millions and rearchitect it. You can dry your eyes on hundred dollar bills while your Java counterparts are still fighting to get GWT to work. Most projects fail anyway - and I don't have enough lives to waste on them. People complain that Java sucks at presentation, and for good reason. Sure, you can write webservices that scale out to thousands of servers, but try and write JSF and you have two choices: drink the koolaid, or pray for death. With those points in mind, I'm going to go over how to create a scalable REST service in Java using CXF that can be consumed by Rails new ActiveResource. Ingredients Like a good cook, I like to gather my ingredients up front. Here is what we'll need to bake our little cake (I'm bolding the versions in case you can't read: * Ruby >= 1.8.6 * Ruby on Rails >= 2.1 (2.0 is required for ActiveResource, but 2.1 has tons of bug fixes) * Java >= 6 (or v5 - I haven't tried it) * Maven >= 2.0.9 (you need the new graph resolution stuff, or dep overload may not work) I presume you understand these technologies, at least in a cursory way. If not, there are plenty of good guides to get you started. Go ahead. I'll wait... Oh, you're back? Then let's begin. REST on Java Although this is about scalability, the focus here isn't about handling multiple concurrent nodes. If you need a super-scalable data-management system check out HBase. If you want to cluster thousands of JVMs, use Terracotta. Wire it all up with Spring, and deploy using Maven. There's your scalability, buddy. Let's begin with the Java service side. In a nutshell, we're going to use Maven to manage our dependencies (naturally), Spring to wire our components (of course), JAXB to define our XML (what else?), CXF as our REST service and Jetty as our web server. The easy way to start (besides downloading the finished product) is use a Maven archetype to generate a basic WAR. mvn archetype:generateChoose "maven-archetype-webapp" (mine was #18), make up a groupId, artifactId and version, and "packaging: war". Next, replace the generated pom with this one: <project> <modelVersion>4.0.0</modelVersion> <groupId>com.coderoshi</groupId> <artifactId>roj</artifactId> <version>1.0-SNAPSHOT</version> <packaging>war</packaging> <build> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.mortbay.jetty</groupId> <artifactId>maven-jetty-plugin</artifactId> <version>6.1.11</version> <configuration> <contextPath>/</contextPath> </configuration> </plugin> </plugins> </build> <dependencies> <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-frontend-jaxrs</artifactId> <version>2.1.1</version> </dependency> <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-transports-http-jetty</artifactId> <version>2.1.1</version> </dependency> <dependency> <groupId>org.apache.cxf</groupId> <artifactId>cxf-rt-bindings-http</artifactId> <version>2.1.1</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>2.5.5</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>2.5.5</version> </dependency> </dependencies> </project> This pom does two important things: gives us the spring and CXF dependencies that we need, and configures the plugins that we require. Namely, that we assume JDK 1.6, and that we want to use the Jetty plugin to run our war (eventually). Next we'll set up our Spring's context file. This describes to spring how much we love CXF and really want to use it. Spring replies by constructing our components correctly for us. Isn't that nice? Create a new file under "src/main/webapp/WEB-INF/" named "beans.xml", and paste the following: <beans xmlns=""Cryptic? Well, no, not if you already know Spring. If not, here's the skinny: We are importing some built-in CXF Spring configurations (thanks CXF!) and using them to define our own REST server (we haven't written it yet) named "campaignREST". The "jaxrs:serviceBeans" element contains a list of services we define, in this case, one we are about to write called "com.coderoshi.service.CampaignService". The block between "jaxrs:features" turns on logging, so we can view incomming/outgoing messages and errors. Finally, the "jaxrs:entityProviders" contains a bean we are about to write to convert a JAXB Collection into an XML form consumable by Rails. xmlns:xsi="" xmlns:jaxrs="" xmlns:cxf="" xsi: <import resource="classpath:META-INF/cxf/cxf.xml" /> <import resource="classpath:META-INF/cxf/cxf-extension-jaxrs-binding.xml" /> <import resource="classpath:META-INF/cxf/cxf-servlet.xml" /> <jaxrs:server <jaxrs:serviceBeans> <bean class="com.coderoshi.service.CampaignService" /> </jaxrs:serviceBeans> <jaxrs:entityProviders> <bean class="com.coderoshi.service.JAXBCollectionProvider" /> </jaxrs:entityProviders> <jaxrs:features><cxf:logging /></jaxrs:features> </jaxrs:server> </beans> Finally, we need to tell the web server (Jetty) that it needs to load up Spring and also the CXF servlet (which handles requests and passes them off to our REST service). <?xml version="1.0" encoding="ISO-8859-1"?>Whew! With configuration out of the way, let's get to the fun part... coding! <web-app> > First, we'll start with the JAXB bean we eventually want to serialize. Just to keep it simple, we'll make public fields and forgo getters and setters. We're using Maven to build, so you'll need to create the package under src/main/java (where "java" is a sibling of the "webapp" directory. package com.coderoshi.service;Ah, the magic of JAXB! That's all you need to create an XML-marshallable object. Eat it, Rails (and you're about to...). Now we get to make our REST server. Create a new class named "com.coderoshi.service.CampaignService". Next we need to annotate the class with two annotations: @javax.xml.bind.annotation.XmlRootElement( name = "campaign" ) public class Campaign { public long id; public String name; public long budget; public Campaign() {} public Campaign( long i, String n, long b ) { id=i; name=n; budget=b; } } @javax.ws.rs.Path( "/campaignService" )These tell the server that this is a service available at the HTTP path "/campaignService", and that by default, calling this service will return XML (it sets the HTTP header's mime-type on response). @javax.ws.rs.ProduceMime( "application/xml" ) Let's write our standard CRUD operations (which Rails' ActiveResource will presume exist by default). It's important to note that we're faking data and operations. If you downloaded the zip provided, then you'll see I made the service database-backed (HSQLDB), but that's just for fun, and really is beyond the code here. Create (addCampaign) @Path( "/campaignService" )Rails presumes all adds are "POST" operations. It also assumes the path is the plural name of the resource ("campaigns") and post-fixed by the produced mime-type. In our case XML. On a successful update ActiveResource will assume a 200 (successful) response. We can play with error handling at a later time. Since we are expecting data is in XML form, we explicitly enforce it to be so with @ConsumeMime. Note that all of these method annotations are in the JSR 311 as the "javax.ws.rs.*" package. @ProduceMime( "application/xml" ) public class CampaignService { // ... define actions ... @Path("/campaigns.xml") @ConsumeMime( "application/xml" ) public Response addCampaign( Campaign c ) { System.out.println( "Adding Campaign: " + c.name ); c.id = 99; // Adding header Location=/{id} is required so Rails can extract an id return Response.status( 200 ).header( "Location", "/" + c.id ).build(); } Read (getCampaign, getCampaigns) @GETIn Rails REST reads are GET operations. It also presumes the pathnames begin with "campaigns", but notice this time the singular "getCampaign" has a param called "id". In CXF/JSR311 you can define that you expect a path to contain a variable - in this case, we expect an ID. However, since Java doesn't keep track of parameter names after compilation, the best thing you can do is name the parameter you want CXF to populate for you. Most basic types are supported, like String or int - in our case long. So when someone "GETs" our url "", the "getCampaign" method is called and the value "35" is passed in as parameter "id". @Path("/campaigns.xml") public List<Campaign> getCampaigns() { return Arrays.asList( new Campaign(1, "campaign #1", 500), new Campaign(2, "campaign #2", 600) ); } @GET @Path("/campaigns/{id}.xml") public Campaign getCampaign( @PathParam( "id" ) long id ) { return new Campaign(id, "campaign #"+id, 900); } Also notice that our getCampaigns path is the same as addCampaign. How can this be? Because although the URLs are the same, different operations take place on them (POST versus GET). This is how CXF differentiates between the operations and knows when to expect XML that conforms to our Campaign JAXB, or no arguments at all. Update (updateCampaign) @PUTHere we PUT Campaign XML to the server, along with the ID. Although technically we shouldn't require the ID (since it should already be in the Campaign data), Rails builds URLs that way, so we need to also. Don't ask me why - I don't know. @Path("/campaigns/{id}.xml") @ConsumeMime( "application/xml" ) public Response updateCampaign( @PathParam( "id" ) long id, Campaign c ) { System.out.println( "Updating Campaign: " + id ); return Response.status( 200 ).build(); } Delete (deleteCampaign) @DELETEFinally we handle delete. To delete an object you merely need to pass in the ID as a DELETE operation. Here we don't really do anything, just print to the console so you know it worked. @Path("/campaigns/{id}.xml") public Response deleteCampaign( @PathParam( "id" ) long id ) { System.out.println( "Deleting Campaign: " + id ); return Response.status( 200 ).build(); } } Yeah yeah, I know REST is not necessarily CRUD, but for our purposes the mapping is close enough, and we're going to run with it. Yay! We wrote our Java REST server - but we have a problem. Our "getCampaigns" method returns a "List Luckily for us JSR-311/CXF allows you to create your own custom providers. Providers are responsible for translating to and from one kind of data (an JAXB object, for example) from and to another (an XML stream). The one we're about to write has already been defined in the Spring context above under "jaxrs:entityProvider". Without too much ado, here it is in it's entirety: package com.coderoshi.service;Whew! That's some chunk of code, where to begin? Let's start at the top. import java.io.*; import java.util.*; import javax.ws.rs.*; import javax.ws.rs.core.*; import javax.ws.rs.ext.*; import javax.xml.bind.*; import org.apache.cxf.binding.http.strategy.EnglishInflector; @Provider @ProduceMime( "application/xml" ) public class JAXBCollectionProvider implements MessageBodyWriter<Collection<?>> { public boolean isWriteable( Class<?> type ) { return Collection.class.isAssignableFrom( type ); } public long getSize( Collection<?> l ) { return l == null ? 0 : l.size(); } public void writeTo( Collection<?> li, MediaType mt, MultivaluedMap<String, Object> headers, OutputStream os ) throws IOException { try { if( li == null || li.isEmpty() ) { os.write( "<nil type=\"array\" />".getBytes() ); return; } Marshaller marshaller = null; String envelopeName = null; for ( Object object : li ) { if ( object != null ) { if ( marshaller == null ) { JAXBContext context = JAXBContext.newInstance( object.getClass() ); marshaller = context.createMarshaller(); marshaller.setProperty( Marshaller.JAXB_FRAGMENT, true ); envelopeName = context.createJAXBIntrospector().getElementName( object ).getLocalPart(); // pluralize the collection envelopeName = new EnglishInflector().pluralize( envelopeName ); os.write( "<".getBytes() ); os.write( envelopeName.getBytes() ); os.write( " type=\"array\">".getBytes() ); } marshaller.marshal( object, os ); } } if ( envelopeName != null ) { os.write( "</".getBytes() ); os.write( envelopeName.getBytes() ); os.write( ">".getBytes() ); } } catch ( JAXBException e ) { throw new IOException( "broke", e ); } } } Notice that we annotate our class with @ProviderThese tell the framework that this is an available provider implementation and that it produces XML. We implement "MessageBodyWriter", which contains hook methods for the REST container. "isWriteable" checks if outbound object classes can utilize this Provider. In our case, we can use anything that is a Collection (which, of course, our List<Campaign> most certainly is). @ProduceMime( "application/xml" ) The "writeTo" method does the actual work of marshaling any approved Collection object directly into the output stream. Here is where things get weird. First we ensure that the collection contains objects. If not, we just output "<nil type=\"array\" />". This is a flag to Rails' ActiveResource that we would like to return an array, but we don't have any data to give it. We call the element "nil" since we don't have any objects, we can't exactly know what types they are. Assuming our collection contains value, we first wrap the collection within a pluralized version of the elements we envelope. For example, since our List contains two Campaign JAXB objects named "campaign", we wrap them in a root element named "campaigns". How can we know how to correctly pluralize words? We use a built-in CXF class called "EnglishInflector". We then define the type as "array" for the benefit of the ActiveResource consumer. That's it! If it seems like a lot, don't worry. When it's all together and chugging along, it's amazing how little code there really is (considering we're dealing with Java here). Running the REST Service Assuming no typos or environment problems, you can now run the server using the Maven Jetty plugin. Don't worry about building it, the plugin will ensure we are built before launching. So go to the project base dir, and type: mvn clean jetty:run-warNavigate your browser to "", and you should be treated to a list of your campaigns: <campaigns type="array"> <campaign> <id>1</id> <name>campaign #1</name> <budget>500</budget> </campaign> <campaign> <id>2</id> <name>campaign #2</name> <budget>600</budget> </campaign> </campaigns> Ruby on Rails Now on to the Ruby on Rails frontend, which gladly, is simpler. Would you expect less from Rails? Like the Java backend, we can build the project from scratch, or you can download the completed project here. rails rorNext, create the model file "app/models/campaign.rb" with the contents (assuming your server is at port 8080: cd ror class Campaign < ActiveResource::BaseThat's it for the ActiveResource. Rails presumes the rest (REST, get it? ha ha). It points the ActiveResource to the given site as a base URL, and from there constructs URLs based upon a pluralized version of the classname. For each of the following ActiveResource actions: self.site = "" end Campaign.find(:all) # GET covers it for the ActiveResource, but we need something to show the user. Luckily Rails makes this simple with scaffold generation. Campaign.find(7) # GET Campaign.create(data).save # POST Campaign.find(7).save # PUT Campaign.delete(7) # DELETE ./script/generate scaffold Campaign name:string budget:integerThis generates a controller, resources in the routing table, test cases, and attempt to generate a model in the form of an activerecord object. But since we already created our campain ActiveResource there, it just skips it. Next, we need to set up the server. Since we don't actually need Rails to run on a database anymore, we can configure the server to skip it. Open up your "config/environment.rb" file, and within "RailsInitializer" add the line: config.frameworks -= [ :active_record ]To stop active_record from loading (and thus any database requirements). Now, start up the rails server: ./script/serverDid that work? Probably not. Unless you are running edge Rails, there is a current bug (gasp!) that forces you to comment out the lines from "new_rails_defaults.rb" (4th line in the server startup stack trace): # ActiveRecord::Base.include_root_in_json = trueTry and start the server again. It should work this time. If you visit "", it will probably work. This is just a coincidence. # ActiveRecord::Base.store_full_sti_class = true Next make a few tweaks to your new rails CampaignController. This is an issue with Rails scaffold generation. It generated a Controller for an ActiveRecord, not an ActiveResource, so you need to modify for using ActiveResource objects. Under "def update", replace the AR.update_attributes method with the ARes.load and save methods. Like so: if @campaign.update_attributes(params[:campaign])with @campaign.load(params[:campaign])and replace if @campaign.save @campaign = Campaign.find(params[:id])with @campaign.destroy Campaign.delete(params[:id])Better yet, let's just rip out all the stuff we don't need. Here's the complete Controller: class CampaignsController < ApplicationControllerYou'll also need to make a slight change to the generated "new.html.erb" file. Again, this is because the scaffold generator we used presumes it's dealing with an ActiveRecord object, and we provide an ActiveResource. (more info if you care: The "form_for(@campaign)" line generated, given an ARec object, will generate a hidden "_method=put" field in the form. This tells the controller that you are going to update an object - which isn't true - you are creating one. So you really want to POST, not PUT) # GET /campaigns def index @campaigns = Campaign.find(:all) end # GET /campaigns/1 def show @campaign = Campaign.find(params[:id]) end # GET /campaigns/new def new @campaign = Campaign.new() end # GET /campaigns/1/edit def edit @campaign = Campaign.find(params[:id]) end # POST /campaigns def create @campaign = Campaign.new(params[:campaign]) if @campaign.save flash[:notice] = 'Campaign was successfully created.' redirect_to(@campaign) else render :action => "new" end end # PUT /campaigns/1 def update @campaign = Campaign.find(params[:id]) @campaign.load(params[:campaign]) if @campaign.save flash[:notice] = 'Campaign was successfully updated.' redirect_to(@campaign) else render :action => "edit" end end # DELETE /campaigns/1 def destroy Campaign.delete(params[:id]) redirect_to(campaigns_url) end end So, as long as I'm giving away free answers, put this in place of the new.html.rb form generated (don't forget your authenticity token): <form action="/campaigns/" method="post">Navigate to "" and check out your list of two campaigns. click "New campaign" to add one. Fill out the form and "Create". This then forwards you to "show" the newly created campaign, which is retrieved from the Java's REST service. Click "back" to return to the list of campaigns, and you should see your new one sitting there. Click "Edit" to make a change to the campaign, and "Destroy" to delete it. These actions don't actually do anything since we hard-coded our REST service. <%= hidden_field_tag "authenticity_token", form_authenticity_token %> <p> <label for="campaign_name">Name</label><br /> <%= text_field_tag "campaign[name]", "" %> </p> <p> <label for="campaign_budget">Budget</label><br /> <%= text_field_tag "campaign[budget]", 0 %> </p> <p> <%= submit_tag "Create" %> </p> </form> However, if you downloaded the sample projects and run them, then the REST service is backed by an HSQLDB - so adding/editing/removing values actually works. That's it! This convergence of technologies is precisely what ActiveResource was created to deal with. Simple REST serviced is why JSR311 (aka, JAX-RS) exists for. Simplicity of domains - what more could you want? RoRoRoJ? Ro3J? 5 comments: interesting post and i would like to read more but am getting this error when i try to subscribe "feed/" has no items. my error or yours?? I believe yours. The correct URL for the feed is: (without the rss.xml) Hello Eric, I have been all over the Internet trying to find you. You made the iBall app, correct? Please track me down, I have an interesting proposition to offer you. Webb Nelson, 800-678-8697, [email protected] . Thanks. Great article. I'm trying to expand the example to work with complex value objects. Imagine a Campaign that contains a List of Event objects for example. How best customize the xml in a situation like this? I've managed to get close by using JAXB annotations, but the "type=array" part is missing, which Rails doesn't like. It would be nice if the CXF provider could handle collections inside the value objects as well, but apparently it isn't applied recursively. Any ideas? @Mikael I'd alter the JAXBCollectionProvider to be more general, like a JAXBModelProvider - which can reflect an object, and generate the annotation you desire for collections, but let leaf objects (any not containing a collection) to just pass to other providers. Just a thought. I'd be interested to know if you get it to work.
http://www.coderoshi.com/2008/08/ruby-on-rails-on-rest-on-java.html
CC-MAIN-2014-35
refinedweb
3,728
58.69
. Default imports All these packages and classes are imported by default, i.e. you do not have to use an explicit import statement to use them: - java.io.* - java.lang.* - java.math.BigDecimal - java.math.BigInteger - java.net.* - java.util.* - groovy.lang.* - groovy.util.* were used to write a for loop which looked like in groovy you can use that too, but you can use only one count variable. Alternatives to this are or or Things to be aware of - Semicolons are optional. Use them if you like (though you must use them to put several statements on one line). - The returnkeyword is optional. - You can use the thiskeyword inside static methods (which refers to this class). - Methods and classes are public by default. Protectedin Groovy has the same meaning as protected in Java, i.e. you can have friends in the same package and derived classes can also see protected members. - Inner classes are not supported at the moment. In most cases you can use closures instead. - The throwsclause in a method signature is not checked by the Groovy compiler, because there is no difference between checked and unchecked exceptions. - You will not get compile errors like you would in Java for using undefined members or passing arguments of the wrong type. See Runtime vs Compile time, Static vs Dynamic. throw a MissingMethodException!: This way the block following the initialized definition is clearly an instance initializer. Another document lists some pitfalls you should be aware of and give some advice on best practices to avoid those pitfalls.
http://docs.codehaus.org/pages/diffpages.action?pageId=228187689&originalId=228163592
CC-MAIN-2015-06
refinedweb
258
66.03
Unity includes Windows Runtime support for IL2CPP on Universal Windows Platform and Xbox One platforms. Use Windows Runtime support to call into both native system Windows Runtime APIs as well as custom .winmd files directly from managed code (scripts and DLLs). To automatically enable Windows Runtime support in IL2CPP, go to PlayerSettings (Edit > Project Settings > Player), navigate to the Configuration section, and set the Api Compatibility Level to .NET 4.6. Unity automatically references Windows Runtime APIs (such as Windows.winmd on Universal Windows Platform) when it has Windows Runtime support enabled. To use custom .winmd files, import them (together with any accompanying DLLs) into your Unity project folder. Then use the Plugin Inspector to configure the files for your target platform. In your Unity project’s scripts you can use the ENABLE_WINMD_SUPPORT #define directive to check that your project has Windows Runtime support enabled. Use this before a call to .winmd Windows APIs or custom .winmd scripts to ensure they can run and to ensure any scripts not relevant to Windows ignore them. Note, this is only supported in C# scripts. See the examples below. Examples C# void Start() { #if ENABLE_WINMD_SUPPORT Debug.Log("Windows Runtime Support enabled"); // Put calls to your custom .winmd API here #endif } In addition to being defined when Windows Runtime support is enabled in IL2CPP, it is also defined in .NET when you set Compilation Overrides to Use Net Core. • 2017–05–16 Page amended with no editorial review
https://docs.unity3d.com/ru/2018.2/Manual/IL2CPP-WindowsRuntimeSupport.html
CC-MAIN-2021-31
refinedweb
244
50.94
QNX Developer Support pci_find_device() Find the PCI device with a given device ID and vendor ID Synopsis: #include <hw/pci.h> int pci_find_device( unsigned device, unsigned vendor, unsigned index, unsigned* bus, unsigned* dev_func ); Arguments: - device - The device ID. For a list of supported device IDs, see <hw/pci_devices.h>. - vendor - The vendor ID. For a list of supported vendor IDs, see <hw/pci_devices.h>. - index - The index (n) of the device or function sought. - bus - A pointer to a location where the function can store the bus number of the device or function found. - dev_func - A pointer to a location where the function can store the device or function ID of the nth device or function found with the specified device and vendor IDs. The device number is in bits 7 through 3, and the function number in bits 2 through 0. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The pci_find_device() function returns the location of the nth PCI device that has the specified device and vendor IDs. You can find all the devices having the same device and vendor IDs by making successive calls to this function, starting with an index of 0, and incrementing it until PCI_DEVICE_NOT_FOUND is returned. Returns: - PCI_DEVICE_NOT_FOUND - The device or function wasn't found. - PCI_SUCCESS - The device or function was found. - -1 - You haven't called pci_attach(), or the call to it failed. Classification: See also: pci_attach(), pci_attach_device(), pci_detach(), pci_detach_device(), pci_find_class(), pci_present(), pci_read_config(), pci_read_config8(), pci_read_config16(), pci_read_config32(), pci_rescan_bus(), pci_write_config(), pci_write_config8(), pci_write_config16(), pci_write_config32()
http://www.qnx.com/developers/docs/6.3.0SP3/neutrino/lib_ref/p/pci_find_device.html?lang=en
CC-MAIN-2013-48
refinedweb
261
56.35
Hi, here is the lab assignment that I'm currently working on : Using nested loops and objects in an applet Right now I'm stuck at these two steps : Quote: Now add some more code to your paint method so that it displays the terrain type where the player is currently located. This can be displayed near the bottom of the applet. To do this you use the drawString method of the Graphics class. Next, fill in keyPressed so that the user can move by using the arrow keys. Don't forget to repaint so they get an update on what kind of terrain they are on. The user should be able to move around now (after clicking the applet to gain focus). Code : import java.applet.*; import java.awt.*; import java.awt.event.*; public class TreasureApplet extends Applet implements KeyListener { private Island island; public void init() { island = new Island(10); addKeyListener(this); } public void keyPressed(KeyEvent e) { if (island.currentLocation() == Island.WATER) { return; } if (island.currentLocation() == Island.TREASURE) { return; } if (island.currentLocation() == Island.PIRATE) { return; } if (e.getKeyCode() == KeyEvent.VK_LEFT) { island.moveWest(); } repaint(); if (e.getKeyCode() == KeyEvent.VK_RIGHT) { island.moveEast(); } repaint(); if (e.getKeyCode() == KeyEvent.VK_UP) { island.moveNorth(); } repaint(); if (e.getKeyCode() == KeyEvent.VK_DOWN) { island.moveSouth(); } repaint(); } public void keyReleased(KeyEvent e) { } public void keyTyped(KeyEvent e) { } public void paint(Graphics g) { int x = 0; while(x < 10) { int y = 0; while(y < 10) { if (island.terrainAt(x,y) == Island.WATER) { //draw the water g.setColor(new Color(0,0,210)); g.fillRect(x*40, y*40, 40, 40); g.drawString("Splash! You got to the water.", 50, 430); } else if (island.terrainAt(x,y) == Island.SAND) { //draw the sand g.setColor(new Color(160,82,45)); g.fillRect(x*40, y*40, 40, 40); g.drawString("Ouch. Sand got into your sandals.", 50, 430); } else if (island.terrainAt(x,y) == Island.TREE) { //draw the treasure g.setColor(new Color(50,205,50)); g.fillRect(x*40, y*40, 40, 40); g.drawString("You are standing by a tree.", 50, 430); } else if (island.terrainAt(x,y) == Island.TREASURE) { //draw the treasure g.setColor(new Color(225,215,0)); g.fillRect(x*40, y*40, 40, 40); g.drawString("Congratulations. You found the treasure", 50, 430); } else { //draw the pirate g.setColor(new Color(0,0,0)); g.fillRect(x*40, y*40, 40, 40); g.drawString("It's the pirate. Run!", 50, 430); } //draw a square so the user can distinguish locations clearly g.setColor(Color.BLACK); g.drawRect(x*40, y*40, 40, 40); y++; } x++; } } } When I try to compile and display the applet, things got really weird when all 5 drawString strings appear at the same time. I mean, they sit on top of each other. It's weird because I have already used an if-else sentence to classify them, how in the world they still appear all at once? Another thing I would like to ask is the keyPressed method. I think I have wrote the code wrong somewhere because I don't see the player moving at all. Please take a look and give me some suggestions on how to change it. Thank you all so much. :D
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/8449-help-graphics-class-keypressed-method-printingthethread.html
CC-MAIN-2015-48
refinedweb
536
70.39
EMF Validation, Java 8 and nsURI ! Technical details on the issue If nsURI is not specified, the framework takes the first EClass with the corresponding name. In my case, there was a conflict with GMF Notation. There were two EClass named Connector. Until Java 8, it was always my EClass which was used so I never detected the potential issue. With Java 8, it is always the GMF one. Details on how to solve the issue When defining the target EClass, specify the nsURI following the convention explained in the tooltip Please note that in most cases, though not in mine :-(, a constraint applies to a single model namespace. In this case you would be able to specify a package at the constraint provider level which will avoid the issue. Going further It would be nice to warn the user when there are two elements found instead of simply taking the first one. This validation might be done both at runtime and at design time. When specifying the nsURI, the extension point defined is not easily readable directly in the tree: I opened an enhancement request to prevent issues like this and to improve usability. I proposed several solutions, and your ideas are highly welcome. Acknowledgement Warm thanks to Christian W. Damus for his help on the Eclipse forum as I investigated this issue.
https://community.bonitasoft.com/emf-validation-java-8-and-nsuri
CC-MAIN-2019-18
refinedweb
224
63.7
Calculating Expected Returns of Stock using CAPM Capital Asset Pricing Model(CAPM) is a popular model in finance for calculating the expected return of investments in the stock market. I’m going to explore it in detail in this post. CAPM calculates the expected return using the below formula Let’s dive deep into these variables. Risk-free Rate Historically, U.S. Treasury bills, notes, or bonds are considered as risk-free investments because they are fully backed by fed and are guaranteed to be paid no matter what. Yield rates of Treasury bonds vary depending on the term of the bond. Longer the maturity term, the better the yield rate. Financial analysts consider the yield rate of 10-year Treasury bond as a standard risk-free rate for calculations. The current yield rate of 10-year treasury bond is 1.74%. So the first variable Rf is 0.0174. Beta Beta is a measure of correlation. Beta effectively describes the investment volatility with respect to the overall stock market. It is calculated as follows r(i) = Return of the invidual stock r(m) = Return of the overall stock market Beta is basically the ratio of covariance of the stock to the variance of the market. To calculate beta, we first need to calculate the returns of the stock and the stock market. For this example, I would use Cathie Wood’s ARK active funds. ARK ETFs gained immense popularity lately due to their stunning performance. Simple Returns Let’s calculate the simple returns of S&P500 and ARK funds. # Calculating simple return stock_returns = (stocks_df - stocks_df.iloc[0, :])/stocks_df.iloc[0, :]*100# plot the rate of return over time stock_returns.plot(legend=True, figsize=(14, 8), linewidth=1) plt.axhline(y=0, linestyle='dashed', color='black', linewidth=1) plt.xlabel('Year') plt.ylabel('Return %') plt.title('S&P 500 and ARK ETF Returns in last 5 Years') From the plot, we can see that ARKK ETF has outperformed the market by a large margin. A lot of its success is due to Tesla($TSLA) which has a holding share of more than 10% and other innovative companies like Zoom, Teledoc, Shopify, etc., Log Returns Simple returns are asymmetric in nature. What I mean by that is, if a stock value goes from $10 to $20 then returns are 100% and if it goes down from $20 to $10, returns are not -100%. Log function keeps this behavior consistent. #Math Log(10/20) = -0.30 Log(20/10) = 0.30 Python code for calculating log returns: log_returns = np.log(stocks_df/stocks_df.shift(1))#Note: Shift function shifts data by desired number of periods. Shift(1) moves the data by 1 period. So, the expression becomes log(current_value/previous_value). Beta calculation Beta value is normally calculated using log returns since the log function keeps the change in value symmetric. #Covariance of the log returns returns_cov = log_returns.cov()# There are about 252 trading days in an year annual_cov = returns_cov*252annual_cov market_var = log_returns['S&P500'].var() market_var ##Output: 0.0001412928728106657annual_mkt_var = market_var*252 annual_mkt_var ##Output: 0.03560580394828776 The stock market gets a beta value of 1. As you might already know, S&P 500 index is usually referred to as the market index. # Beta is the ratio of annual covariance of the stock to annual market variancebeta = annual_cov/annual_mkt_var beta In investing world, beta > 1 signifies a stock that is more volatile than the market. More the volatile the stock is the riskier it becomes. Similarly, Beta < 1 signifies a less volatile and less risky stock. Finally, CAPM Take a look at the first formula again. Historically, market returns are around 8% annually. So, we would take 0.08 as market returns. We already know the risk-free rate(0.017) and beta values(calculated above). Let’s calculate the expected returns for ARK funds. Expected returns using CAPM from collections import defaultdictreturns_dict = defaultdict()tickers = [ 'S&P500', 'ARKK', 'ARKG','ARKQ']for ticker in tickers: returns_dict[ticker] = round((0.017+beta[ticker]['S&P500']*(0.08-0.017))*100,2)#print the expected returns of the stock for idx, val in returns_dict.items(): print(idx,'expected annual returns would be',val,'%') That’s all! ARKK has the highest annual returns at 9.45%. In my opinion, CAPM expected returns are very conservative. ARKK has Compound annual growth rate(CAGR) of more than 40%. In simple words, it has returned more than 40% annually in the last 5 years. Also, note that CAPM doesn’t factor in the expense ratio of the ETFs or mutual funds. Some of the actively managed funds are more expensive than passively managed index funds. So, you should keep that in mind while choosing your investment options along with their volatility and expected returns. I will have the code updated in my GitHub soon. I will talk more about CAGR and Sharpe ratio in my next article.
https://sathish-manthani.medium.com/calculating-expected-returns-of-a-stock-using-capm-d435af109986?source=post_internal_links---------7-------------------------------
CC-MAIN-2022-05
refinedweb
815
59.5
Today we released the Multilingual App Toolkit v3.1. This release provides several key fixes as well as new and improved features. Please note that due to updates to the setup process, you will need to perform a one time uninstall of MAT v3.0 or earlier before installing v3.1. Visual Studio Online builds While the Multilingual App Toolkit has supported local TFS builds for some time, building online was not available. The team took up the challenge to not only support v3.1 for online builds, but to enabled MAT v2.2 or greater as well. If you have an existing MAT v2.2 or greater project using Visual Studio Online, you can simply enable online builds and it will just work. For more information on Visual Studio Online builds, please see: Expanded import, export and recycling Round-tripping of resources with friends, family and professional translators has been supported since the first release of MAT. Soon thereafter, a recycling option was added so you could ‘import’ translations from other unrelated projects (or share between universal apps) without the need to send out similar resources for translation for a second (or third or fourth) time. In this release, we have merged the Import and Recycling options into the same user interface to save you some steps. One of the more common requests has been to extend this model to support CSV files as well. I’m happy to announce that CSV files are now supported. The export steps are the same as in previous releases, except for a new drop down that allows you to choose between .XLF and .CSV output formats. Importing provides the ability to import multiple files from multiple locations. Be sure to select the “Enable resource recycling” option if you are importing non-related projects. We’ll dive deeper into the Exporting, Importing and Recycling features in a future blog. Improved translation and suggestion results In previous versions the translation providers would return a confidence level of either 50% or 100%. This did not provide for any automatic differentiation between results based on case or punctuation differences. This made it difficult to easily select the ‘best’ suggestion as it was not guaranteed to be the top option. To help simplify picking the best recommended translation, the provider model and each provider differentiates by tuning the confidence level value. This ensures the preferred recommendation is always at the top of the suggestion list as well as the result you get when you select Translate All (or Generate Translations inside Visual Studio). And of course, no release is complete without addressing those little critters that sometimes make it into the product. Here is a list of the key fixes. - Enabled Windows 7 + Visual Studio 2013 installation support. To be honest, this was just an embarrassing miss for v3.0 - Improved and added support for Visual Studio Express, including Windows, Desktop and Web editions - Removed dependencies on the Visual Studio .config files to avoid future issues (see: Rename is disabled in Visual Studio with MAT) - Incorporated fix to ensure that the Store always showed the full list of the app’s languages (See: Store is not showing my languages) - Added better validation of translation result to prevent invalid XLIFF results - Fixed build failure if “####” was in the source or target resource string - Fixed offline first run issues with the language portal provider - Fixed support for Windows Phone (Sliverlight) projects in Visual Studio 2013 Express for Windows - Fixed support for Class Libraries, Windows Runtime components, as well as improved support for other project types - Added Microsoft Translation provider support for language neutral codes (ja, fr, it, etc) - Fixed loss of existing translation from first RESX in the project when converting from RESX to XLF files The team really focused on key features as well as addressing both reported and non-reported issues in this release. We are pretty excited about the features as well as the overall level of product improvement in v3.1. We hope you will enjoy the features and fixes in this release of the Multilingual App Toolkit. Thank you, The Multilingual App Toolkit team [email protected] User voice site: The title of the download page is still "Multilingual App Toolkit 2.0". Kinnara, Thanks for pointing this out. It is being fixed. Trying to use MAT with an ASP.NET MVC project in Visual Studio 2013 update 3 but having some issues. For example, does my configuration support resjson files? I created a folder MultiLingualResources but have not been able to enable MAT as it is looking for a resource file. Where should the resource file go? Is this a resjson file or a standard resource file? Should have mentioned I am using Windows 7 The current ASP.NET MVC support is limited to resx files. I have had a chance to test MAT with Update 3 yet, but it should work the same as update 2. Using Windows 7 is supported and should not be a factor. Cameron, What versions of VS does Multilingual Tool kit work with? MAT supports Visual Studio 2012 and 2013 – including the express editions – with v3.1. However we focus the majority of our testing on Visual Studio 2013 environments. I have MAT working fine with the MultilngualResources folder in an ASP.NET MVC project. Really nice! I have tried to move the MultilngualResources folder into a support dll where data manipulation is done for WebAPI to be able to translate some data values. The English text is being correctly supplied to the ASP.NET Razor page generation but the other languages are not being accessed when the CurrentUICulture is changed to the desired culture. Any ideas? Thanks Just updated to MAT 3.1 a few days ago. Now we are seeing this error for our Spanish XLIFF file. "The element 'trans-unit' in namespace 'urn:oasis:names:tc:xliff:document:1.2' has invalid child element target in namespace 'urn:oasis:names:tc:xliff:document:1.2'. List of possible elements expected: 'context-group, count-group, prop-group, note, alt-trans' in namespace 'urn:oasis:names:tc:xliff:document:1.2' as well as any element in names '##other'." MAT will no longer let us edit that file. However, the file builds corrected in VS2013 U2. When we open the file in the VS2013 XML editor no errors or warns appear. What is this message trying to tell us and how do we fix it? Thanks, Gyle P.S. It would be friendly to allow copying of the error messages to the clipboard. @Gyle Iverson, can you send the file to multilingual at Microsoft dot com so I can take a look. This sounds like a conversion to XLIFF issue. Just sent you the file. Thanks for helping. Hi, I am running into issues with the MAT, it seems to prevent the "Refresh Windows App" feature of Visual Studio from working. I have posted on the MS forums with further explanations, I would appreciate if you could take a look: social.msdn.microsoft.com/…/visual-studio-keeps-asking-me-to-restart-my-application-even-if-i-didnt-make-any-change Thank you! Fabien I have MAT 3.1.1250.0 installed, but it does not work in Visual Studio Online builds. I receive the following error: C:Program Files (x86)MSBuildMicrosoftMultilingual App Toolkitv3.0Microsoft.Multilingual.PriResources.targets(43,5): Error : Cannot start process because a file name has not been provided.
https://blogs.msdn.microsoft.com/matdev/2014/06/25/announcing-multilingual-app-toolkit-v3-1/
CC-MAIN-2018-47
refinedweb
1,245
56.55
Removing break from DIV Discussion in 'HTML' started by Robert Mark Bram, Oct 17, 2003. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads removing a namespace prefix and removing all attributes not in that same prefixChris Chiasson, Nov 12, 2006, in forum: XML - Replies: - 6 - Views: - 634 - Richard Tobin - Nov 14, 2006 `if (!p ? i++ : 0) break;' == `if (!p){ i++; break;}' ?, Apr 13, 2008, in forum: C Programming - Replies: - 12 - Views: - 986
http://www.thecodingforums.com/threads/removing-break-from-div.155347/
CC-MAIN-2014-42
refinedweb
110
83.15
Lock 9 to Lock 10 Migration Guide The following instructions assume you are migrating from Lock 9 to the latest Lock 10. If you are upgrading from a preview release of Lock 10, please refer to the preview changes. Otherwise, read on! If you just want a quick list of new features to see what changes Lock 10 has introduced, take a look at the new features page. The goal of this migration guide is to provide you with all of the information you would need to update your Lock 9 installation to Lock 10. Of course, your first step is to install or include the latest version of Lock 10 rather than Lock 9. Beyond that, take a careful look at each of the areas on this page. You will need to change your implementation to reflect the new changes, not only the initialization of Lock and your calls to Lock methods, but especially any customization options you were implementing may need inspected and changed. Take a look below for more information! General Changes and Additions User Profiles - The profile is no longer fetched automatically after a successful login, you need to call lock.getUserInfo. Redirect Mode vs Popup Mode - Lock now uses Redirect Mode by default. To use Popup Mode, you must enable this explicitly with the redirect option auth: { redirect: false }. - You no longer need to call parseHashwhen implementing Redirect Mode. The data returned by that method is provided to the authenticatedevent listener. Customizing Options - The showmethod no longer takes any arguments. You pass the options to the constructor and you listen for an authenticatedevent instead of providing a callback. You can listen for this event with the on method. Events Changed - Events have significantly changed between Lock 9 and Lock 10. The events that were emitted in Lock 9 are no longer used in Lock 10. The new events list for Lock 10 can be found on the Lock 10 API page. - Important notes about the new authenticatedevent: The authenticatedevent listener has a single argument, an authResultobject. This object contains the following properties: idToken, accessToken, state, refreshTokenand idTokenPayload. Most of them correspond to the arguments passed to the showmethod's callback. - See information on all of the new events in Lock 10 on the Lock 10 API page. Internationalization - Not all languages supported by Lock v9 are supported by Lock v10. Please see the i18n directory in the GitHub repository for a current list of supported languages in Lock. Removed Methods - The showSignin, showSignupand showResetmethods are no longer available. You can emulate the behavior of this options with the initialScreen, allowLogin, allowSignUp and allowForgotPassword options. - The getClientmethod and the $auth0property are no longer available. You can, instead, simply instantiate Auth0when using functionality from auth0.js. If you need help with how to do this, see the Using Lock with auth0js page. Changes to Customization Options Some existing options suffered changes, in addition to the beforementioned removals and additions. Please see below for brief descriptions, or consult the customization reference for more information. Display Options - The connectionsoption was renamed to allowedConnections. - The focusInputoption was renamed to autofocus. - The gravataroption was renamed to avatar and instead of taking trueand falseit now takes nullor an object. - The dictoption was split into language and languageDictionary. The languageoption allows you to set the base dictionary for a given language and the languageDictionaryoption allows you to overwrite any translation. Also, the structure of the dictionary has been changed. Theming Options - The iconoption was renamed to logo and namespaced under theme. Now you use it like this theme: {logo: ""}. - The primaryColor option was namespaced under theme. Now you use it like this theme: {primaryColor: "#ec4889"}. Social Options - The socialBigButtonsoption was renamed to socialButtonStyle and its possible values are "small"or "big"instead of trueor false. Authentication Options - The authParamsoption was renamed to params and namespaced under auth. Now you use it like this auth: {params: {myparam: "myvalue"}}. - The connection_scopesparameter under authParamsis now connectionScopes(under the authoption) auth: {connectionScopes: {'facebook': ['scope1', 'scope2']}}. - The popupoption was replaced by redirect which is namespaced under auth. If you previously used popup: truenow you need to provide auth: {redirect: false}. - The callbackURLoption was renamed to redirectUrl and namespaced under auth. Now you use it like this auth: {redirectUrl: ""}. - The responseType option was namespaced under auth. Now you use it like this auth: {responseType: "code"}. - The sso option was namespaced under auth. Now you use it like this auth: {sso: false}. Database Options - The disableResetActionoption was renamed to allowForgotPassword. - The disableSignUpActionoption was renamed to allowSignUp. - The defaultUserPasswordConnectionoption has been replaced by the defaultDatabaseConnection and the defaultEnterpriseConnection options. - The resetLinkoption was renamed to forgotPasswordLink. Other Options - The forceJSONPoption was removed. Further Reading - Some other options were added, see New Features page for details. - Check out the customization page for more details on all of the customization options that are available. - Take a look at the api page for more details on Lock 10's API. Upgrading From Preview Releases This section is only pertinent if you are currently using a pre-release version of Lock 10, and wish to update to the release version. It is a summary of what you absolutely need to know before upgrading between preview releases. For the full list of changes, please see the project's CHANGELOG. Upgrading from v10.0.0-beta.1 to v10.0.0-beta.2 - Renamed closemethod to hide. - Renamed connectionsoption to allowedConnections. - Renamed - Requiring the npm package has been fixed, you need to require('auth0-lock')instead of require('auth0-lock/lib/classic'). Upgrading from v10.0.0-beta.2 to v10.0.0-beta.3 - The profile is no longer fetched automatically after a successful login. To obtain it you need to call lock.getProfile(see the examples above for the details). Upgrading from v10.0.0-beta.3 to v10.0.0-beta.4 - The jsonpoption was removed. Upgrading from v10.0.0-beta.4 to v10.0.0-beta.5 - The constructor no longer takes a callback. See the examples above to learn how to use events instead. - The languageoption has been added, and the structure of the languageDictionaryoption has been updated. Upgrading from v10.0.0-beta.5 to v10.0.0-rc.1 - No API changes were made in this release. Lock: Table of Contents - Getting Started With Lock - Methods of installation and an example app setup - Lock Configuration Options - Altering Lock's behavior for your project's needs - Lock UI Customization - Customizing Lock's look and feel - Lock API Reference - The Lock API reference - Lock With auth0.js - How to use Lock alongside auth0.js - Lock 9 to 10 Migration Guide - What existing customers need to know to migrate from Lock v9 to v10 - Language Support and Custom Text - Languages supported and customization of text fields - Customizing Error Messages - Customizing error messages that are shown in Lock - Advanced Use Case: Popup Mode - An alternate Lock mode, not recommended in most use cases
https://auth0.com/docs/libraries/lock/v10/migration-guide
CC-MAIN-2017-17
refinedweb
1,159
59.09
OK let’s make this clear. You can totally call SQL Functions, SQL SPROCS, or any other raw SQL statement and use it in EF Code Only. What you don’t get is automatic or fluent API configuration statements that perform this mapping work for you and by default, no tracking occurs on the materialized objects (though you can manually do that work yourself to attach it), and the functions are not composable in LINQ statements that run on the server. This is all capable because of the SqlQuery<T>() method exposed on the Database type in EntityFramework. In a nutshell it directly executes the provided SQL statement by marshaling it to the underlying provider connection and materializes the result set as types shaped as T. Mapping occurs EXACTLY as property names in the target type (as far as I know there’s no overriding this behavior) so that means if you have a property named CustomerID then you need a column with the exact same name in the SQL statement. Note: The returned type does NOT need to actually be a class mapped in EF DbSet<T> types. It can be any type with a default constructor and properties with get/set (public visibility is not required). So here are my simple recommendations: In example imagine I have a sproc, GetPeople, that takes no parameters and returns a result set of Id int, FirstName varchar(50), LastName varchar(50) (doesn’t matter what the actual table/views are). I have this class to represent the output [DebuggerDisplay("FirstName = {FirstName}, LastName = {LastName}")] public class Person { public Guid Id { get; set; } public String FirstName { get; set; } public String LastName { get; set; } } I could map the sproc the Person type with the following method on my DbContext based type public virtual IEnumerable<Person> GetAllPeople() var results = this.Database.SqlQuery<Person>("execute dbo.GetPeople"); return results; That’s right folks. That is it. Of course you can see that you can call anything yourself here including your own SQL select statements or what not. It’s all based on convention. I've put together the worlds easiest working example solution (including database) that you can use to run this example (and others such as parameterized requests and SQL Commands). I hope this clears up things for people out there. Thanks Jimmy for the quick response on this... I have been spinning my head to trying to find a straight forward example on how this is done.... there are bits and pieces on Bing but no unified example like this one... you've simplified it just perfectly. Love this feature in EF, makes coding super fast and fun!
http://blogs.msdn.com/b/schlepticons/archive/2012/02/16/yes-you-can-execute-sprocs-with-ef-fluent-api-code-only.aspx
CC-MAIN-2014-23
refinedweb
444
59.53
Categories: S60 | Code Examples | Python | How To This page was last modified 04:23, 2 May 2008. Python debugging techniques From Forum Nokia Wiki This article aims to give developers ideas about how to better find the sources of errors in their PyS60 applications, as default error reports can be hard to understand and standalone applications don't even have error reports. Method 1 A simple way of finding out what is wrong in a sequence of code is to show a confirmation message after an operation is successfully performed. The message can either be printed on the screen or spoken by the phone. For example, let's say we want to alphabetize a list and append an element to it: import appuifw, e32, audio l=['alpha', 'beta', 'lambda', 'gamma'] n=len(l) i=0 while(i<n-1): j=i+1 while(j<n): if(l[i]>l[j]): a=l[i] l[i]=l[j] l[j]=a j+=1 i+=1 #If all goes well to this point, we can print a message... print "The list has been alphabetized" #... or have the phone tell us audio.say("The list has been alphabetized") l.append('omega') print "The element 'omega' has been added to the list" applock=e32.Ao_lock() applock.wait() #Tell the application not to terminate immediately Method 2 Known as the "exception harness", this method is more useful for standalone applications. It mimics a stack trace of the Python Script Shell. try: # Actual program is here.() Method 3 Here we record the error to a text log file. def main(): try: Your main code goes here. except: import sys import traceback import appuifw cla, exc, trbk = sys.exc_info() excName = cla.__name__ try: excArgs = exc.__dict__["args"] except KeyError: excArgs = "<no args>" excTb = traceback.format_tb(trbk, 5) errorString = repr(excName) + '-' + repr(excArgs) + '-' + repr(excTb) + '\n' print errorString appuifw.note(u'Application errors, see log file for more information', "error") file = open(u'C:\\Log.txt','a') file.write(errorString) file.close() raise if __name__ == "__main__": main() Here the errors will be recorded in the Log file at C:\\Log.text
http://wiki.forum.nokia.com/index.php/Python_debugging_techniques
crawl-001
refinedweb
353
65.62
Drool... After sooo many months of development, it's finally here. Public! We can write about it. We can show it off. Yes!!!!! Over the next few days developers, testers and program managers in the Outlook Mobile team will write about features they worked on here. But lets do some high-level overview here. So what's so cool about Windows Mobile 6? Well, it's not a comprehensive list but some of the things I like about WM6 are: ...and a ton more. This is the list off the top of my head of stuff that *I* enjoy. So where are some screenshots, you ask? Fair question. In my excitement I don't want to hold this post for screenshots. They'll follow. As a recent purchase has given me the joys (and some woes) of a JasJam, it is good news to hear that Looks like folks are starting to talk about Windows Mobile 6. John Kennedy says they can now talk about Creating Custom Configuration Sections in Web.config [Via: ] ASP.NET AJAX Goodies: Documentation Download,... In my own personal protest of today's major story , this entry was created on my Windows Mobile phone CONGRATS GUYS! I can't wait to pick one up and use the spiffy UI and new features! Looks like a great jump forward. Does the product map including back-porting the fix for namespaces (ie. deleting items in IMAP) for existing devices like the recently released Samsung BlackJack / Sprint Moto Q -- which are all WinMo 5 devices? Actually, I don't know. There are some critical fixes that we do backport, but I don't know what the criteria is. I should find it, it'll be good education for me. Looks good! Any plans for supporting multiple calendars in Outlook Mobile like in Outlook? is that possible that outlook mobile on WM6 does not have any filter to only view unread mails? Re: Filter e-mails. Yes, it is possible that WM6 does not have any filter to only view unread emails. As sad as it sounds, it is actually an architecture limitation from a long time ago. Now that we support flags, I'd love to be able to just see unread emails, flagged emails, important emails etc. etc. etc. Sadly, we can't :(. Not right now anyway. We do not support Multiple Calendars in Outlook Mobile yet. As for the plans, I am not in the Calendar Team. Let me ping our Calendar folks to see if they have something to add. wow...can´t beleive it...i am so used to filter by unread mails on Outlook (PC version)...but thanks for your answer! "Keyboard Shortcuts in Messaging" -- I couldn't get this feature work on my WM6 device (engineering device). I tried Device Emulator and it didn't work too. I am using Pocket PC Phone (WM6 Professional). Does this feature only work on Smartphone (WM6 Standard)? "Out of Office messages" -- I couldn't find this menu in my WM6 device, even in emulator. Does it need new Exchange Server? "flag e-mail messages" -- I couldn't find this menu too. Does it need new Exchange Server? You need Exchange 12 for those features to work. "Keyboard shortcuts" should work, but i you do not have a shipping device then this may be a driver issue. Thank you, Zhamid. Does "HTML Mail" also needs Exchange 12 to make it work? David ...and you can manage your favorite SQL Server from the Smartphone near you - use SiccoloSP! as per.. -------- any feedback from the calendar team ? I, too, would like the ability to have both my Corp Exchange calendar (via activesync) and another web based cal, ie Google Calendar avail within the same calendar client such as Outlook . Someone must be doing something to solve this request ?? Another vote for the criticality of multiple calendars. Use case/example: a work calendar and a personal calendar. Desktop Outlook manages this nicely today. What I am, and it seems the world is, looking for is the ability to store our Outlook CALENDAR entries on our smartphone multi-gigabyte SD cards. When will this be available? Thanks for listening. Multiple calendars would be awesome, since I manage 6-7 techs that use our companies calendar. Another vote for multiple calendars on Outlook Mobile... the ability to sync individual calendars separately from different sources (eg, personal, work, etc.). At the moment, I'm forced to choose one source to prevent contaminating one or the other calendar(s). Love the new WM6 interface and speed over WM5! Why can't I sync my Outlook 2007 calendar (PC only) with my Windows Mobile Pocket PC? It did it when I set up the partnership but has not happened since. Another vote for multiple calendars on Outlook Mobile!! Multiple Calendars, Contacts and Tasks !!! YES ! How could this have been overlooked ? We have multiple email folders, why criple other folder types ? Just as importantly, please include the ability to access SHARED folders(NOT just public) from other users on an Exchange server. Multiple calendars would make my life so much easier today You have to figure that there are - oh - a few million WM users out there....all of 'em have an office life and a home life to manage. Multiple calendars should be the STANDARD - not something we need to beg for. Oh, yah, it's Microsoft..... I'm a happy owner of a new Blackjack II smartphone. What I found is I *can* sync to multiple calendars - but that's using the "Missing Sync" software I purchased which picks up all the calendars from iCal on my Mac. Moving to a Windows platform is where I find the same problem being voiced here. Microsoft gives us multiple calendars in Outlook but restricts the use to only one calendar in WM6. This really needs to be patched and soon. Yet another vote for multiple calendars - no offence, but this is such an oversight, and is really hindering the usefulness of calendar syncing for me. Which is a drag 'cos I don't want to get my laptop out or start using a paper calendar evvery time I want to schedule anything...! Multiple Calendars would cause me to drop my Blackberry in a heartbeat and switch to WM6. How do I stop my Exchange mail from filling up my device's main storage space? I would like to offload it to a multi-GB external MicroSD card, but the software does not allow you to change the location of the messages. I don't want to delete messages on the phone because they get deleted on the Exchange server as well and there is no way to change the setting in the OS. Did a search hoping for multiple calendars to be support, but to no avail. Will have to figure some kind of workaround i guess. I want to be able to schedule and sync jobs in a public folder away from the desk. Pubilc calendars please!!!!!!!!!!!! we are a consulting company that uses public calendars for scheduling. I have people driving back to the office to check the schedule!!! thats absurd, every person here has a $500 phone and thousands invested in exchange 2007 R Is windows Mobile 6 compatible with the new nokia symbian handsets or Apple iPhones? Another vote for multiple calendars in WM6. This is a badly needed feature! Another vote for multiple calendars! I would like to be able to have sync my work and personal calendars. I, too, would like to have the ability to track several calendars, including the office calendar that is shared. Also, I would like my contacts to have a "history" button like the phone's "call history." another vote for multiple calendars! Multiple calendars would be a great feature for the windows mobile platform!! Is there a way to switch the softkey function in Outlook Mobile? For example, within Inbox view how does one configure the Left Softkey to "Delete" instead of "New"? It's best to have "Delete" without needing to hit "Menu" then scroll to "Delete". I need a calendar for work and for personal...even if I have to install an additional application for a second calendar I will...HELP!!! Multiple calendars is desperately needed for me, too. "Calendar A" + "Calendar B" + "Calendar C" + "Calendar D" + "Calendar E" = "MULTIPLE Calendars". Please! Oh, and the ability to network sync imap calendars. ___________ Hello....multiple calendars a need!!! Why hasn't it been provided yet! Multiple calenders, please!!!! I quite disappointed about WM6 - bought a Palm Treo 500 thinking it would do everything you always show off except no stylus - wrong, wrong, wrong! I am e.g. missing the "Out of Office" function - not too much to ask, is it? Another vote for multiple calendars! I would like to have access to the public calenders of my team. Another vote for multiple calendars function! I'll toss in my vote for multi-calendar support. It's a pretty silly thing for MS to overlook. Any updates on multiple calendars? Please - Multiple calendar. The PALM OS was capable of doing this through the hotsync cable and pockmirror professional. Did I mention that multiple calendars are essential? I would really like multiple calendars. How about adding multiple calendars to windows mobile devices? Make it work with SYNCml as well as I spent so much money on the smartphone I couldn't afford exchange... the stupid imap does not work...cant believe it...emails still disappear from the Sent folder...i am so frustrated... jacek: Can you tell share more about what's not working for you? Are you syncing your Sent Items folder? If yes, then do the emails first show up in it and then disappear, or do they never appear? What pattern is there in their disappearance? Also, I'd suggest you make a post at with all this information. The forum is great for figuring out issues like this, and we keep an eye out in the forums for difficulties people are having, to improve our future versions. Well I found this page doing a search for "multiple calendars", like many people I guess. I run a dog walking company and I use ten plus calendars on my desktop Outlook to handle employee schedules. The fact that I can't sync these to my Pocket PC running WM5 makes me feel like this is 1988, not 2008. What the heck was Microsoft thinking - do they know *anything* about business? So I guess I was presuming that in WM6, they'd corrected their inexcusable oversight - I was obviously wrong. *Still* no multiple calendar support? Well I obviously won't be upgrading to a WM6 device any time soon then! Microsoft - what *were* you thinking? Please bring yourselves into the 21st century. I have an AT&T Tilt with WM6 and I cannot flag emails, are you sure that you can? Since switching from a Blackberry back to a WM phone, I have also been looking to sync with multiple calendars. I am currently using Google Calendar Sync, but would greatly appreciate Sync Center (ActiveSync) supporting multiple calendars. + 1 million on multiple calendar support! I am staggered that Microsoft do not have this feature on the mobile software. Have they not thought of it?, can't do it (I don't believe that), or can't be arsed? WTF do they do all day? I'm new to windows mobile, so far I liked it a lot, helps me big time planning and organising my work. Until I started to look for a solution to sync my personal calendar as well. another vote for multiple Calendars! Really don't like the idea that I will need to download some extra soft in order to accomplish this. One more vote for the multiple calendar feature!!! Could someone please take these requests seriuos and respond to what exactly Microsoft have in mind regarding solving this issue!!! Someone said he wuld "ping" the calendar folks... are they ignoring this? Is this too difficult to solve for them? In the same context it would be great if you could look at the "multiple contacts folder" problem... The exact same problem... Multiple calendar support! Is it coming in a future release or not? Any news on future multiple calendar support? I can't believe MS is getting so much feedback on multiple calendars and they are not even responding. Better icons and sounds are nice, but COME ON!!!! Lets have something useful already! Pretty frustrated that no multiple calendar / multiple profile support (for various profile's tasks, notes) is not available. NEED MULTIPLE CALENDARS!!! Another vote for Multiple Calendars - why the heck has this not been patched yet! This discussion started so long ago. I run a business, sports team, social club, and private life calendar and cant get them all on my phone which lives with me but I can't tell what is coming up next. Wow, and I thought I was the only one that needs multiiple Calendars. I couldn't even find any third party apps that do it. Seems like a basic feature that should be added. Iphone support multiple calendars. I tested. when am I going to be able to have multiple calendars. I want to have one sync my personal stuff via active sync and the one sync to my business via exchange. its pretty sill that I still can't do this. Multiple calendar support is Key!!!!!! People use their phones for business and personal use. We dont want to share the personal calendar but still need to operate off it on the phone. Man, win mobile needs multiple calendar support. Badly. I too found this page doing a search for "multiple calendars". Like so many others I was hoping the WM6 would have this feature by default. As a WM5 customer I am looking for a device that can handle this and was expecting to see it on the feature list for WM6. Too bad. Dang! I have to try the iPhone to get multi-calendar support? Com'on Microsoft get with it, you have people crying out for this feature!!!!! What about syncing multiple calendars? I also need multiple calendar function. (personal, family and work) I take it that zhamid no longer works for MS. Has anyone heard anything about multiple calendar support? Perhaps any 3rd party apps to do it? Multiple Calendars would be most excellent. Personal and business don't play nice together. Does anyone know a software to mix multi calendar? Multiple calendars! Control over direction of sync would be nice too. Classing appointments to specific accounts would also help prevent passing from one account thru WM to another acount I also think it's high-time we got some support for multiple calendars. Not having this capability makes me question why I have been so loyal to windows mobile for this long!?! This is a change that needed to happen 5 yrs ago!!! Hey, does anybody think that WM6 should support multiple calendars? So we all know M$FT isn't listening, and I'm sure SOME of you have tried some 3rd party software as a workaround. What are your results? What have you used and is in good or bad? Well, it looks like every man and his dog wants and expects multiple calendar support on their Windows Mobile smartphone - not surprising, since this is one of the most obvious features I can think of. Yet Microsoft have not only failed to implement this most basic feature (for NO reason that I can think of), but they also apparently refuse to even discuss it with anyone. No reason given. No word on any plans, progress, no acknowledgment. Can someone please explain to me how in the hell Microsoft has such a large market share? It really is time we started kicking their asses over this! When you think of how much money they spend, how large their operation is, how much is spent on advertising campaigns telling us how sophisticated and thoughtful their business software solutions are, how they're the cutting edge of software development....and yet they completely neglect a feature that is so obviously essential it's like they released a word processing program with no "print" function. For the life of me, this is one of those things that I will just NEVER understand. Hey all, Well, I see all the comments about multiple calendars and I figured I'd post about it as well, since I also got here from Google, searching for Windows Mobile Multiple Calendars :-) So yes, PLEASE! What is going on with this? When will MS release an update for WM that has multiple calendar support? I know quite a few people who would like to see this feature yesterday. -f. @Jesse B: Iphone & Ipod touch have a pretty cool multiple calender system, that even can be syncronised over the air! That's what ALL windows mobile (6) users are waiting for... So let's pray it will come with the update of Windows Mobile 6.5... Otherwise I think microsoft will lose more Windows Mobile users. Multiple Canenders/Contacts Sync, is there anything else, other than Google sync that will do this. We dont seem to be getting any answers from Microsoft. Do we go IPhone ? I would like to see multiple calendars, contacts and tasks so that I can maintain for work and home. I had been using Intellisync to do this on my work laptop. It could merge and manage my Exchange email/tasks/calendar and contact along with my personal items (same as Exchange) from a .pst. Nokia brought and killed that product. Now I am using PocketMirror for Windows Mobile (). I am happy with it but I have to connect to my laptop to update my phone. I would like to see the ability for ActiveSync to work with multiple sources (Exchange, GMail, Another Exchange server), sync everything over the air and give me a consolidated view of the accounts that I want to see. Does anyone know if the iPhone will do this? From the preview of the Palm Pre, it looks like it may do this. I may have to jump off the Microsoft train at the next stop! Hi everyone! My name is Andy Vanosdale, a developer on the Windows Mobile Messaging team. I want to thank you all for being great users of our phones! I want to assure you that your comments and feedback do not go unnoticed. Multiple calendar support is one feature request that we know a lot of users would like us to add. I can not comment on future plans but do know that your comments and feedback are taken very seriously. Thanks again for being great users of our product! Andy If you would like to receive an email when updates are made to this post, please register here RSS Trademarks | Privacy Statement
http://blogs.msdn.com/outlook_mobile/archive/2007/02/07/windows-mobile-6-is-here.aspx
crawl-002
refinedweb
3,173
75.61
Hi Akim. On Wed, 26 Aug 2009, Akim Demaille wrote: > I have no opinion about this. What are the others doing? Say coreutils for > instance :) I just found this in the coreutils README: *********************** Pre-C99 build failure ----------------------- There is a new, implicit build requirement: To build the coreutils from source, you should have a C99-conforming compiler, due to the use of declarations after non-declaration statements in several files in src/. There is code in configure to find and, if possible, enable an appropriate compiler. However, if configure doesn't find a C99 compiler, it continues nonetheless, and your build will fail. If that happens, simply[*] apply the included patch using the following command, and then run make again: cd src && patch < c99-to-c89.diff [*] however, as of coreutils-7.1, the "c99-to-c89.diff" file is no longer maintained, so even if the patches still apply, the result will be an incomplete conversion. It's been 10 years. Get a decent compiler! ;-) > > If this is ok, we should probably add a configure.ac check for C99. Currently, bison and coreutils use AC_PROG_CC_STDC, which currently tries to find a C99 compiler but will accept C89. I think that makes most of the quote from coreutil's README true for bison. Maybe we should just put something like that in bison's README in branch-2.5 and master. > > And what about C99 for building Bison-generated parsers? > > I doubt we can do that. For a start we could move to C90 in yacc.c :) > > > #ifdef YYPARSE_PARAM > > #if (defined __STDC__ || defined __C99__FUNC__ \ > > || defined __cplusplus || defined _MSC_VER) > > int > > yyparse (void *YYPARSE_PARAM) > > #else > > int > > yyparse (YYPARSE_PARAM) > > void *YYPARSE_PARAM; > > #endif > > #else /* ! YYPARSE_PARAM */ > > #if (defined __STDC__ || defined __C99__FUNC__ \ > > || defined __cplusplus || defined _MSC_VER) > > int > > yyparse (void) > > #else > > int > > yyparse () > > > > #endif > > #endif It would be nice to see that cleaned up. Of course, yacc.c uses some M4 macros to hide this away for us, but even that could be easier to read. > If I read correctly our parse-gram.c, we have two errors: > > > # ifndef yytnamerr > > /* Copy to YYRES the contents of YYSTR after stripping away unnecessary > > quotes and backslashes, so that it's suitable for yyerror. The > > heuristic is that double-quoting is unnecessary unless the string > > contains an apostrophe, a comma, or backslash (other than > > backslash-backslash). YYSTR is taken from yytname. If YYRES is > > null, do not copy; instead, return the length of what the result > > would have been. */ > > static YYSIZE_T > > yytnamerr (char *yyres, const char *yystr) > > { > > > and > > > static YYSIZE_T > > yysyntax_error (char *yyresult, int yystate, int yytoken) > > { Have we seen any complaints from the users about this? Then again, I suppose not every user enables this code. > All the other functions are KnR compliant too. Maybe we should poll the > audience, say via NEWS. What about the following? It should probably remain at the very top of 2.4.2 news. diff --git a/NEWS b/NEWS index 57db6c2..4e29918 100644 --- a/NEWS +++ b/NEWS @@ -122,6 +122,18 @@ Bison News * Changes in version 2.4.2 (????-??-??): +** User Poll: Bison to drop K&R C support. + + Bison has always generated LALR(1) parsers that can be compiled using + a K&R C compiler. However, K&R C is over 30 years old. The Bison + developers are ready to take a step forward. For release 2.5, we are + considering requiring a C89/C90 compiler. If you feel this is an + unreasonable change that would adversely affect Bison's user base, + please write us at <address@hidden>. + + To be clear, building the `bison' executable itself already requires a + C89/C90 compiler. Building release 2.5 will require a C99 compiler. + ** Detection of GNU M4 1.4.6 or newer during configure is improved. ** %code is now a permanent feature.
http://lists.gnu.org/archive/html/bison-patches/2009-08/msg00085.html
CC-MAIN-2014-35
refinedweb
628
68.16
This is an EXTREMELY simple module for scripting a telnet session. It uses abbreviated versions of the commands exported by telnetlib followed by any necessary arguments. An example of use would be: import telnetscript script = """ru Login: w %(user)s ru Password: w %(pwd)s w cd ~/interestingDir w ls -l ra w exit c """ user = 'foo' pwd = 'bar' conn = telnetscript.telnetscript( 'myserver', vars() ) lines = conn.RunScript( script.split( '\n' )) This assigns lines the value of the output of "ls" in "~/interestingDir" for user foo on myserver. Discussion This could certainly be made a bit smarter, but it's a good starting point, and I have found it simplifies doing telnets from Python TREMENDOUSLY. Also try Pexpect for scripting SSH or anything else. Also try Pexpect for scripting SSH or anything else. Take a look here: Still beta, but very stable and easy to use. not on Windows. Pexpect only works on *nix. Is there any chance to get hands on the complete version, if it still exists, since this one here is truncated. Tia
http://code.activestate.com/recipes/152043/
crawl-002
refinedweb
175
65.62
Hi, becasue I'm unable to use new WIX multilingual app in my website, I have written a code for headers buttons that are common for all pages in different languages. For example English page I should be able to scroll down to different anchors defined in that page when I click any of the corrosponding buttons, so I have written the below code, but it doesn't seems to be working properly. import wixUsers from 'wix-users'; import wixData from 'wix-data'; import wixLocation from 'wix-location'; export function Home_click(event) { wixLocation.to(`/ `); } export function About_click(event , $w) { $w("#anchor4").scrollTo(); } Any idea what am doing wrong with my code? @givemeawhisky Thanks it worked out for me. Actually I just noticed that my buttons name are not the same as the one mentioned in the properties panel, so my above code was correct. Thanks again :)
https://www.wix.com/corvid/forum/community-discussion/how-to-scroll-to-an-anchor-within-same-page-using-corvid-code
CC-MAIN-2019-47
refinedweb
146
60.24
Try it: Display data from a sample SQL database In Microsoft Expression Blend, you can work with XML data sources and common language runtime (CLR) object data sources. XML data sources are simple to work with, but CLR object data sources are much more complex. The following procedures show you how to display data from a CLR data source in your Expression Blend application. The first two tasks involve obtaining data from a sample database and converting the data to a format that Expression Blend can bind to. The third task involves creating an Expression Blend project that has elements that are bound to the data. The following procedure describes how to create a class library in Visual Studio 2008 to populate an instance of a DataTable with data from the AdventureWorks sample database. To define and fill a DataTable On the File menu of Visual Studio 2008, point to New, and then click Project. In the New Project dialog box, under Project Types, click Visual C#. Under Templates, click Class Library. Name the new project AWDataSource, and then click OK. Visual Studio generates the code for your new class library project and opens the Class1.cs file for editing. In the Class1.cs file, change the name of the public class definition from Class1 to ProductPhotosCollection (this name is more descriptive). In Solution Explorer, right-click the name of your project (AWDataSource), point to Add, and then click New Item. In the Add New Item dialog box, select DataSet from the list of templates, name the item ProductPhotos.xsd, and then click Add. A dataset is added to your project in the form of a schema file and supporting class files. Additionally, the schema file is opened for editing. In Server Explorer, right-click Data Connections, and then click Add Connection. In the Choose Data Source dialog box, the Data source field should already list Microsoft SQL Server (SqlClient). In the Server Name field, enter the name of the instance of SQL Server on which the AdventureWorks database is installed. Under Log on to the server, select the authentication method that is required to log on to your instance of SQL Server. You might have to contact the server administrator for that information. Windows Authentication uses your current logon credentials. SQL Server Authentication requires the user name and password of the account that is configured to have access to your database. Under Connect to a database, select the AdventureWorks database, which will be visible only if your logon credentials are correct, if the AdventureWorks database is installed on your computer, and if your computer is running SQL Server. Click the Test Connection button. If the test connection is unsuccessful, see your SQL Server administrator for help. Click OK to complete the creation of the data connection. In Server Explorer, a new connection appears under the Data Connections node named <servername>.AdventureWorks.dbo, where <servername> is the name of your server. In Server Explorer, expand the new <servername>.AdventureWorks.dbo connection node, expand the Tables node, and then locate the ProductPhoto table. With the ProductPhotos.xsd file open on the design surface, drag the ProductPhoto table from ServerExplorer onto the design surface. You now have a typed dataset that can connect to the AdventureWorks database and return the contents of the ProductPhoto table. In the Class1.cs file, add the following method inside the ProductPhotosCollection class: The ProductPhotosTableAdapters namespace is defined in the ProductPhotos.Designer.cs file, which was generated by Visual Studio when you created the ProductPhotos DataSet. You now have a method that will fill an instance of a ProductPhotos DataTable with data when your application is run. Build your project (CTRL+SHIFT+B) to make sure that it contains no errors. The following procedure describes how to create a class library in Visual Studio 2008 to convert data from a DataTable to an ObservableCollection so that Expression Blend (or any application that uses Windows Presentation Foundation (WPF)) can bind to the data. You will define a ProductPhoto class to represent the data in a table row, add a collection of ProductPhotos to ProductPhotosCollection as a private member, and then add a public accessor (a get method) so that code from outside the class can access it. To adapt the data collection to a WPF collection In Visual Studio 2008, right-click your project name in Solution Explorer, and then click Add Reference. On the .NET tab, select the WindowsBase assembly. If you do not see the WindowsBase assembly listed, click the Browse tab and locate the WindowsBase.dll assembly in your %SystemDrive%\Program Files\Reference Assemblies\Microsoft\Framework\v3.0 folder. Click OK. The WindowsBase assembly implements the System.Collections.Object.ObservableCollection class. At the top of the Class1.cs file, add the following statement: Also in the Class1.cs file, add the following ProductPhoto class definition to the AWDataSource namespace so that you have a class to work with: Add the following member to the ProductPhotosCollection class: Add the following accessor method to the ProductPhotosCollection class: The next steps involve copying the ID, the modified date, and the two photos from the DataTable into the ObservableCollection. Right-click your project name in Solution Explorer, and then click Add Reference. Add a reference to the PresentationCore assembly. At the top of the Class1.cs file, add the following statements: Add members to the ProductPhoto class so that the class looks like the following: public class ProductPhoto { // Public Accessors to the private properties. public int ID { get { return id; } } public ImageSource ThumbNailPhoto { get { return thumbNailPhoto; } } public ImageSource LargePhoto { get { return largePhoto; } } public DateTime ModifiedDate { get { return modifiedDate; } } // Constructor. public ProductPhoto(int id, byte[] thumbNailPhoto, byte[] largePhoto, DateTime modifiedDate) { this.id = id; this.thumbNailPhoto = ByteArrayToImageSource(thumbNailPhoto); this.largePhoto = ByteArrayToImageSource(largePhoto); this.modifiedDate = modifiedDate; } // Private properties. private int id; private ImageSource thumbNailPhoto; private ImageSource largePhoto; private DateTime modifiedDate; // Supporting method. private ImageSource ByteArrayToImageSource(byte[] data) { BitmapImage image = null; if (null != data) { image = new BitmapImage(); image.BeginInit(); image.StreamSource = new System.IO.MemoryStream(data); image.EndInit(); } return image; } } Add the following code to the ProductPhotosCollection class at the end of the GetData method so that the method copies the DataTable into the ObservableCollection: Now, as a convenient way of triggering the ProductsPhotosCollection.GetData method, you can implement a Command. Right-click your project name in Solution Explorer, click Add, and then click Existing Item. In the Add Existing Item dialog box, browse to the DelegateCommand.cs file in the Expression Blend samples folder, %SystemDrive%\Program Files\Microsoft Expression\Blend\Samples\<language>\ColorSwatch, and then click Add. Change the namespace from ColorSwatch to your namespace name (AWDataSource). The code in the DelegateCommand.cs file enables you to bind any command to your method. In the Class1.cs file, add the following member to the ProductPhotosCollection class: Add the following constructor to the ProductPhotosCollection class to initialize the command: Finally, expose the command by adding the following accessor method to the ProductPhotosCollection class: Build your project (F5) to make sure that it contains no errors. You now have a class that you can use as a data source in an Expression Blend (or any WPF) application. This class will be either ProductPhotosCollection or an equivalent class if you defined your own. The following procedure describes how to create a very simple Expression Blend application that has a ListBox control that is bound to your data source. The application uses a common user interface design pattern known as a master-details view. The left pane, named the master pane, will contain the product list. Whenever you select a product in this pane, the details about that product will be displayed in the right pane, named the details pane. Updating the content of one pane when an element is selected in another pane is accomplished by using data synchronization between controls. To bind procedures to the data source in Expression Blend In Expression Blend, click File, and then click New Project. In the New Project dialog box, select the WPF Application project type. This creates a project for a Windows-based application that you can build and run while you are designing it. The other option is a WPF Control Library project, which you can use for designing controls for use in other Windows-based applications. In the Name text box, type AWProductPhotos. Leave Language set to the default, because this procedure has no handwritten code. Click OK. Expression Blend loads your new project and displays it for editing. After your new project is loaded into memory, save it to disk by clicking Save All on the File menu. The Name text box should already include the name AWProductPhotos, so click OK. On the Project menu, click Add Reference. In the Add Reference dialog box, browse to the AWDataSource.dll file that you built at the end of the second task in this topic to add a reference to it. The AWDataSource.dll file will likely be in the bin/Debug folder of your AWDataSource project. Click OK. The AWDataSource.dll is now a part of your project. If you expand the References node in the Projects panel, you'll see a reference to AWDataSource.dll. In the Data panel, click Add live data source , and then click Define New Object Data Source. In the Define New Object Data Source dialog box, expand the AWDataSource node, select ProductPhotosCollection, and then click OK. In the Data panel, a data source named ProductPhotosCollectionDS has been added to your project. The ProductPhotosCollectionDS data source represents the structure of an instance of the CLR class that you referenced. Expand ProductPhotosCollectionDS and ProductPhotosCollection to see the structure. In a later step in this task, you will drag data onto the artboard from in the Data panel to create new controls. In the Objects and Timeline panel, click LayoutRoot to activate it. When you activate the element, notice that a shaded bounding box appears around its name. In the Tools panel, click Selection . On the artboard, move your pointer over the thick ruler area at the top of LayoutRoot. A column ruler will follow your pointer, indicating where a new column divider will be positioned if you click. Click to create a new column divider, making the left column about the same width as the right column. The left column will contain a list of product photo thumbnails, and the right column will contain a large photo that represents the selected list item. A column divider appears inside LayoutRoot. On the artboard, move your pointer over the thick ruler area on the left side of LayoutRoot. Click to create a new row divider, making the top row large enough to fit a button into. Click the open padlock icon that appears next to the top row to lock the row to a fixed height. In the Data panel, drag GetDataCommand (from under ProductPhotosCollection) into the top-left grid cell on the artboard. In the drop-down list that appears, click Button. In the Create Data Binding dialog box, in the Property of drop-down list, choose Command, and then click OK. This action creates a new button that is bound to the GetDataCommand accessor method in your AWDataSource class. At run time, when the button is clicked, it performs the GetDataCommand on the ProductPhotosCollection data source, and, as in the second task in this topic, the implementation of that command calls the GetData method. With [Button] selected in the Objects and Timeline panel, look for the Content property under CommonProperties in the Properties panel. Set the Content property by entering the text Get Product Photos, and then press ENTER. Move and resize the [Button] element by clicking the Selection tool in the Tools panel and then using the adorners on the artboard. Make [Button] fit into the top-left grid cell. Then, under Layout in the Properties panel, set the following properties: Set the Width and Height properties to Auto. Set the Margin properties to 0. Set the HorizontalAlignment and VerticalAlignment properties to Center. These settings make sure that the button is only as large as it has to be to fit the text in the Content property, and the settings also center the button in the grid cell. In the Data panel, drag ProductPhotos (Array) into the lower-left grid cell on the artboard. In the drop-down list that appears, click ListBox. In the Create Data Binding dialog box, in the Property of drop-down list, choose ItemsSource, and then click OK. In the Create Data Template dialog box, select the New Data Template and Display Fields radio button. This option defines the structure of the data type that you dragged from the Data palette (for example, each element in a collection of ProductPhoto objects). You can now bind to any parts of the data structure and therefore define what the data template's tree of elements looks like. Next to each data item is a drop-down list that determines the element that will be used to present the data field (StackPanel and TextBlock elements). Next to that is a label that indicates to which of the properties of the element the data item will be bound. Clear the LargePhoto option because you want to display it only in the ListBox. The ModifiedDate data field is currently of type StackPanel, but you have to change the control to an element type that is more appropriate for displaying that data type. In the drop-down list next to ModifiedDate, choose TextBlock. The label automatically changes to Text. The ThumbNailPhoto data field is currently of type ImageSource, but you have to change the control to an element type that is more appropriate for displaying that data type. In the drop-down list next to ThumbNailPhoto, choose Image. The label automatically changes to Source. Click OK. This inserts a new ListBox into the document. With the [ListBox] element selected in the Objects and Timeline panel, under Layout in the Properties panel, do the following: Set the Width and Height properties to Auto. Set the Margin property to 8. Set the HorizontalAlignment and VerticalAlignment properties to Center. These settings make sure that the ListBox almost completely fills the lower-left grid cell. In the Tools panel, select Image . On the artboard, draw a new Image in the lower-right grid cell, almost filling the whole cell. With [Image] selected in the Objects and Timeline panel, look for the Source property under Common Properties in the Properties panel. Click the name of the Source property, and then in the drop-down list that appears, select Data Binding. In the Create Data Binding dialog box, select the Element Property tab, because you are going to bind the data to a property of the [ListBox] element. Under Scene Elements, expand Window and LayoutRoot, and then select your ListBox ([System.WIndows.Controls.ListBox]). In the Show drop-down list, select All Properties. This displays all properties that are available to be bound to, not just those of the same data type as the Source property (String). Under Properties, select SelectedItem : (Object). Select the Use a custom path expression check box. The default expression is SelectedItem. Change it to SelectedItem.LargePhoto so that you are binding to the LargePhoto member of the currently selected ProductPhoto object. Click Finish. On the Project menu, click Test Project (or press F5). When the application starts, test the application by clicking Get Product Photos. When the list box loads with data, step through its items and verify the large photo that appears in the right column. The finished application
http://msdn.microsoft.com/en-us/library/cc294789(d=printer,v=expression.30).aspx
CC-MAIN-2014-52
refinedweb
2,598
56.05
XSLT Stylesheet Scripting Using <msxsl:script> The XslTransform class supports embedded scripting using the script element. The XslTransform class supports embedded scripting using the script element. When the style sheet is loaded, any defined functions are compiled to Microsoft intermediate language (MSIL) by being wrapped in a class definition and have no performance loss as a result. The <msxsl:script> element is defined below: where msxsl is a prefix bound to the namespace urn:schemas-microsoft-com:xslt. The language attribute is not mandatory, but if specified, its value must be one of the following: C#, VB, JScript, JavaScript, VisualBasic, or CSharp. If namespace can be defined somewhere in a style sheet. Because the msxsl:script element belongs to the namespace urn:schemas-microsoft-com:xslt, the style sheet must include the namespace declaration xmlns:msxsl=urn:schemas-microsoft-com:xslt. If the caller of the script does not have UnmanagedCode access permission, then the script in a style sheet will never compile and the call to Load will fail. If the caller has UnmanagedCode permission, the script compiles, but the operations that are allowed are dependent on the evidence that is supplied at load time. If you are using one of the Load methods that take an XmlReader or XPathNavigator to load the style sheet, you need to use the Load overload that takes an Evidence parameter as one of its arguments. To provide evidence, the caller must have ControlEvidence permission to supply Evidence for the script assembly. If the caller does not have this permission, then they can set the Evidence parameter to null. This causes the Load function to fail if it finds script. The ControlEvidence permission is considered a very powerful permission that should only be granted to highly trusted code. To get the evidence from your assembly, use this.GetType().Assembly.Evidence. To get the evidence from a Uniform Resource Identifier (URI), use Evidence e = XmlSecureResolver.CreateEvidenceForUrl(stylesheetURI). If you use Load methods that take an XmlResolver but no Evidence, the security zone for the assembly defaults to Full Trust. For more information, see SecurityZone and Named Permission Sets. Functions can be declared within the msxsl:script element. The following table shows the namespaces that are supported by default. You can use classes outside the listed namespaces. However, these classes must be fully qualified., you must use the correct syntax for the language in use. For example, if you are in a C# script block, then it is an error to use an XML comment node <!-- an XML comment --> in the block. The supplied arguments and return values defined by the script functions must be one of the World Wide Web Consortium (W3C) XPath or XSLT types. The following table shows the corresponding W3C types, the equivalent .NET Framework classes (Type), and whether the W3C type is an XPath type or XSLT type. If the script function utilizes one of the following numeric types: Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, or Decimal, they are forced to Double, which maps to the W3C XPath type number. All other types are forced to a string by calling the ToString method. If the script function utilizes a type other than the ones mentioned above, or if the function does not compile when the style sheet is loaded into the XslTransform object, an exception is thrown. When using the msxsl:script element, it is highly recommended that the script, regardless of language, be placed inside a CDATA section. For example, the following XML shows the template of the CDATA section where your code is placed. It is highly recommended that all script content be placed in a CDATA section, because operators, identifiers, or delimiters for a given language have the potential of being misinterpreted as XML. The following example shows the use of the logical AND operator in script. This throws an exception because the ampersands are not escaped. The document is loaded as XML, and no special treatment is applied to the text between the msxsl:script element tags.() { //Create the XslTransform and load the style sheet. XslTransform xslt = new XslTransform(); xslt.Load(stylesheet); //Load the XML data file. XPathDocument doc = new XPathDocument(filename); //Create an XmlTextWriter to output to the console. XmlTextWriter writer = new XmlTextWriter(Console.Out); writer.Formatting = Formatting.Indented; //Transform the file. xslt.Transform(doc, null, writer, null);>
https://msdn.microsoft.com/en-us/Library/533texsx(v=VS.100).aspx
CC-MAIN-2015-18
refinedweb
724
55.54
Subscribe in a reader In this article, I want to show how you can setup your menus in code-behind and avoid redundancy. I recently inherited a web application with the menu system setup in the code-in-front. Each menu shared identical values, other than the visibility. Notice that numerous properties are defined more than once, above and below the MenuItems. What's as bad is that this entire block of code was repeated for 6 additional menus. Example of Redundant, Bloated Menu Setup <asp:Menu <Items> <asp:MenuItem>...</asp:MenuItem> </Items> <StaticMenuItemStyle Font- <DynamicMenuStyle BackColor="#F2F8FF" BorderColor="LightSkyBlue" BorderStyle="Solid" BorderWidth="1px" /> <DynamicMenuItemStyle Font- <StaticMenuStyle HorizontalPadding="4px" /> <DynamicHoverStyle BackColor="Wheat" Font- <DynamicSelectedStyle BorderStyle="None" /> </asp:Menu> If a property needs changed, the programmer must make certain that the change is made in all six instances of this block of code, for each menu, as well as realizing that certain properties are defined twice. This type of coding can introduce problems after updates and edits. To optimize this menu, let's move this to the code-behind so that if a change is needed, it is only needed in one line of code. In our code-in-front, we'll simply setup the menus like so: Streamlined Menu <asp:Menu <Items> <asp:MenuItem>...</asp:MenuItem> </Items> </asp:Menu> <asp:Menu <Items> <asp:MenuItem>...</asp:MenuItem> </Items> </asp:Menu> ...and so forth In our codebehind, we'll setup each menu in our Page_Load and define one function that will set the properties for each menu: Protected Sub Page_Load(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles Me.Load MenuSetup(Menu1, True) MenuSetup(Menu2, True) MenuSetup(Menu3, True) MenuSetup(Menu4, False) MenuSetup(Menu5, False) MenuSetup(Menu6, False) End Sub Remember I said that the only difference in these menus was the visibility? So in our MenuSetup function we'll pass in the menu ID, as well as whether the menu should be visible or not. I've altered some of the values from what you see in the original code-in-front, but you get the idea. Be sure to import System.Drawing for the colors. Protected Sub MenuSetup(ByVal myMenu As Menu, _ ByVal visibility As Boolean) myMenu.Visible = visibility myMenu.BorderColor = Drawing.Color.Black myMenu.BorderWidth = Unit.Pixel(1) myMenu.Font.Underline = False myMenu.Width = Unit.Pixel(90) myMenu.StaticMenuStyle.HorizontalPadding = Unit.Pixel(4) myMenu.StaticMenuItemStyle.Font.Bold = True myMenu.StaticMenuItemStyle.Font.Name = "Verdana" myMenu.StaticMenuItemStyle.Font.Size = "10" myMenu.StaticMenuItemStyle.Font.Underline = False myMenu.StaticMenuItemStyle.ForeColor = Color.White myMenu.DynamicMenuStyle.BorderWidth = Unit.Pixel(1) myMenu.DynamicMenuItemStyle.Font.Bold = True myMenu.DynamicMenuItemStyle.Font.Name = "Verdana" myMenu.DynamicMenuItemStyle.Font.Size = "8" myMenu.DynamicMenuItemStyle.BorderWidth = Unit.Pixel(1) myMenu.DynamicMenuItemStyle.BorderColor = _ ColorTranslator.FromHtml("#CCCCCC") myMenu.DynamicMenuItemStyle.BorderStyle = BorderStyle.None myMenu.DynamicMenuItemStyle.VerticalPadding = Unit.Pixel(4) myMenu.DynamicMenuItemStyle.HorizontalPadding = Unit.Pixel(4) myMenu.DynamicMenuItemStyle.ForeColor = Color.Black myMenu.DynamicMenuItemStyle.BackColor = _ ColorTranslator.FromHtml("#F0F2F4") myMenu.DynamicHoverStyle.BackColor = _ ColorTranslator.FromHtml("#CCCCCC") myMenu.DynamicHoverStyle.ForeColor = _ ColorTranslator.FromHtml("#00008B") myMenu.DynamicSelectedStyle.ForeColor = _ ColorTranslator.FromHtml("#00008B") End Sub Notice some of the differences when defining properties in the code-behind: View State allows you to retain page property values, such as string and numeric types, between postbacks. You may also store class objects in View State, but you must first add the Serializable attribute. If you do not add the Serializable attribute, you will receive this error when trying to add the object to View State: Type 'SuchAndSuch.ThisAndThat' in Assembly 'SuchAndSuch, Version 1.0.0.0, Culture=neutral, PublicKeyToken=null' is not marked as serializable. Here is an example of adding Serializable to a class: <Serializable()>_Public Class aMenu Public MenuName As String Public MenuId as Integer Public Url as String Public Sub New(ByVal menuName as String, ByVal menuId as Integer, ByVal url as String) MenuName = menuName MenuId = menuId Url = url End Sub End Class The aMenu class can now be added to View State: Dim myMenu as New aMenu("Home",1,"/default.aspx")ViewState("myMenu") = myMenu To use the aMenu object: If ViewState("myMenu") IsNot Nothing then Dim myMenu as aMenu myMenu = DirectCast(ViewState("myMenu"),aMenu)End If May your dreams be in ASP.NET! Nannette Thacker Ash explains the concept of Filtering Parameters in a Stored Procedure in this blog post. This method is safer and more beneficial than dynamically creating and passing a sql query from the code layer and using sp_executesql, as it helps to avoid sql injection attacks. However, the author explains there is a pitfall because you may sacrifice index optimization. Check it out! May your dreams be in ASP.NET and your code free from sql injections! Nathan Barry posted a new article on How To Use Icons To Support Content In Web Design. Besides his design insights, he also provides images that depict actual live website examples and links to those sites. His tips include How to Use Icons, Purpose and Placement, Icon Styles, and numerous examples. Nannette I have been using Telerik controls for about a year now. First, on a client site project, and then I licensed it for my own development needs as a consultant. I have to say I am tickled with the Telerik support. 1) Telerik forums are great. Ask a question, get a quick answer from Telerik staff and users. Yes, the Telerik staff actually responds in the forums. You would not believe the number of forums I've posted to where the people/product supporting the forum do not participate. Talk about trying to get out of doing your job by making others do your work for you for free. Those types of forums typically do not get an answer at all! 2) Telerik support. Wow. They're on a completely different time zone, so it's expected I do have to wait 24 hours to get a response. But when the response arrives, wow. And why do I say wow? Well, there are numerous times I ask a question, and then the response says, "Here, try this zip project." I open up the project, and it's an exact project with my exact question and answer right there. They created it just for me, with my tables, fields, data, and everything! Wow! The other day, I saw a cool menu on a website. I wrote and asked, "Can your menu do this?" They sent me a zip project with their menu doing exactly what that one did. And it did it better! Yesterday, I wrote and asked how to populate a treeview with two tables as parent/child. I have done this before, but I had done it by putting the table in a generic list and manually building the tree. Today, at 3:27 am, they sent me a zipped file with my exact scenario, tables and all, showing me how to populate this tree with two tables. Much simpler than what I was going to do. This wasn't a little zip file they keep on hand and send out to everyone. It was customized with my exact solution! Wow. And they've done this in the past. I've turned in more than 50 support requests and I always get great responses. So yeah, with all the cruddy support out there, I just wanted to take a moment and thank Telerik and tell you about their wonderful support. May your dreams be in ASP.NET and your controls be Rad! If you're a VB.NET developer learning C# or converting your VB code to C#, here are a few hints, tips and gotchas. But first, let me share a few important links with you: VB.NET and C# Comparison - This is one of the most accurate and complete charts I've seen comparing VB.NET with its C# equivalent. Convert VB.NET to C# and Convert C# to VB.NET - developerFusion provides these free online utilities to automatically convert VB.NET to C# and C# to VB.NET. It also supports the .NET 3.5 framework. There is no download required. Just simply use the online form. Telerik also provides a utility to Convert VB to C# or C# to VB. However, Linq is not yet supported in either converter as of this writing. LearnVisualStudio.net provides free Cheat Sheets for "C# Language Basics Cheat Sheet" and "Visual Basic Language Basics Cheat Sheet." The links to the zip files can be found at the bottom of the home page. I'll admit I've only been working with C# since 2007 and the majority of my projects have been in VB.NET. But I finally decided to write this post on some of the gotchas I have found when working with C# versus VB. Option Strict On By default, VB.NET sets Option Strict Off which allows backward compatibility with older Visual Basic legacy code. C# was originally written with the same type checking functionality as is performed in VB with Option Strict On. Therefore when converting your VB to C#, there is no option for Option Strict On. For further information on OPTION STRICT ON in VB.NET, please see this article by Michael McIntyre. References and Namespace Gotchas C# is a lot more picky about referencing namespaces. For instance, in VB.NET you can access the Drawing.Colors like so without including the namespace: validator.ForeColor = Drawing.Color.BlueViolet However, in C#, even if you include the System namespace: using System; you cannot access a Drawing.Color member like so: validator.ForeColor = Drawing.Color.BlueViolet; you must use: using System.Drawing; and access the colors this way: validator.ForeColor = Color.BlueViolet; So even though you are "using System;" you would expect that you could use the Drawing namespace by calling Drawing.Color.BlueViolet as you can in VB. But it seems C# requires the namespace be added in a using directive and then you can call the class and its members. If any C# gurus have any further comments or links to shed more light on this, I'd appreciate it. With Construct There is no With construct in C#: VB:With validator .ForeColor = Drawing.Color.BlueViolet;End With C#:validator.ForeColor = Color.BlueViolet; CInt versus (int) versus Convert.ToInt32 In VB.NET you can use CInt() for data type conversions: If CInt(myString) <> NUMERICCONSTANT Then Of course this will also work: If (Convert.ToInt32(myString) <> NUMERICCONSTANT ) Then However, in C#, this will error: if ((int)myString != NUMERICCONSTANT) But this will work: if (myString != Convert.ToString(NUMERICCONSTANT)) See this article for a great comparison of String to Integer Conversion Methods in VB.NET. Microsoft.VisualBasic Namespace There are a few functions that are not available in C#. However, you can accomplish this same functionality in C# by including the Microsoft.VisualBasic Namespace. In your C# project, add a Reference to the Microsoft.VisualBasic component. Then add this directive to your class: using Microsoft.VisualBasic; Now you can produce the equivalent of this VB code: If IsNumeric(myString) Then in C#: if (Information.IsNumeric(myString)) { Also, this VB: Microsoft.VisualBasic.Chr(9) Can now be used in C# as: Strings.Chr(9) Other examples using Left and Len: VB:clientPath = Left(validFile.FileName, Len(validFile.FileName) - Len(validFile.GetName())) C#:clientPath = Strings.Left(validFile.FileName, Strings.Len(validFile.FileName) - Strings.Len(validFile.GetName())); DateTime can't be Declared as a Constant in C# In C#, a Date cannot be declared as a constant. VB:Public Const INVALIDDATE As Date = #1/1/1753# However, in C#, you may reproduce the same functionality by using either of these: C#:public readonly DateTime INVALIDDATE = DateTime.Parse("01/01/1753");orpublic static readonly DateTime INVALIDDATE = Convert.ToDateTime("01/01/1753"); () and Method Calls In VB you can get away with calling a method call without ()'s. However, C# requires () after method calls. For instance, this will work in VB:control.GetType But in C#, you must have the parens:control.GetType() Enumeration Conversions In VB if you want to retrieve the numeric value of an enumerator, by default the enumerator returns the numeric value: Enum Layout Green = 1 Blue = 2 End Enum Layout.Green will return 1. Layout.Green.ToString() will return "Green" C# does the opposite. Layout.Green will return "Green" To retrieve the numeric value, you must convert it to a numeric value first: Convert.ToInt32(Layout.Green) or: (int) Layout.Green In VB if you wish to retrieve the string equivalent of the numeric value, you must first convert the enumerator to an int, then convert the returned value to a string. You might do it this way, which won't work in C#: CInt(Layout.Green).ToString But in C#, the equivalent is: Convert.ToString(Convert.ToInt32(Layout.Green)) Select / Case versus Switch / Case and Constants In C#, the Switch/Case requires constants, so forget using the Switch / Case with your enumerator comparisons. In VB, you can do this: Select Case layoutName Case Layout.Green.ToString Return CInt(Layout.Green) End Select But in C#, the converters will convert it to this, which is not supported and will error: switch (layoutName) { case Layout.Green.ToString: return (int)Layout.Green; } Instead, you'll need to convert your switch statement to an if/else statement: if (layoutName == Convert.ToString(Layout.Green)) return Convert.ToInt32(Layout.Green); So in C#, only constants will work, such as: case 25:orcase "male":... et cetera. CStr versus (string) VB:If CStr(myID) <> "25" Then C#:if ((string)myID != "25") Linebreaks with Microsoft.VisualBasic.Chr(13) versus Environment.NewLine If you have been using Microsoft.VisualBasic.Chr(13) to achieve a line break when creating javascript blocks, for instance, this will not work in C# unless you create a reference to the Microsoft.VisualBasic component and change it to Strings.Chr(13). However, for both VB and C# you can achieve the same effect by using Environment.NewLine. Global Constants In VB, you can create Global Constants using a Public Module and call the constant directly by name if you include the namespace. In C#, you must include the class name when using the constant. For instance, in VB, you can create a public module: Public Module MyConstants Public Const SUCCESS As Integer = 1End Module Then throughout your application you can use SUCCESS in any class as a global constant. In C#, you must define a public static class: public static class MyConstants { public const int SUCCESS = 1; } And when using the constant, preface it with the class name: MyConstants.SUCCESS VB:Return (value <> SUCCESS) C#:return (value != MyConstants.SUCCESS); Keywords When using the code converters, sql keywords will be prefaced with @ in C#. For instance the "from" key word: VB:byval from as string C#:string @from I would suggest changing variable names to non-keywords, such as:string fromname Web.UI.Control In VB you can have this:Dim myControl As Web.UI.Control = control.Parent In C# the equivalent is:Control myControl = control.Parent; If you have any further comments to share, please do. Educate Me! And may your dreams be in ASP.NET! Since I had purchased the SQL Server 2008 Web Edition for my database server, I decided to also install it on my development box. But when I tried to install the Management Tools, it errored with: "Previous release of Microsoft Visual Studio 2008." I click the "failed" link and it complained... "Upgrade MS Visual Studio 2008 to the SP1 before installing sql server 2008." I researched this error and was told this occurs if you had a previous beta version of Visual Studio 2008 and that you also needed SP1. Well, I had never used a beta version of VS2008 and I had SP1 installed, so that couldn't be the problem. So I uninstalled the SQL Server 2005 Express Edition that came with my Visual Studio 2008. I then installed SQL Server 2008 Web Edition and all was well... yay! One problem fixed. Fixed until I needed to create an express MDF database or work with an existing one. So I installed the SQL Server 2008 Express Edition database and when I clicked the App_Data directory to add a new SQL Server database, I received the error: "failed to generate a user instance of sql server due to a failure in starting the process" Googling the phrase (and don't you love Google's intellisense in their search box that finishes your typing for you?!) brings up 15,200 results, with many, many potential solutions. Even the potential solutions have more potential solutions in the user comments. "I did this..." but "that didn't work for me, so I did this..." and so on. So I decided to hopefully restrict it to the 2008 version by adding 2008 to the front of the error message. Fortunately, I landed on this blog post first: Errror Message : Failed to generate a user instance of SQL Server due to a failure in starting the process for the user instance The author, (and why is it so hard to find author's names on their blogs!!!), says to delete this directory: 1) C:\Documents and Settings\username\Local Settings\Application Data\Microsoft\Microsoft SQL Server Data\SQLEXPRESS 2) and reboot your computer. Did it work for me? Yes! I hope sharing this find with you is helpful. I muddled through a lot of "solutions...." some that were pages long, involved permissions, authorities, command line changes, and who knows what else. I didn't try any of them. I wanted to try something simple first. This was very simple and voila! May your dreams be in ASP.NET and your SQL Servers run smoothly! I'm going to demonstrate how to add javascript events programmatically in codebehind using the Attributes.Add method. You may want to add your javascript attributes programmatically so that you can populate the values from a database. For demonstration purposes, I'm going to add javascript click events to an image. Let's start out with this cute little script contributed by Paul Kurenkunnas that I found on Javascript.Internet.Com (click to see it in action), which enlarges and reduces an image size on click and double-click. This javascript is simple enough that anyone can use it to see Attributes.Add in action. <img src="waterfall.jpg" width="150" height="200" onclick="this.src='waterfall.jpg';this.height=400;this.width=300" ondblclick="this.src='waterfall.jpg';this.height=200;this.width=150"> Next in our code in front, we place an image control: <asp:Image In the codebehind, we want to programmatically setup everything else. Notice I hard-code in the dimensions and URL, but you could easily retrieve the values from a database table and use them instead. This demo is primarily for the purpose of showing how to use the Attributes.Add and ResolveClientUrl methods: VB.net: Image1.ImageUrl = "~/site/images/MyImage.jpg"Dim imageSrc As String = ResolveClientUrl(Image1.ImageUrl)Image1.Attributes.Add("onclick", "this.src='" & _ imageSrc & "';this.height=600;this.width=400")Image1.Attributes.Add("ondblclick", "this.src='" & _ imageSrc & "';this.height=300;this.width=200") C#: Image1.ImageUrl = "~/site/images/MyImage.jpg"; string imageSrc = ResolveClientUrl(Image1.ImageUrl); Image1.Attributes.Add("onclick", "this.src='" + imageSrc + "';this.height=600;this.width=400"); Image1.Attributes.Add("ondblclick", "this.src='" + imageSrc + "';this.height=300;this.width=200"); The code generated within the browser is: <img id="ctl00_MainBody_Image1" onclick="this.src='../../../site/images/MyImage.jpg';this.height=600;this.width=400" ondblclick="this.src='../../../site/images/MyImage.jpg';this.height=300;this.width=200" src="../../../site/images/MyImage.jpg" style="border-width:0px;" /><br /> Notice the image path was originally setup with: "~/site/images/MyImage.jpg" We must use the ResolveClientUrl method to obtain a URL that can be used by the browser: ../../../site/images/MyImage.jpg Just FYI, don't put a height and width for the image or the resizing won't work. The Attributes.Add method can be added to numerous controls, such as images, buttons, comboboxes, labels, textboxes, radio buttons, checkboxes and more. Employers should encourage programmers to exercise and be fit, as a recent study found that those who are fit have four times less brain shrinkage than those who aren't. And seriously, that can only help you be a better programmer, right? A recent Reader's Digest blurb in the Health section tells of an Alzheimer's study. ." - Reader's Digest pg. 96, December 2008. Okay, so I stretched it a bit. But yes, I believe in being fit, and I believe in the advantages of proper diet and exercise. Personally, I feel this can be related to everyone's brain and general health. I think companies should encourage programmers to get outside at least once a day for a 10-15 minute walk. It would clear up the cricks in the necks, allow the mind to relax, and some of those tough problems might even get resolved while walking. We're told we should take a break from monitors every few hours, to avoid eye strain, so why not take a walk. The cigarette and coffee breaks don't count. And seriously, if management allows all the cigarette and coffee breaks, why don't they encourage exercise breaks? I think the trouble is peer pressure. No one wants to go outside and have people accuse us of goofing off. If management would encourage it, wouldn't that be great! What do you think? 3 John 1:2 Beloved, I wish above all things that thou mayest prosper and be in health, even as thy soul prospereth. May your dreams be in ASP.NET and may your Health be Excellent! Perhaps there are times when you just need a short-term ASP.NET developer for a 3-6 month project and don't wish to invest in a full-time employee. I am available for 1099 or W-2 consulting. No contract is required. Check out the details and my current rates, then feel free to contact me with your project details. Please contact me with your needs.
http://weblogs.asp.net/nannettethacker/default.aspx
crawl-002
refinedweb
3,682
58.58
Heap Shot is a graphical UI used to explore memory allocation patterns in an application. It processes log files generated using the standard profiling tools. HeapShot can either explore one snapshot of the heap, or it can be used to compare the objects in two separate snapshots from different points in time. Obtaining Heap Shot If Heap Shot does not have an installer or package for your operating system, it is relatively easy to build. The source code is located on github at . Once it has been checked out, you can simply run 'xbuild' in the heap-shot directory or you can open the solution file in either MonoDevelop or VisualStudio and build it there. If you are unfamiliar with Git, you should read the Git Faq for information on how to check out the code. Enabling the profiler Heap Shot relies on the Log Profiler shipped as part of Mono 2.10+ and also the sgen garbage collector to generate the required profiling data. To enable the profiler in heapshot mode you must run the application to be examined with the following command line: mono --gc=sgen --profile=log:heapshot MyProgram.exe This activates the sgen garbage collector and also the profiler in 'heapshot' mode. This will result in the profiler writing a dump of every live object at the end of every garbage collection to a log file called 'output.mldp'. Note: By default the log profiler will not overwrite an existing log file. You must either specify a different filename when launching the profiler, as described in the documentation or you must delete/rename existing logs before running the profiler. Using the GUI for HeapShot When you are happy that your application has run long enough to generate useful statistics, open the log file in Heap Shot. This can be done by clicking on 'Open' in Heap Shot and navigating to the file: Once a log file has been opened, you will be presented by a screen similar to this: The left hand side of the screen contains an entry for every heap snapshot. In this case there were two garbage collections before the application exited. By clicking on one of these snapshots and viewing the 'All objects' tab, you can quickly inspect a number of metrics such as: - The types that are being created. - Number of instance of the objects created (default sorting) - Memory used by these instances. - Average size of these objects. There are two ways to view the information in the heap: - Viewing which objects the current object references. - Viewing objects that reference the current type. The default mode is to display a list of types and as you expand each type you can see what objects are being referenced by that type and also the quantity of each object. You can also use the "Filter" function at the bottom to limit the display of types to a given type name or namespace. As you can see, the mono System.HashTable class references both System.Int32[] types and System.Collections.HashTable.Slot[] types. As the quantities of these three types are all the same, it'd be safe to assume that every System.HashTable creates one Slot[] and one Int32[] object. To view the objects that keep references to a given type, click on "Inverse references" at the bottom of the screen. This will allow you to see what types reference the current type so you can figure out why something is being retained in memory. The best way to do this is to double click on the type you are interested in and then click on "Inverse References". From this screenshot you can see which types store reference to System.String objects and the quantity of System.Strings that each type retains. This mode is invaluable when trying to figure out why objects you expect to have been GC'ed are actually still in memory. You can keep expanding the toggles to see why other objects are being kept alive. For example System.String is being kept alive by GLib.Signal. To see why GLib.Signal is kept alive you can simply expand that node. Alternatively you can double click on GLib.Signal and once again click on 'Inverse References' for that view. Visualizing Changes It is possible to examine which objects were created between two snapshots in time. To do this, snapshot the application twice, then set the checkbox on the snapshot that you want to use as a reference, and then select the second snapshot. The results displayed on the GUI will be only for the differences.
http://www.mono-project.com/HeapShot
CC-MAIN-2013-48
refinedweb
766
63.49
CodeGuru Forums > Visual Basic Programming > Visual Basic 6.0 Programming > Winsock (Socket Programming) Can Anyone Help? PDA Click to See Complete Forum and Search --> : Winsock (Socket Programming) Can Anyone Help? softweng May 25th, 2001, 12:03 PM I am trying to create a Class that can be used to communicate with a device using Modbus TCP protcol. Basically you just send a command out on the TCP/IP connection in a certain format. I have a C++ example and an OCX written by someone else that can do this. I don't know that much about C++ but it is pretty simple to do. You build a command string and send it out and wait for a response. The C++ and OCX work fine. I have a packet sniffer and looked at the data inside the packet and it is formated correctly (Using the C++ OCX). when I try to send the command string in the same format the winsock control formats it to hex and then the command is wrong. If I sniff the packet from my class the data is not correct. Is there a way to stuff the command string right into the TCP/IP packet without it reformatting the data?" I cannot figure out how to do this. If anyone has any Ideas or knows something about Socket Programming I would appreciate some help. I am a VB programmer that does not know much about C++ and how it passes data to a TCP socket. I have included here the C++ sample I found if anyone can tell me how to do the same in VB it would be a life saver. Thanks for any help you can provide!!!!!!! Here's the C++ code Sample. I have the whole OCX project in C++ too if it would help. // test1.cpp - Win32 console app to read registers // ============================================================ // test1.cpp 5/23/97 // example Win32 C++ program to read registers from PLC via gateway // compile with BC45 or BC50 // default settings for Win32 console app // empty DEF file #include <winsock.h> #include <stdio.h> #include <conio.h> int main(int argc, char **argv) { if (argc<5) * { *** printf("usage: test1 ip_adrs unit reg_no num_regs\n" *** "eg test1 198.202.138.72 5 0 10\n"); *** return 1; * } * char *ip_adrs = argv[1]; * unsigned short unit = atoi(argv[2]); * unsigned short reg_no = atoi(argv[3]); * unsigned short num_regs = atoi(argv[4]); * printf("ip_adrs = %s unit = %d reg_no = %d num_regs = %d\n", * ip_adrs, unit, reg_no, num_regs); * // initialize WinSock * static WSADATA wd; * if (WSAStartup(0x0101, &wd)) * { *** printf("cannot initialize WinSock\n"); *** return 1; * } * // set up socket * SOCKET s; * s = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP); * struct sockaddr_in server; * server.sin_family = AF_INET; * server.sin_port = htons(502); // ASA standard port * server.sin_addr.s_addr = inet_addr(ip_adrs); * int i; * i = connect(s, (sockaddr *)&server, sizeof(sockaddr_in)); * if (i<0) * { *** printf("connect - error %d\n",WSAGetLastError()); *** closesocket(s); *** WSACleanup(); *** return 1; * } * fd_set fds; * FD_ZERO(&fds); * timeval tv; * tv.tv_sec = 5; * tv.tv_usec = 0; * // wait for permission to send * FD_SET(s, &fds); * i = select(32, NULL, &fds, NULL, &tv); // write * if (i<=0) * { *** printf("select - error %d\n",WSAGetLastError()); *** closesocket(s); *** WSACleanup(); *** return 1; * } * // build request of form 0 0 0 0 0 6 ui 3 rr rr nn nn * unsigned char obuf[261]; * unsigned char ibuf[261]; * for (i=0;i<5;i++) obuf[i] = 0; * obuf[5] = 6; * obuf[6] = unit; * obuf[7] = 3; * obuf[8] = reg_no >> 8; * obuf[9] = reg_no & 0xff; * obuf[10] = num_regs >> 8; * obuf[11] = num_regs & 0xff; * // send request * i = send(s, obuf, 12, 0); * if (i<12) * { *** printf("failed to send all 12 chars\n"); * } * // wait for response * FD_SET(s, &fds); * i = select(32, &fds, NULL, NULL, &tv); //read * if (i<=0) * { *** printf("no TCP response received\n"); *** closesocket(s); *** WSACleanup(); *** return 1; * } * // read response * i = recv(s, ibuf, 261, 0); * if (i<9) * { *** if (i==0) *** { ***** printf("unexpected close of connection at remote end\n"); *** } *** else *** { ***** printf("response was too short - %d chars\n", i); *** } * } * else if (ibuf[7] & 0x80) * { *** printf("MODBUS exception response - type %d\n", ibuf[8]); * } * else if (i != (9+2*num_regs)) * { *** printf("incorrect response size is %d expected %d\n",i,(9+2*num_regs)); * } * else * { *** for (i=0;i<num_regs;i++) *** { ***** unsigned short w = (ibuf[9+i+i]<<8) + ibuf[10+i+i]; ***** printf("word %d = %d\n", i, w); *** } * } * // close down * closesocket(s); * WSACleanup(); * return 0; } Kris Software Engineer Phoenix,AZ jgonzale May 25th, 2001, 03:35 PM It looks like you're stuffing a VB ASCII string (of hex characters) into your packet. For example, you want 5 sets of 00's followed by a 06 in hex (00 00 00 00 00 06). But, your sniffed data is displaying 5 sets of 30's followed by 36. This is because an ascii value of 0 is decimal 48 and 30h(hex). ASCII 6 is 36h. You want the actual real values in your TCP/IP packet and not a string of hex values. So, I would not convert the numbers to hex strings, and if you are using strings you need to convert them to BYTE data type or some other way of getting just the value. I'm still new to VB so I'm not sure what the command is, Val() CByte() ?? Hope this helps. John coolbiz May 25th, 2001, 07:31 PM You parsing the command string incorrectly." The command string is already in hex and it seems like your converting it again to hex for each char in the string. Since a HEX is represented as 2 chars, so you actually have 12 bytes of of data. If you want to parse that information into a string then you can use the ASC() function by passing in each of the 2 chars. So to convert the sample above: dim szBuffer as String szBuffer = chr$(00) & chr$(00) & chr$(00) & chr$(00) & chr$(00) & chr$(06) szBuffer = szBuffer & chr$(01) & chr$(04) & chr$(00) & chr$(00) & chr$(00) & chr$(0A) Try it out and hopefully it works. -Cool Bizs coolbiz May 26th, 2001, 06:59 AM Sorry. There were typos in my prev reply. The correct syntax: So to convert the sample above: dim szBuffer as String szBuffer = chr$(&H00) & chr$(&H00) & chr$(&H00) & chr$(&H00) & chr$(&H00) & chr$(&H06) szBuffer = szBuffer & chr$(&H01) & chr$(&H04) & chr$(&H00) & chr$(&H00) & chr$(&H00) & chr$(&H0A) -Cool Bizs codeguru.com
http://forums.codeguru.com/archive/index.php/t-20147.html
crawl-003
refinedweb
1,064
77.57
Am 27.06.2012 15:55, schrieb Li Zhang: > On Wed, Jun 27, 2012 at 9:47 PM, Andreas Färber <address@hidden> wrote: >> Am 18.06.2012 11:34, schrieb Li Zhang: >>> Also instanciate the USB keyboard and mouse when that option is used >>> (you can still use -device to create individual devices without all >>> the defaults) >>> >>> Signed-off-by: Benjamin Herrenschmidt <address@hidden> >>> Signed-off-by: Li Zhang <address@hidden> >>> --- >>> hw/spapr.c | 43 ++++++++++++++++++++++++++++++++++++++++++- >>> 1 files changed, 42 insertions(+), 1 deletions(-) >>> >>> diff --git a/hw/spapr.c b/hw/spapr.c >>> index 8d158d7..c7b6e9d 100644 >>> --- a/hw/spapr.c >>> +++ b/hw/spapr.c >>> @@ -45,6 +45,8 @@ >>> #include "kvm.h" >>> #include "kvm_ppc.h" >>> #include "pci.h" >>> +#include "pc.h" >> >> This seems wrong for sPAPR. >> > pci_vga_init() is defined in pc.h which is called in the following. > > + } else if (std_vga_enabled) { > + pci_vga_init(pci_bus); Then we should move the declaration to a better place instead. :) We seriously shouldn't expect pc.h to build on random targets. Not sure what the function does, maybe it can be avoided by QOM? Alex? >>> @@ -510,6 +518,30 @@ static void spapr_cpu_reset(void *opaque) >>> cpu_reset(CPU(cpu)); >>> } >>> >>> +static int spapr_vga_init(PCIBus *pci_bus) >>> +{ >>> + /* Default is nothing */ >>> +#if 0 /* Enable this once we merge a SLOF which works with Cirrus */ >>> + if (cirrus_vga_enabled) { >>> + pci_cirrus_vga_init(pci_bus); >>> + } else >>> +#endif >>> + if (vmsvga_enabled) { >>> + fprintf(stderr, "Warning: vmware_vga not available," >>> + " using standard VGA instead\n"); >>> + pci_vga_init(pci_bus); >>> +#ifdef CONFIG_SPICE >>> + } else if (qxl_enabled) { >>> + pci_create_simple(pci_bus, -1, "qxl-vga"); >>> +#endif >>> + } else if (std_vga_enabled) { >>> + pci_vga_init(pci_bus); >>> + } else { >>> + return 0; >>> + } >>> + return 1; >>> +} >>> + >> >> Did you test whether all those paths actually work with ppc? SPICE >> didn't support ppc host last time I checked. Does it work on x86 host? > Currently, I test -vga std, it works well. > SPICE and curris are not supported on pcc. :) Please elaborate on this: ppc host or guest? If they don't work with sPAPR ppc guests there's little point in including the code here... Andreas -- SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
https://lists.gnu.org/archive/html/qemu-devel/2012-06/msg04494.html
CC-MAIN-2019-51
refinedweb
344
68.16
On 03/29/2010 06:54 PM, Jan Moringen wrote: [...] >> I've been asked to add features to sticky-func that are more like >> which-func, and as such the timing if this idea is good. Do you think >> it is worth merging the two, or keeping them separate? > > Good question. They do some similar things but other parts are > different. I made a small table: > > +---------+-----------------+------------------+-------------------+ > | |which-func | breadcrumbs |sticky-func | > +---------+-----------------+------------------+-------------------+ > |Displays |tag(s) under |tag(s) under point|top-most visible | > |what |point | |tag(?) | > +---------+-----------------+------------------+-------------------+ > |Displays |Mode-line |Header-line or |Header-line | > |where | |mode-line | | > | | |(planned) | | > +---------+-----------------+------------------+-------------------+ > |Update |Point motion, |Point motion, |Scroll, ? | > | |timer, ? |reparse | | > +---------+-----------------+------------------+-------------------+ Stickyfunc mode uses the header line format with :eval tag, and that gets called on redisplay. I don't know what magic it uses. > I don't think the presentation and logic can be unified in a single > mode. It's configuration and implementation would get very complex. > However, it should be possible to shared code and maybe get rid of > which-func-mode. Hmmm. I see the "what gets displayed" as having two configurations which could be mixed: 1. which tag gets summarized option 1: The tag under point option 2: The tag scrolled off the top of the screen 2. what the format of the summary is opt 1: Actual text of the tag opt 2: some semantic format tag function From a different angle, the name "stickyfunc" is about causing that raw text to be stuck to the top of the screen. As such, it is quite different from which-func or breadcrumbs. The big advantage to showing the tag under point with a reformatting function is it can include all the nested namespaces, which the raw text will not show. That can be very handy. Anyway, I know someone who wants your breadcrumbs feature, so it seems like a good idea, and users will just need to choose between semantic-stickyfunc-mode (mixed with semantic-highlight-func-mode for point under tag), or breadcrumbs. Eric View entire thread
http://sourceforge.net/p/cedet/mailman/message/24888340/
CC-MAIN-2014-42
refinedweb
337
71.85
You can subscribe to this list here. Showing 3 results of 3 This may be a broader question about relational information design, but I thought I'd put this out anyway. I would like to build an object, say a 'node', that has a number of related 'items' based upon a particular 'context'. In one context, for example, one particular node may 'contain' a certain of items, but contain other items in a different context. Here's how I have it defined in SQLObject. Does anyone see a better way to do this? Or, is there a better way to design associations between two sets of things that depends on the settings of a 3rd thing? class node(SQLObject): name = StringCol(length=32) itemAssociations = MultipleJoin('itemAssociation') def getItems(self, context): return [ia.item for ia in self.itemAssociations if ia.context == context] def getItems_ALT(self, context): # note: 'getItems' is 5-10% faster for small sets of items return [iaq.item for iaq in itemAssociation.select(AND( itemAssociation.q.nodeID==self.id, itemAssociation.q.contextID==context.id))] def associateItem(self, item, context): return itemAssociation.new(node=self, item=item, context=context) class itemAssociation(SQLObject): node = ForeignKey('node') item = ForeignKey('item') context = ForeignKey('context') class item(SQLObject): name = StringCol(length=32) class context(SQLObject): name = StringCol(length=32) For now, I have a 'compile' method in 'node' that takes a context object as its first arg and returns a node and its information/items into a dictionary where its 'items' key is a list of item dictionaries, too. Thanks, --T On Jan 8, 2004, at 10:33 PM, Javier Ruere wrote: >? An oversight then. SQLite should work just like MySQL. I thought there were unit tests that would have caught that, but apparently not. -- Ian Bicking | ianb@... | -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Ian Bicking wrote: > On Dec 28, 2003, at 1:35 PM, Javier Ruere wrote: > >> I've been away for a while but I see that BoolCol has been added. I >> have been unable to use it with SQLite. After looking the code and the >> tests, I think it is not supported for SQLite. Am I right? It would be >> really useful to me so if I can help with it, count on me. > > It should be working -- at least, I thought it was, but I don't have the > code in front of me at the moment (SF CVS and my laptop are > uncooperative). Non-Boolean-supporting databases (everything but > Postgres) just use integers (1/0). There's other details to how BoolCol > works, but those are database-agnostic.? Javier -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.2.2 (GNU/Linux) Comment: Using GnuPG with Thunderbird - iD8DBQE//i8g8XQC840MeeoRAgeJAKCFoy18b6oGvhjxORpQrPDGsuRPKQCfeefT gdO7gGezukq/kp3CTc8rxzI= =XCsl -----END PGP SIGNATURE-----
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?style=flat&viewmonth=200401&viewday=9
CC-MAIN-2015-48
refinedweb
454
57.47
18.1. Introduction: Test Cases¶ When we write functions that return values, we intend to use them over and over again. However, we want to be certain that they return the correct result. To be more certain these functions work correctly we write test cases. A test case expresses requirements for a program, in a way that can be checked automatically. Specifically, a test asserts something about the state of the program at a particular point in its execution. We have previously suggested that it’s a good idea to first write down comments about what your code is supposed to do, before actually writing the code. It is an even better idea to write down some test cases before writing a program. For example, before writing a function, write a few test cases that check that it returns an object of the right type and that it returns the correct values when invoked on particular inputs. You’ve actually been using test cases throughout this book in some of the activecode windows and almost all of the exercises. The code for them has been hidden, so as not to confuse you and also to avoid giving away the answers. Now it’s time to learn how to write code for test cases. To write a unit test, we must know the correct result when calling the function with a specific input. testEqual (from the test module) is a function that allows us to perform a unit test. It takes two parameters. The first is a call to the function we want to test ( square in this example) with a particular input (10 in this example). The second parameter is the correct result that should be produced (100 in this example). test.testEqual compares what the function returns with the correct result and displays whether the unit test passes or fails. Extend the program … On line 8, write another unit test (that should pass) for the square function. Note The test module is not a standard Python module. Instead, there are other more powerful and more modern modules, such as one called unittest which will be taught in more advanced courses. However, the test module offers a simple introduction to testing that is appropriate at this stage in the interactive text. Check your understanding test-1-1: When - True - A message is printed out, but the program does not stop executing - False - A message is printed out, but the program does not stop executing - It depends - A message is printed out, but the program does not stop executing test.testEqual()is passed two values that are not the same, it generates an error and stops execution of the program. test-1-2: Test cases are a waste of time, because the python interpreter will give an error message when the program runs incorrectly, and that’s all you need for debugging. - True - You might not notice the error, if the code just produces a wrong output rather generating an error. And it may be difficult to figure out the original cause of an error when you do get one. - False - Test cases let you test some pieces of code as you write them, rather than waiting for problems to show themselves later. - test.testEqual(blanked('under', 'du', 'u_d__')) - blanked only takes two inputs; this provides three inputs to the blanked function - test.testEqual(blanked('under', 'u_d__'), 'du') - The second argument to the blanked function should be the letters that have been guessed, not the blanked version of the word - test.testEqual(blanked('under', 'du'), 'u_d__') - This checks whether the value returned from the blanked function is 'u_d__'. test-1-3: Which of the following is the correct way to write a test to check that ‘under’ will be blanked as 'u_d__' when the user has guessed letters d and u so far? def blanked(word, revealed_letters): return word import test test.testEqual(blanked('hello', 'elj'), "_ell_") test.testEqual(blanked('hello', ''), '_____') test.testEqual(blanked('ground', 'rn'), '_r__n_') test.testEqual(blanked('almost', 'vrnalmqpost'), 'almost')
https://runestone.academy/runestone/static/fopp/TestCases/intro-TestCases.html
CC-MAIN-2018-51
refinedweb
673
70.43
Excellent error message from /?". Only problem is, it doesn't really tell you the problem, the problem in this case being an interfaced inheriting from IDatagramPortTypeChannel instead of IDialogPortTypeChannel We have two .NET trading application servers that use XmlSerializer, mostly because the market data we retrieve of the internal message bus is in XML format. Unfortunately it appears as if XmlSerializer was possibly not the best tool for the job, even with its own internal cache. We are currently processing 1000-1500 msgs/sec, which we estimate will raise to 5500-8000 msgs/sec over the next few months. However, its currently taking us on average 5mns to Deserialize() a message, which is to long - we really need it sub-millisecond. Heartbeats Yesterday we had an unfortunate issue where a trader's UI somehow lost its .NET remoting connection to a server, and hence their view of the market was stale - in the worst case this can cause a position to go south. We resolved the issue by leveraging some existing heartbeat infrastructure, which allowed us to notify the trader when their screen had not received any messages for a certain period (implying a loss of connection), and allowing a reconnection and refresh of all his data on-screen. Svr GC Our GC issues seem to have been resolved by the svr GC. Today we survived the morning volatile market, with memory remaining fairly constant - between 175-180MB. Remoting calls were stable (203,000 for the entire trading day), and we have an 8-10% Time in GC. The box is an IBM 8-CPU Intel Pentium 4 2.4 Mhz (Hyper-Thread) with 8G RAM. When running the wks GC on the same hardware, we saw a high usage of Gen 2, whereas with svr GC, it's the reverse. Gen 0 for the svr GC appears to run at 10Mb, whereas for the wks GC it was generally smaller. Gen 1 appears to be the same for wks and svr GC. Possibly the next book to buy Our .NET services were using rather a lot of memory over the last few days, we tracked part of the problem down to a missing "delete" in a shared unmanaged C++ assembly. We were also leaking memory due to a missing System::Runtime::InteropService::Marshal::FreeHGlobal call for a char* Indigo: My guess is we will see the next code drop of Indigo early next year. SOA offers an improvement over passing object references. Ryan Dawson has an interesting article on how Indigo could be used on Wall St. I really hope Indigo performs well given that its based on SOAP, as I have said many times in this blog, performance is key for Wall St. applications. Subscribe: Financial Technology blog Hardware: Sometimes, due to the RAD nature of trading projects, things get missed of the project plan. Today its lack of hardware - simulating a live market requires a certain amount of CPU power. GC Update: Due to a few performance/memory leak issues we've had recently, we haven't yet had time to try out the svr GC. Hopefully, we can schedule a test of the svr GC in the next few days. Thanks to Patrick for the solution to adding multi-instance and single-instance counters in the same category - you can't. On the work front, the last two week's performance related work appears to be slowly paying off. We should know for sure by Tuesday/Wednesday, but so far today we have seen a reduction in CPU usage from 40-60% last week to 15-20% this week, even with a 2.5x increase in load on our servers. A low context switch rate is less then 5,000 context switches per second, per processor. So, an 8-processor server with an overall context switch rate of 1,500 context switches per second would qualify as having a low context switch rate. Todays useless piece of code if you to want to know the number of processors you have available: ; } [DllImport("kernel32")]static extern void GetSystemInfo(ref SYSTEM_INFO pSI); SYSTEM_INFO pSI = new SYSTEM_INFO();GetSystemInfo(ref pSI);Console.WriteLine("Number of CPU's:" + pSI.dwNumberOfProcessors); Since I had this problem a long time ago, and keep getting asked about it, I'll blog the answer. We originally got this problem when we had some C# code in an AppDomain, other than the default AppDomain, call some unmanaged code (C++). The unmanaged code would then perform a callback into C# at sometime in the future. First you need to be aware of the Mixed DLL Loading Problem and linker warnings you might have seen when compiling unmanaged code. The System.DllNotFoundException will not normally occur in the above scenario if the default AppDomain is used. This is because when the unmanaged code makes a callback, it has no idea about the managed world (and AppDomain's), and so unless otherwise told, will invoke the callback into the default AppDomain. Hence the problem, if your original C# call to the unmanaged world was via another AppDomain, then the callback will be incorrect. The simplest way to solve the problem is to compile you unmanaged code with the Visual C++ compiler option "/clr:initialAppDomain". This should cause all calls from unmanaged threads to call back into the AppDomain that loaded the unmanaged code If a server has a MarshalByRefObject object that was create and return to a client, and the client drops its reference to the object, why does remoting still keep a hold of the object on the server? .NET Memory Profiler says one of my objects is held by ServerIdentity in the System.Runtime.Remoting namespace. On another note this .NET Memory Profiler gives a different view of the world to this one Marshalling DateTables is expensive, even with the binary formatter. One idea that appears to perform better for us is to do the following: DateSet ds = new DataSet();ds.Tables.Add(table)ds.WriteXML(stream, System.Data.XmlWriteMode.WriteSchema) Then take the XML, compress it, send it to the client, and reverse the process. Finally, calling Clear() prior to Dispose() on a DataTable makes a big different to the memory usage. For us, it was making a 5Mb difference to the memory footprint. We also have this SqlClient memory leak
http://weblogs.asp.net/mdavey/archive/2004/02.aspx
crawl-003
refinedweb
1,055
60.45
* A friendly place for programming greenhorns! Big Moose Saloon Search | Java FAQ | Recent Topics Register / Login JavaRanch » Java Forums » Java » Java in General Author Sort Directory paths, sort of. David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 07, 2001 16:59:00 0 I have a vector full of strings that resemble directory paths. I just need to sort them into an order similar to the one below, maintaining the parent child relationship. I was hoping there was an EASY way to do it. Like an algorithm that someone knew off the top of thier head. Thank you very much if you have it! Exhibit / David Exhibit / David / Car Exhibit / David / Car / Seat Exhibit / David / Car / Seat / Red Exhibit / David / Car / Seat / Blue Exhibit / David / Car / Seat / Green Exhibit / Eric Exhibit / Eric / Truck Exhibit / Eric / Truck / Engine Exhibit / Eric / Truck / Color Exhibit / Eric / Truck / Size Exhibit / Eric / Truck / Size / Big Exhibit / Eric / Truck / Size / RealBig Exhibit / Eric / Truck / Size / Tiny Manfred Leonhardt Ranch Hand Joined: Jan 09, 2001 Posts: 1492 posted Mar 07, 2001 17:59:00 0 Hi David, If you want something ordered (text being the easiest) I would place them all into some set (i.e., HashSet , TreeSet ) which will automatically order them for you. Then you just need to read them back out! Regards, Manfred. David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 08, 2001 02:24:00 0 I wish it were that easy. A set will return them ordered but not quite the way I need. I need to preserve the parent child relationship of a tree. A set does not seem to reliably do that. Is there such a thing as a tree sort or something similar? I've been trying to use a combination of a set and breaking each string apart using a StringTokenizer . It just seems to me that somewhere someone has already done this trickery. After all, directory structures are sorted somehow. Thanks for the help. Frank Carver Sheriff Joined: Jan 07, 1999 Posts: 6920 posted Mar 08, 2001 09:31:00 0 It looks to me as if these "paths" are just strings, sorted in their natural order. Here's a test program I wrote to demonstrate: import java.util.Arrays; public class Sorter { static String[] data = { "Exhibit / Eric / Truck / Size", "Exhibit / Eric / Truck / Size / Big", "Exhibit / David", "Exhibit / David / Car / Seat / Green", "Exhibit / David / Car / Seat / Blue", "Exhibit / David / Car / Seat / Red", "Exhibit / Eric / Truck / Engine", "Exhibit / Eric / Truck / Color", "Exhibit / Eric", "Exhibit / Eric / Truck", "Exhibit / Eric / Truck / Size / RealBig", "Exhibit / David / Car", "Exhibit / David / Car / Seat", "Exhibit / Eric / Truck / Size / Tiny" }; public static void main(String[] args) { // do the work! Arrays.sort(data); for (int i = 0; i < data.length; ++i) { System.out.println(data[i]); } } } Is there something I'm missing, or is it really this simple? Read about me at frankcarver.me ~ Raspberry Alpha Omega ~ Frank's Punchbarrel Blog David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 08, 2001 11:32:00 0 It comes close but no cigar. If you run the code above you do not always obtain the pattern I listed in my first posting. I tried sorting using a set and then using an array.sort. I think these basically do the same thing so that did not quit get me anywhere. I have seen examples of a binary tree sort that come closest to what I need but have not seen a lot of Java code fo such a sort. I was hoping I could just use a JTree to sort the info as a cheat. Any good web resources for Java sort algorithms? Frank Carver Sheriff Joined: Jan 07, 1999 Posts: 6920 posted Mar 09, 2001 03:51:00 0 I'm puzzled now. Can you give an example of a "wrong" ordering output from the code above, it always gives the same output for me. There's obviously some subtlety in the problem that I'm missing. I've always used this sort of sort for traversing tree structures (I use it in my XML parser, for example, where grouping subnodes is vital), so I'm very interested if you have found some sort of loophole. Cindy Glass "The Hood" Sheriff Joined: Sep 29, 2000 Posts: 8521 posted Mar 09, 2001 08:24:00 0 Well it isn't alphabetical, cuz the Red is before the Bue and Green, and Engine is before Color, so I am at a loss as to what the order is. "JavaRanch, where the deer and the Certified play" - David O'Meara Steve Fahlbusch Bartender Joined: Sep 18, 2000 Posts: 515 3 I like... posted Mar 09, 2001 10:48:00 0 Greetings David, Let's start by clarifying a few points, shall we? 1) What is the order of the items in the first post, or said another way, what determines the order since it seems not to be data within the strings, is it some other data that does not make it to the strings, such as the date/time when the path (subtree) was created? 2) Why and How does Frank's example not keep the parent - child relationship? 3) You said the Frank's example does not always get the correct results, could you please explain? It seems to me that either Frank's code will always work or never work for your intended results? David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 09, 2001 13:01:00 0 I think Frank may be right. I am using this in an XML context to develop an application. I need the sort to allow me to dynamically generate XSL. I'll look at it further this weekend and see if it will work. However, using the Array.sort() I recieved this result. exhibit/section exhibit/section/byline/title exhibit/section/para exhibit/section/section/byline exhibit/section/section/byline/subbyline And I was trying to get this result. exhibit/section exhibit/section/para ******* exhibit/section/byline/title exhibit/section/section/byline exhibit/section/section/byline/subbyline The difference is in the (exhibit/section/para) and (exhibit/section/byline/title) pair. It is subtle and may not matter. I will try it. Jim Yingst Wanderer Sheriff Joined: Jan 30, 2000 Posts: 18670 posted Mar 09, 2001 20:52:00 0 I think we all see where the difference is; the question is why do you think that "para" should come before "byline"? Is there some rule which you could explain to us (or your program) which explains why one ordering is "correct" and another is not? Offhand it seems as though it shouldn't matter; using alphabetical order is just the easiest sorting method to implement. If you can explain some other criteria to be used for sorting, it will be possible to create a java.util.Comparator class which can then be used to sort the classes the way you want (using the Arrays.sort(Object[], Comparator) method). "I'm not back." - Bill Harding, Twister David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 09, 2001 21:48:00 0 I'll try to explain. It may not make any difference for my purpose (I have not tested it yet), however, once I get an idea in my head I'm stubborn. Sorry. If section is the first path the next logical path (in my mind anyway) is direct children of section (that do not have children of thier own). "exhibit/para" would come before "exhibit/section/byline/title" because it has only one element past "exhibit/section", thus represents a direct child/descendant of "exhibit/section". If "exhibit/section" is considered the root, my sort would look for direct children of this root . After that the sort would look for direct children of the new root "exhibit/section/para", and so on. So, each path could be viewed as a parent while iterating through the sort. So, as you iterate down the list: - each string is a path that may have children (a root node) - if the path has children, order them by how directly/ closely they are related to the root. Most recent descendants coming first. I am having trouble just describing the logic, that's obviously why I am having trouble putting it into mathematical terms. David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 09, 2001 21:50:00 0 In addition to the above, descendants without children would come before descendants with children. Frank Carver Sheriff Joined: Jan 07, 1999 Posts: 6920 posted Mar 10, 2001 09:57:00 0 I think the sort which I have described will generate the results you require, iff all "branch" nodes are present. In your recent example, I note that there is no "exhibit/section/byline" or "exhibit/section/section" entries, although your original example was "complete" in this way. In most forms of tree traversal these nodes would have to be present for their children to be there. Is this deliberate? Are these entries missing from your input for some important reason? From your mention of XML, I guess the missing "tags" might not be shown because they have no significant (non-subtag) contents. For my own XML/string mapping I had to make sure these were stored to make the system work correctly. Also, just in case you haven't come across this yet, there is a big problem with representing XML trees in this sort of way. Most DTDs allow repeated nodes: < book > < title >blah< /title > < chapter >...< /chapter > < chapter >...< /chapter > < chapter >...< /chapter > < /book > which can cause a lot of problems unless you implement a way to handle or prevent this in your representation. [This message has been edited by Frank Carver (edited March 10, 2001).] David Shepherd Ranch Hand Joined: Mar 02, 2001 Posts: 35 posted Mar 10, 2001 15:18:00 0 The missing nodes is an excellent point. It is something I have overlooked. I will need to come up with a way of correcting that problem. If fact, it is something that would have been an issue and I'm glad you pointed it out to me now. Thank you. I should have seen that earlier. Sometimes the little things are hard to see. Thanks to everyone for the help. subject: Sort Directory paths, sort of. Similar Threads Tree Sort algorithm needed. Why abstract classes in java do not have objects, any strong reasons ? a little MINI to brighten up your day yet another classpath question (Visual Swing 4 Eclipse) JSP javascript images slideshow problem All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter JForum | Paul Wheaton
http://www.coderanch.com/t/367474/java/java/Sort-Directory-paths-sort
CC-MAIN-2013-48
refinedweb
1,793
70.33
Using the package database From FedoraProject , View Packages , and View Bugs links allow you to browse through a list of packages, looking for information on them. - My Packages lets you look at the list of packages you own. - Orphan Packages displays a list of packages which are orphaned in one or more active branches. - Project Page and Report Bugs Take you to the Package Database's project page for getting involved with development of the PackageDB code. What's the fastest way to find a package? Currently, there's no UI to search for a package or go directly to it. However, if you want to avoid having to go through the complete list of packages you can access a specific package by modifying the URL. Simply replace PACKAGENAME in the following URL with the name of the package you're interested in: If you'd like to help implement package search, have a look at this ticket . Logging in The upper right hand corner of the Package Database has a brief message that says You are not logged in yet Login. When you click on Login, you are asked to enter your Fedora have a button labeled Add myself to Packages that you can press to request access to a certain package on a certain branch. Once you've selected that button, you can check off the checkboxes for each acl you're interested in. -? Currently you have to ask the person you want to have access request acls on the package. When that happens, you will be sent an email to approve their request. You can then go to the package in the Package Database and approve their request. There are two tickets relevant to this in the PackageDB. If you'd like to help make this feature better, feel free to work on them: How do I find orphaned packages that I can take over? This information is currently scattered in several places. We are working on consolidating it in the packagedb: But information is currently also present in the wiki: If you'd like to work on migrating this information into the PackageDB, please see this ticket . Working with your packages How can I find my packages? The easiest way to find the packages you work with is to view the overview of your packages. This is accessed from This view shows you every package for which you have an acl in a non-EOL branch. (So a package for which you only have watchbugzilla in EPEL4 will show up. A package for which you are the owner in Fedora Core 3 will not.) How can I only list packages I'm comaintainer of? The packagedb allows filtering the list of packages. Using the filter box on is the recommended method. Enterprising users can change the URL directly. To use it, add ?acls=ACL1&acls=ACL2[...&acls=ACLN] to the end of the user url. ACL1 through ACLN can be any combination of the following: - owner: you are the package owner - commit: you are allowed to commit to the package - approveacls: you can approve and change acls on the package - watchbugzilla: you are watching bugzilla entries for this package - watchcommits: you are watching commits for this package Here's an example that lists what people usually think of as packages they're a comaintainer of: How can I list packages which I was interested in on an EOL branch? Add EOL=True to the end of the url. Here's an example that lists packages that you were the package owner including EOL branches:. If a page supports plain text, it can be retrieved that way by appending ?tg_format=plain to the Base URL. If it can be retrieved as JSON data, appending ?tg_format=json will retrieve the information in that way. The best way to retrieve this data is by using the client library present in python-fedora. yum install python-fedora then setup a pkgdb client object and call methods on it like this: from fedora.client import PackageDB # username and password are optional -- in general, only methods that # modify data will need them to have been specified pkgdb = PackageDB(username='me', password='XXXX') pkgdb.get_bugzilla_acls() Documentation for the pkgdb module of python-fedora is here:. Package Information Package Buglist User Package List Returns a list of packages belonging to the user with the ability to filter on certain acls via query params. The query params may be added to in the future and the URL location may change as well. Change Package Info This is the only method to change information in the database but the API is going to be changing radically.
http://fedoraproject.org/w/index.php?title=Using_the_package_database&oldid=193947
CC-MAIN-2014-10
refinedweb
780
70.73
1.2 anton 1: \ environmental queries 1.1 anton 2: 1.36 ! anton 3: \ Copyright (C) 1995,1996,1997,1998,2000,2003,2007,2012.35 pazsan 20: [IFUNDEF] cell/ : cell/ 1 cells / ; [THEN] 21: [IFUNDEF] float/ : float/ 1 floats / ; [THEN] 22: 1.3 pazsan 23: \ wordlist constant environment-wordlist 1.1 anton 24: 1.26 anton 25: vocabulary environment ( -- ) \ gforth 26: \ for win32forth compatibility 27: 28: ' environment >body constant environment-wordlist ( -- wid ) \ gforth 1.20 crook 29: \G @i{wid} identifies the word list that is searched by environmental 1.18 crook 30: \G queries. 1.26 anton 31: 1.3 pazsan 32: 1.10 anton 33: : environment? ( c-addr u -- false / ... true ) \ core environment-query 1.20 crook 34: \G @i{c-addr, u} specify a counted string. If the string is not 35: \G recognised, return a @code{false} flag. Otherwise return a 36: \G @code{true} flag and some (string-specific) information about 37: \G the queried string. 1.2 anton 38: environment-wordlist search-wordlist if 39: execute true 40: else 41: false 42: endif ; 43: 1.15 jwilke 44: : e? name environment? 0= ABORT" environmental dependency not existing" ; 1.13 jwilke 45: 1.27 jwilke 46: : $has? environment? 0= IF false THEN ; 1.14 jwilke 47: 1.27 jwilke 48: : has? name $has? ; 1.14 jwilke 49: 1.2 anton 50: environment-wordlist set-current 51: get-order environment-wordlist swap 1+ set-order 52: 53: \ assumes that chars, cells and doubles use an integral number of aus 54: 55: \ this should be computed in C as CHAR_BITS/sizeof(char), 56: \ but I don't know any machine with gcc where an au does not have 8 bits. 1.7 anton 57: 8 constant ADDRESS-UNIT-BITS ( -- n ) \ environment 1.19 crook 58: \G Size of one address unit, in bits. 1.2 anton 59: 1.18 crook 60: 1 ADDRESS-UNIT-BITS chars lshift 1- constant MAX-CHAR ( -- u ) \ environment 61: \G Maximum value of any character in the character set 1.1 anton 62: 1.18 crook 63: MAX-CHAR constant /COUNTED-STRING ( -- n ) \ environment 64: \G Maximum size of a counted string, in characters. 1.3 pazsan 65: 1.18 crook 66: ADDRESS-UNIT-BITS cells 2* 2 + constant /HOLD ( -- n ) \ environment 67: \G Size of the pictured numeric string output buffer, in characters. 68: 69: &84 constant /PAD ( -- n ) \ environment 70: \G Size of the scratch area pointed to by @code{PAD}, in characters. 71: 72: true constant CORE ( -- f ) \ environment 73: \G True if the complete core word set is present. Always true for Gforth. 74: 75: true constant CORE-EXT ( -- f ) \ environment 76: \G True if the complete core extension word set is present. Always true for Gforth. 77: 78: 1 -3 mod 0< constant FLOORED ( -- f ) \ environment 1.21 anton 79: \G True if @code{/} etc. perform floored division 1.18 crook 80: 81: 1 ADDRESS-UNIT-BITS cells 1- lshift 1- constant MAX-N ( -- n ) \ environment 82: \G Largest usable signed integer. 83: 84: -1 constant MAX-U ( -- u ) \ environment 85: \G Largest usable unsigned integer. 86: 87: -1 MAX-N 2constant MAX-D ( -- d ) \ environment 88: \G Largest usable signed double. 89: 90: -1. 2constant MAX-UD ( -- ud ) \ environment 91: \G Largest usable unsigned double. 92: 93: version-string 2constant gforth ( -- c-addr u ) \ gforth-environment 1.22 anton 94: \G Counted string representing a version string for this version of 95: \G Gforth (for versions>0.3.0). The version strings of the various 1.23 anton 96: \G versions are guaranteed to be ordered lexicographically. 1.1 anton 97: 1.18 crook 98: : return-stack-cells ( -- n ) \ environment 99: \G Maximum size of the return stack, in cells. 1.34 pazsan 100: [ forthstart 6 cells + ] literal @ cell/ ; 1.10 anton 101: 1.18 crook 102: : stack-cells ( -- n ) \ environment 103: \G Maximum size of the data stack, in cells. 1.34 pazsan 104: [ forthstart 4 cells + ] literal @ cell/ ; 1.10 anton 105: 1.18 crook 106: : floating-stack ( -- n ) \ environment 1.19 crook 107: \G @var{n} is non-zero, showing that Gforth maintains a separate 108: \G floating-point stack of depth @var{n}. 1.11 pazsan 109: [ forthstart 5 cells + ] literal @ 1.34 pazsan 110: [IFDEF] float/ float/ [ELSE] [ 1 floats ] Literal / [THEN] ; 1.10 anton 111: 1.6 anton 112: 15 constant #locals \ 1000 64 / 113: \ One local can take up to 64 bytes, the size of locals-buffer is 1000 1.5 anton 114: maxvp constant wordlists 1.2 anton 115: 116: forth definitions 117: previous 1.1 anton 118:
https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/environ.fs?annotate=1.36;hideattic=0;sortby=rev;f=h;only_with_tag=HEAD
CC-MAIN-2021-49
refinedweb
770
70.09
This is the program....I don't know how to start it....any help would be helpful.!!!! Write a short code that promts the user to enter any 3 by 3 matrix, and then calculates its inverse.!! This is the program....I don't know how to start it....any help would be helpful.!!!! Write a short code that promts the user to enter any 3 by 3 matrix, and then calculates its inverse.!! Heh, perhaps you should reread your sig. Anyway, what have you got so far?Anyway, what have you got so far? Code:#include <cmath> #include <complex> bool euler_flip(bool value) { return std::pow ( std::complex<float>(std::exp(1.0)), std::complex<float>(0, 1) * std::complex<float>(std::atan(1.0) *(1 << (value + 2))) ).real() < 0; } Here's a good place to start: Code:#include <iostream> using namespace std; int main() { // code here // also here } Claus Hetzer Compiler: Borland 5.5 (on Windows) Solaris CC (on Unix) Known Languages: C++, MATLAB, Perl, Java well heres an example of taking in three values, might help. Code:#include <iostream.h> int main() { int value1, value2, value3; cout<<"Enter a value: "; cin>>value1; cout<<"Enter a value: "; cin>>value2; cout<<"Enter a value: "; cin>>value3; cout<<"The values u enterd were "<<value1 <<" "<<value2<<" "<<value3<<"."<<endl; return 0; } use a multi- dimensional array... so instead of having all those values, you could pass them through a function as well. so something like this to start you off. int matrix ( int [] , int[], int[]); int main () { cout<<" Enter three values for horizontal array."; for ( int i = 0; i < 3 ; i ++ ) cin>>value [ i ] ; then for the second dimension , you copy and paste the above code. im not writing this for you, so figure out the the function to pass the int[ ] through HET TETRA REMEMBER THIS..... "i am supposed to write a program that will assign seats on a ten seat plane. dont worry, im not asking you to write code for." -TETRA I posted the code for the program...........did you ever look at IT???!!!!? Here it is again.... did this program a few days ago....here it is...the complete program......check if it works..... Code://air #include <iostream.h> int main() { int plane[11]={0}; int choice; int seatFound =1; int Firstclass =1; int Economy =6; int people =1; cout<< "Please type 1 for \"First Class\"" << endl; cout<< "Please type 2 for \"Economy\"" << endl; cin>> choice; while (choice != -1) { if(choice==1) { seatFound= 0; choice=1; while (choice<6) { if(plane[choice]<1 && Firstclass <=5) { seatFound=choice; plane[choice] =1;//put person in seat choice=33;//exit loop } else choice ++; people++; } if(seatFound>0)//test if seat is found { cout<<"Boarding Pass"<<endl; cout<<"-First Class-"<<endl; cout<<"Seat#" << seatFound<<endl; } else { cout << "Do you want a seat in economy? " << endl; cout << "\n1 = yes , 2 = no : "; cin >> choice; if ( choice == 2 ) { seatFound=0; choice=2; while(plane[choice]<11) { if(plane[choice] > 0 && Economy <= 10 ) { plane[choice]=2; // put someone in the seat seatFound=choice; choice=33; } else choice++; people++; } if(seatFound>0) //test if seat is found { cout<<"Boarding Pass"<<endl; cout<<"Economy Class"<<endl; cout<<"Seat#" << seatFound << endl; } } else { cout << "Next flight leaves in three hours..." << endl; } } } else //economy class { seatFound=0; choice=6; while(plane[choice]<11) { if(plane[choice] < 1) { plane[choice]=1; // put someone in the seat seatFound=choice; choice =2; } else choice++; people++; } if(seatFound>0) //test if seat is found { cout<<"Boarding Pass"<<endl; cout<<"Economy Class"<<endl; cout<<"Seat#" << seatFound << endl; } else { cout << "Next flight leaves in three hours..." << endl; } } cout<< "Please type 1 for \"First Class\" " << endl; cout<< "Please type 2 for \"Economy\" " << endl; cin>> choice; } return 0; } [code][/code]tagged by Salem No, you're right, I was too lazy to look at it before. I love your subject line! I think You must use Code Tags.. for posting codes...I think You must use Code Tags.. for posting codes...Originally posted by Little Tito I posted the code for the program...........did you ever look at IT???!!!!? Here it is again.... code:---------------------------------------------------------------------------- } code------------------------------------------------------------------------------ instead of using your own.. for easy posting codes Read this thread for help... Last edited by jawwadalam; 11-16-2002 at 07:53 PM. One day you will ask what more important to you.. I will say my life.. and You will leave me with even knowing that You are my Life (L)
https://cboard.cprogramming.com/cplusplus-programming/28692-cplusplus-program.html
CC-MAIN-2017-13
refinedweb
740
76.11
Here we raise the modelling abstraction level by passing an abstract datatype along a channel. sc_signal < bool > mywire; // Rather than a channel conveying just one bit, struct capsule { int ts_int1, ts_int2; bool operator== (struct ts other) { return (ts_int1 == other.ts_int1) && (ts_int2 == other.ts_int2); } int next_ts_int1, next_ts_int2; // Pending updates void update() { ts_int1 = next_ts_int1; ts_int2 = next_ts_int2; } ... ... // Also must define read(), write() and value_changed() }; sc_signal < struct capsule > myast; // We can send two integers at once. For many basic types, such as bool, int, sc_int, the required methods are provided in the SystemC library, but clearly not for user-defined types. void mymethod() { .... } SC_METHOD(mymethod) sensitive << myast.pos(); // User must define concept of posedge for his own abstract type. Future topic: TLM: wiring components together with methods instead of shared variables.
http://www.cl.cam.ac.uk/teaching/1213/SysOnChip/materials/sp2syscbasic/zhp792cc0d04.html
CC-MAIN-2017-30
refinedweb
127
57.67
From: Daniel Wallin (dalwan01_at_[hidden]) Date: 2003-12-13 06:35:28 David Abrahams wrote: > "Jonathan Turkanis" <technews_at_[hidden]> writes: > > >>I'm toying with the idea of allowing $1, $2, ... as alternatives to _1, _2, >>.... on platforms which support the dollar sign in identifiers. I assume the >>universal response to this would be that it is unworkable, dangerous, ugly >>and possibly a felony. >> >>Does anyone have a better idea? > > > Bite the bullet and use qualification. Or just reuse bind's placeholders. Unless you need to add members to the placeholders of course.. > That's what I did with MPL to stay out of the way of bind's > placeholders, which were in the unnamed namespace and clashed > with everything. The code still looks good enough to my eye. .. #define _1 boost::arg<1>() ;) -- Daniel Wallin Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/12/57567.php
CC-MAIN-2020-05
refinedweb
157
78.35
There are four principle concepts upon which object oriented design and programming rest. They are: 1. Abstraction 2. Polymorphism 3. Inheritance 4. Encapsulation (i.e. easily remembered as A-PIE). Abstraction refers to the act of representing essential features without including the background details or explanations. Encapsulation is a technique used for hiding the properties and behaviors of an object and allowing outside access only as appropriate. It prevents other objects from directly altering or accessing the properties or methods of the encapsulated object. 1. Abstraction focuses on the outside view of an object (i.e. the interface) Encapsulation (information hiding) prevents clients from seeing it's inside view, where the behavior of the abstraction is implemented. 2. Abstraction solves the problem in the design side while Encapsulation is the Implementation. 3. Encapsulation is the deliverables of Abstraction. Encapsulation barely talks about grouping up your abstraction to suit the developer needs. 1. Inheritance is the process by which objects of one class acquire the properties of objects of another class. 2. A class that is inherited is called a superclass. 3. The class that does the inheriting is called a subclass. 4. Inheritance is done by using the keyword extends. 5. The two most common reasons to use inheritance are: i. To promote code reuse ii. To use polymorphism. (Inheritance, Overloading and Overriding are used to achieve Polymorphism in java). Polymorphism manifests itself in Java in the form of multiple methods having the same name. 1. In some cases, multiple methods have the same name, but different formal argument lists (overloaded methods). 2. In other cases, multiple methods have the same name, same return type, and same formal argument list (overridden methods). When an 'assert' fails, the test will be aborted. Assert is best used when the check value has to pass for the test to be able to continue to run log in. Where if a 'verify' fails, the test will continue executing and logging the failure. Verify is best used to check non critical things. Like the presence of a headline element.. Method Overloading means to have two or more methods with same name in the same class with different arguments. The benefit of method overloading is that it allows you to implement methods that support the same semantic operation but differ by argument number or type. Note: 1. Overloaded methods MUST change the argument list 2. Overloaded methods CAN change the return type 3. Overloaded methods CAN change the access modifier 4. Overloaded methods CAN declare new or broader checked exceptions 5. A method can be overloaded in the same class or in a subclass Method overriding occurs when sub class declares a method that has the same type arguments as a method declared by one of its superclass. The key benefit of overriding is the ability to define behavior that's specific to a particular subclass type. Note: 1. The overriding method cannot have a more restrictive access modifier than the method being overridden (Ex: You can't override a method marked public and make it protected). 2. You cannot override a method marked final 3. You cannot override a method marked static"}; public class StringRecursiveReversal { String reverse = ""; public String reverseString(String str){ if(str.length() == 1){ return str; } else { reverse += str.charAt(str.length()-1)+reverseString(str.substring(0,str.length()-1)); return reverse; } } public static void main(String a[]){ StringRecursiveReversal srr = newStringRecursiveReversal(); System.out.println("Result: "+srr.reverseString("SeleniumWebdriver")); } } public class NumberReverse { publicint reverseNumber(int number){ int reverse = 0; while(number != 0){ reverse = (reverse*10)+(number%10); number = number/10; } return reverse; } public static void main(String a[]){ NumberReverse nr = newNumberReverse(); System.out.println("Result: "+nr.reverseNumber(17868)); } } Yes, derived classes still can override the overloaded methods. Polymorphism can still happen. Compiler will not binding the method calls since it is overloaded, because it might be overridden now or in the future. NO, because main is a static method. A static method can't be overridden in Java. To invoke a superclass method that has been overridden in a subclass, you must either call the method directly through a superclass instance, or use the super prefix in the subclass itself. From the point of the view of the subclass, the super prefix provides an explicit reference to the superclass' implementation of the method. super.overriddenMethod(); There will be situations where we have to choose eithrt to go for static or non static. Answer below question in yes or no format, based on the result you choose to go for Static/Non-Static 1. Are you going to Edit the frequently ? 2. Do you want to reflect the variable changes throughout your application ? 3. Do you want to create object for your Variable ? 4. Do you want to support Multi Threading ? 5. Is it ok to have duplicate variabes Ask these Questions for Methods to be Static/NonStatic: 1. Are You going to access Methods using class reference ? 2. Is it ok to have duplicate objects? 3. Do you want to suport Multi Threading ? 4. Are you planning for over riding ? 5. Are you planning for dynamic invoking using reflection api ? 6. Are yor going to write these methods with TestNG ? super is a keyword which is used to access the method or member variables from the superclass.. Note: 1. You can only go back one level. 2. In the constructor, if you use super(), it must be the very first code, and you cannot access any this.path variables or methods to compute its parameters. An interface is a description of a set of methods that conforming implementing classes must have. Note: 1. You can't mark an interface as final. 2. Interface variables must be static. 3. An Interface cannot extend anything but another interfaces. To prevent a specific method from being overridden in a subclass, use the final modifier on the method declaration, which means "this is the final implementation of this method", the end of its inheritance hierarchy. public final void exampleMethod() { // Method statements } You can't instantiate an interface directly, but you can instantiate a class that implements an interface. Yes, it is always necessary to create an object implementation for an interface. Interfaces cannot be instantiated in their own right, so you must write a class that implements the interface and fulfill all the methods defined in it. Interfaces may have member variables, but these are implicitly public, static, andfinal- in other words, interfaces can declare only constants, not instance variables that are available to all implementations and may be used as key references for method arguments for example. Only public and abstract modifiers are allowed for methods in interfaces. Marker interfaces are those which do not declare any required methods, but signify their compatibility with certain operations. Thejava.io.Serializable interface and Cloneable are typical marker interfaces. These do not contain any methods, but classes must implement this interface in order to be serialized and de-serialized. Abstract classes are classes that contain one or more abstract methods. An abstract method is a method that is declared, but contains no implementation. Note: 1. If even a single method is abstract, the whole class must be declared abstract. 2. Abstract classes may not be instantiated, and require subclasses to provide implementations for the abstract methods. 3. You can't mark a class as both abstract and final. An abstract class can never be instantiated. Its sole purpose is to be extended (subclassed). Use Interfaces when... 1. You see that something in your design will change frequently. 2. If various implementations only share method signatures then it is better to use Interfaces. 3. you need some classes to use some methods which you don't want to be included in the class, then you go for the interface, which makes it easy to just implement and make use of the methods defined in the interface. Use Abstract Class when... 1. If various implementations are of the same kind and use common behavior or status then abstract class is better to use. 2. When you want to provide a generalized form of abstraction and leave the implementation task with the inheriting subclass. 3. Abstract classes are an excellent way to create planned inheritance hierarchies. They're also a good choice for nonleaf classes in class hierarchies. Yes, other nonabstract methods can access a method that you declare as abstract. Yes, there can be an abstract class without abstract methods. 1. A constructor is a special method whose task is to initialize the object of its class. 2. It is special because its name is the same as the class name. 3. They do not have return types, not even void and therefore they cannot return values. 4. They cannot be inherited, though a derived class can call the base class constructor. 5. Constructor is invoked whenever an object of its associated class is created. If a class defined by the code does not have any constructor, compiler will automatically provide one no-parameter-constructor (default-constructor) for the class in the byte code. The access modifier (public/private/etc.) of the default constructor is the same as the class itself. No, constructor cannot be inherited, though a derived class can call the base class constructor. 1. Constructors use this to refer to another constructor in the same class with a different parameter list. 2. Constructors use super to invoke the superclass's constructor. If a constructor uses super, it must use it in the first line; otherwise, the compiler will complain. One of the techniques in object-oriented programming is encapsulation. It concerns the hiding of data in a class and making this class available only through methods. Java allows you to control access to classes, methods, and fields via so-called access specifiers Java offers four access specifiers, listed below in decreasing accessibility: 1. Public- public classes, methods, and fields can be accessed from everywhere. 2. Protected- protected methods and fields can only be accessed within the same class to which the methods and fields belong, within its subclasses, and within classes of the same package. 3. Default(no specifier)- If you do not set access to specific level, then such a class, method, or field will be accessible from inside the same package to which the class, method, or field belongs, but not from outside this package. 4. Private- private methods and fields can only be accessed within the same class to which the methods and fields belong. private methods and fields are not visible within subclasses and are not inherited by subclasses. The final modifier keyword makes that the programmer cannot change the value anymore. The actual meaning depends on whether it is applied to a class, a variable, or a method. 1. final Classes- A final class cannot have subclasses. 2. final Variables- A final variable cannot be changed once it is initialized. 3. final Methods- A final method cannot be overridden by subclasses. There are two reasons for marking a method as final: 1. Disallowing subclasses to change the meaning of the method. 2. Increasing efficiency by allowing the compiler to turn calls to the method into inline Java code. Static block which exactly executed exactly once when the class is first loaded into JVM. Before going to the main method the static block will execute.. A static variable is associated with the class as a whole rather than with specific instances of a class. Non-static variables take on unique values with each object instance.: 1. A static method can only call other static methods. 2. A static method must only access static data. 3. A static method cannot reference to the current object using keywords super or this. 1. The Iterator interface is used to step through the elements of a Collection. 2. Iterators let you process each element of a Collection. 3. Iterators are a generic way to go through all the elements of a Collection no matter how it is organized. 4. Iterator is an Interface implemented a different way for every Collection.hasNext() returns true. 3. Within the loop, obtain each element by calling next(). Iterator also has a method remove() when remove is called, the current element in the iteration is deleted. ListIterator is just like Iterator, except it allows us to access the collection in either the forward or backward direction and lets us modify an element 1. The List interface provides support for ordered collections of objects. 2. Lists may contain duplicate elements. The main implementations of the List interface are as follows : 1. ArrayList : Resizable-array implementation of the List interface. The best all-around implementation of the List interface. 2. Vector : Synchronized resizable-array implementation of the List interface with additional "legacy methods." 3. LinkedList : Doubly-linked list implementation of the List interface. May provide better performance than the ArrayList implementation if elements are frequently inserted or deleted within the list. Useful for queues and double-ended queues (deques). Some of the advantages ArrayList has over arrays are: 1. It can grow dynamically 2. It provides more powerful insertion and search mechanisms than arrays. 1. ArrayList internally uses and array to store the elements, when that array gets filled by inserting elements a new array of roughly 1.5 times the size of the original array is created and all the data of old array is copied to new array. 2.. Because,. If you need to support random access, without inserting or removing elements from any place other than the end, then ArrayList offers the optimal collection. If, however, you need to frequently add and remove elements from the middle of the list and only access the list elements sequentially, then LinkedList offers the better implementation. 1. The Set interface provides methods for accessing the elements of a finite mathematical set 2. Sets do not allow duplicate elements 3. Contains no methods other than those inherited from Collection 4. It adds the restriction that duplicate elements are prohibited 5. Two Set objects are equal if they contain the same elements The main implementations of the List interface are as follows: 1. HashSet 2. TreeSet 3. LinkedHashSet 4. EnumSet 1. A HashSet is an unsorted, unordered Set. 2. It uses the hashcode of the object being inserted (so the more efficient your hashcode() implementation the better access performance you'll get). 3. Use this class when you want a collection with no duplicates and you don't care about order when you iterate through it.. 1. A map is an object that stores associations between keys and values (key/value pairs). 2. Given a key, you can find its value. Both keys and values are objects. 3. The keys must be unique, but the values may be duplicated. 4. Some maps can accept a null key and null values, others cannot. The main implementations of the List interface are as follows: 1. HashMap 2. HashTable 3. TreeMap 4. EnumMap. Maps Provide Three Collection Views. 1. Key Set - allow a map's contents to be viewed as a set of keys. 2. Values Collection - allow a map's contents to be viewed as a set of values. 3. Entry Set - allow a map's contents to be viewed as a set of key-value mappings. KeySet is a set returned by the keySet() method of the Map Interface, It is a set that contains all the keys present in the Map. Values Collection View is a collection returned by the values() method of the Map Interface, It contains all the objects present as values in the map. Entry Set view is a set that is returned by the entrySet() method in the map and contains Objects of type Map. Entry each of which has both Key and Value. Create an implementation of the java.lang.Comparable interface that knows how to order your objects and pass it to java.util.Collections.sort(List, Comparator). The Comparable interface is used to sort collections and arrays of objects using the Collections.sort() and java.utils.Arrays.sort() methods respectively. The objects of the class implementing the Comparable interface can be ordered. The Comparable interface in the generic form is written as follows: interface Comparable. learned through everything in your site and I love it, just want to say thank you so much for your great work, God bless you and Chercher Tech :) Suggestion: Maybe you can add the Cucumber Framework and API Automation as well, they're getting popular nowadays
https://chercher.tech/java/java-interview-questions-1
CC-MAIN-2019-13
refinedweb
2,750
56.96
Problem code: PCYCLE We consider permutations of the numbers 1,..., N for some N. By permutation we mean a rearrangment of the number 1,...,N. For example 2 4 5 1 7 6 3 8 is a permutation of 1,2,...,8. Of course, 1 2 3 4 5 6 7 8 is also a permutation of 1,2,...,8. We can "walk around" a permutation in a interesting way and here is how it is done for the permutation above: Start at position 1. At position 1 we have 2 and so we go to position 2. Here we find 4 and so we go to position 4. Here we find 1, which is a position that we have already visited. This completes the first part of our walk and we denote this walk by (1 2 4 1). Such a walk is called a cycle. An interesting property of such walks, that you may take for granted, is that the position we revisit will always be the one we started from! We continue our walk by jumping to first unvisited position, in this case position 3 and continue in the same manner. This time we find 5 at position 3 and so we go to position 5 and find 7 and we go to position 7 and find 3 and thus we get the cycle (3 5 7 3). Next we start at position 6 and get (6 6) and finally we start at position 8 and get the cycle (8 8). We have exhausted all the positions. Our walk through this permutation consists of 4 cycles. One can carry out this walk through any permutation and obtain a set of cycles as the result. Your task is to print out the cycles that result from walking through a given permutation. Input format The first line of the input is a positive integer N indicating the length of the permutation. The next line contains N integers and is a permutation of 1,2,...,N. You may assume that N 1000. Output format The first line of the output must contain a single integer k denoting the number of cycles in the permutation. Line 2 should describe the first cycle, line 3 the second cycle and so on and line k+1 should describe the kth cycle. Examples Sample input 1: 8 2 4 5 1 7 6 3 8 Sample output 1: 4 1 2 4 1 3 5 7 3 6 6 8 8 Sample input 2: 8 1 2 3 4 5 6 7 8 Sample output 2: 8 1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 how can i print no of cycles at the starting of the output.??!!!! Could you be more specific? You may assume that N 1000?? N <= 1000 if 1241 is first cycle then can we visit 4 once again in next cycle?? I hate thsi runtime error. I am not getting any error on my system but when i upload i get this runtime error... anyone can please help me out ... ------------------------------------------------code ----------- #include <stdio.h>#include <stdlib.h>#include <string.h>int main(){ int numbers; int *per; int *cycleList; int i,temp,startNode; int index; int count; while(1) { scanf("%d",&numbers); if(numbers == 0) break; per = (int*)malloc(sizeof(int)*numbers); for(i=0; i<numbers; i++) { scanf("%d",&per[i]); } cycleList = (int*)calloc(numbers,sizeof(int)); //find the cycles. startNode = 0; count = 0; while(1) { index = startNode; count++; //STORE THE CYCLES while(cycleList[index] == 0) { cycleList[index] = count; index = per[index]-1; } //Find the next not travelled node; while(cycleList[startNode] != 0) { startNode++; if(startNode >=numbers) break; } if(startNode >=numbers) break; } //Print the result; printf("%dn",count); startNode = 0; temp = count; for(i = 1;i<=count;i++) { //print one cycle index = startNode; printf("%d ",index+1); index = per[index]-1; while(startNode != index) { printf("%d ",index+1); index = per[index]-1; } printf("%d n",index+1); while(cycleList[startNode] <= i) { startNode++; } } //free the memory; free(per); free(cycleList); }} This problem is not accepting C++(g++ 4.3.2) @everyone what is the output for the input which is not circular like 5 1 3 5 7 9 admin is the the input test cases must circular..??? You are given a permutation. That isn't a permutation. is there is space between the entered numbers.....??? is there is space between enter numbers??? is there is any space between output eg: 1241 or 1 2 4 1 which one is correct output..?? pls help... Read the sample input. It is there for a reason. When I submitted the solution to this problem, I got this error. What does this mean? :( internal error occurred in the system cycles obtained using this method are always disjoint...and disjoint cycles are commutative...then the order of the cycles to be printed in the ouput should not matter..does it??? if not..what should be the order?? (i want to confirm this before starting the problem) The order to print them is clearly described in the problem statement. IS THERE NEED TO PRINT "SAMPLE OUTPUT 1:" OR NOT....?? PLS HELP Please check my solution.. I think there is a problem of newline or space in output with my code Are there spaces between the numbers(in input/output)??? @ Rahul - at least one mistake is that your code can't work for N = 1000 for a rather obvious reason. @abcd - N can be up to 1000 - if there weren't spaces, how could you tell what the numbers were? @Stephen Merriman Thanks!,but that wasn't mentioned in the question... Sure it was; it says there are N integers on the line. If there were no spaces it wouldn't be N integers. I swear its giving the right answer, but still it shows wrong answer. My code: code removed by admin Do not post code here. You clearly haven't tested your code on large test cases with N=1000, say. Do so and your error will become obvious. @Stephen: I apologise for my mistake. Thanks. @Admin : please help me in the code ..seem to be working fine with the test cases as well <b> </b> please help...really @ admin yesterday i found that the responses to my submissions were followed by the size of the test cases. This was pretty useful in determining the mistakes in my submitted code. But for some reason this facility is not available today. It will be gr8 if you do something bout it. hey can anyone tell wats wrong with this, its working on my comp but am getting a runtime error when i submit the problem. import java.io.BufferedReader;import java.io.IOException;import java.io.InputStreamReader;import java.util.ArrayList;import java.util.List;import java.util.Scanner;/** * * @author sanket */class MyNumber{ public int value; public boolean mark; MyNumber(int value){ this.value = value; mark=false; }}public class Main{ private List list = new ArrayList(); private List list2 = new ArrayList(); private BufferedReader bf = new BufferedReader(new InputStreamReader(System.in)); private MyNumber num; private int number,testCase,count2,count; private Scanner scanf = new Scanner(System.in); public void getInput(){ MyNumber k; try { testCase = Integer.parseInt(bf.readLine()); list.add(new MyNumber(0)); list2.add(new MyNumber(0)); for(int i=1;i<testCase+1;i++){ k=new MyNumber(scanf.nextInt()); list.add(i,k); list2.add(i,new MyNumber(k.value)); } } catch (IOException ex) { System.out.println(ex); ex.printStackTrace(); } catch(UnsupportedOperationException us){System.out.println(us);} catch(ClassCastException cs){System.out.println(cs);} catch(IllegalArgumentException il){System.out.println(il);} catch(NullPointerException nl){System.out.println(nl);} } public void getCount(){ count=0; count2=0; number=1; try{ do{ num = (MyNumber) list2.get(number); if(!num.mark){ num.mark=true; number=num.value; } else{ number=0; do{ number++; num =(MyNumber)list2.get(number); }while(num.mark); count2++; continue; } count++; }while(count<testCase); System.out.println(count2+1); }catch(Exception ex){ex.printStackTrace(); } } public void getAnswer(){ count =0 ; count2=0; number=1; try{ do{ num = (MyNumber) list.get(number); if(!num.mark){ System.out.print(number); num.mark=true; number=num.value; } else{ System.out.print(number); number=0; do{ number++; num =(MyNumber)list.get(number); }while(num.mark); System.out.println(""); continue; } count++; }while(count<testCase); System.out.print(num.value); }catch(Exception ex){ex.printStackTrace(); } } public static void main(String[] args) throws java.lang.Exception{ Main p = new Main(); p.getInput(); p.getCount(); p.getAnswer(); }} Working for test cases but wrong answer, is it to compiler difference, i used g++ 4.3.2 but have to submit code in 4.0.0-8 as former is not accepted :( i am not able to submit the solution in c++. its saying you cant submit in this language. try link. @Admin plz sir check my solution as i m not able to figure out as to what cud b d mistake... Plz ignore my above comment as i m able to rectify my mistake... my code is simple it give correct answer still it is said as wrong answer #include<stdio.h> #include<stdlib.h> int main() { int n,*a,*visited,i,done=0,temp,count=0; scanf("%d",&n); a=(int *)malloc (sizeof(int)*(n+1)); visited=(int *)malloc (sizeof(int)*(n+1)); for(i=1;i<=n;i++) { scanf("%d",&a[i]); visited[i]=0; } temp=0; count=0; for(i=1;i<=n;i++) { if(visited[i]==0) { temp=i; while(!done) { //printf("%d",temp); visited[temp]=1; temp=a[temp]; visited[temp]=1; if(temp==i) done=1; } //printf("%dn",temp); done=0; count++; } } printf("%dn",count); //making visited 0 again for(i=1;i<=n;i++) visited[i]=0; for(i=1;i<=n;i++) { if(visited[i]==0) { temp=i; while(!done) { printf("%d",temp); visited[temp]=1; temp=a[temp]; visited[temp]=1; if(temp==i) done=1; } printf("%dn",temp); done=0; count++; } } return 0; } you can't submit in this language for this problem. Try link. this error is appearing i am submitting problem in c++ :( i dont know why i m gettin wrong answer....plz check if there is any thin wrong with the code #include<stdio.h>int main(){ int n,i,initial,current; scanf("%d",&n); int a[1000]; for(i=0;i<n;i++) { scanf("%d",&a[i]); } initial=0; current=1; while(initial!=n) { int flag=0,fl=0; while(initial!=current ) { if(flag==0) { current=initial; flag=1; } if(a[current]!=-1) { printf("%d ",current+1); fl=1; int k=current; current = a[current] -1; a[k] = -1; } } if(fl==1) { printf("%d",initial+1); printf("n"); fl=0; } initial++; } return 0;} i m gettin right answer for all test cases i hav tried,,,,,,,,,,,,i hav even tried the one input for n=1000 plz chek the code: but i m gettin wrong answer here Can somebody point out the why my code is giving RuntimeError. i would really appreciate if admin has some explanation i think there is some mistake.my rest of the text is not here in the comments.ok no prob i will write it again here.actually when i run the source of user default1130(top of the list) for this problem on your server it is showing running time =0.07 sec and size=5.4 mb.you can verify by this link id is 470046.But in your list running time and size is 0 for the same source.. just curious to know what could be the problem? The problem why many people are getting WA for this problem is that when the online judge checks the answer it has to be exactly the same as to what they have. Over here before every new line 'n' after a permutation cycle there is a space. If u have this space then only ur answer is accepted. I think the people who have not left a space are corect and thier code should be accepted as majority of the question don't prefer lingering spaces. #include<stdio.h>#include<stdlib.h>int main(void){int num[100];int n;printf("Enter permutation no : ");scanf("%d",&n);for(int i=0;i<n;i++){ printf("Enter the digit positive no. :"); scanf("%d",&num[i]); if(num[i]<0) { printf("Error in input"); exit(0); }} if(n==1&num[0]!=1) { printf("not ambiguous"); }else {for(int j=1;j<n;j++){ if(num[0]!=1) { printf("not ambiguous"); break; } else if(num[j]!=(i)) { printf("not ambiguous"); break; } i--;}if(j==n){ printf("ambiguous");} }return(0);} whats wrong with my code: #include<stdio.h> main() { int t,i,k; scanf("%d",&t); int j,num[1001],check[1001]={0}; for(i=1;i<=t;i++) scanf("%d",&num[i]); int total=0; for(j=1;j<=t;j++) if(check[j]==1) continue; check[j]==1; k=num[j]; while(k!=j) { check[k]=1; k=num[k]; } total++; printf("%dn",total); for(j=1;j<=1000;j++) check[j]=0; printf("%d",j); {printf("%d",k); check[k]=1; printf("%d n",j); return
http://www.codechef.com/problems/PCYCLE
crawl-003
refinedweb
2,196
66.54
6 Mar 23:46 2012 fsogsmd and om-gta04 Denis 'GNUtoo' Carikli <GNUtoo@...> 2012-03-06 22:46:30 GMT 2012-03-06 22:46:30 GMT hi, all GTA04 revisions have an option modem, but at least 2 things are lacking: * CLCC polling: it seem to poll somehow but it doesn't work and doesn't send the notification when the call ends. * DTMF fails to work: in the AT manual (27007-3d0.doc ) it says that +VTS doesn't expect a response, and indeed in the option modem it seem not to expect that. So I tried to remove what makes fsogsmd wait for a response but I failed: In fsogsmd/src/lib/at/atcallmediators.vala I did the following without success(I don't send a patch since it doesn't work): public class AtCallSendDtmf : CallSendDtmf { public override async void run( string tones ) throws FreeSmartphone.GSM.Error, FreeSmartphone.Error { var cmd = theModem.createAtCommand<PlusVTS>( "+VTS" ); - var response = yield theModem.processAtCommandAsync( cmd, cmd.issue( tones ) ); - checkResponseOk( cmd, response ); + yield theModem.processAtCommandAsync( cmd, cmd.issue( tones ) ); } } Denis.
http://blog.gmane.org/gmane.comp.hardware.smartphones.userland/month=20120301
CC-MAIN-2014-42
refinedweb
178
59.09
C++ Tutorial: A Beginner's Guide to std::vector, Part 1 Environment: VC6 SP5, STLPort, Windows 2000 SP2 This C++ tutorial is meant to help beginning and intermediate C++ programmers get a grip on the standard template class. (The article was updated.) Rationale behind using vectors The final technical vote of the C++ Standard took place on November 14th, 1997; that was more than five years ago. However, significant parts of the Standard, especially the Standard Library, are still not very popular among many C++ users. A constant reader of CodeGuru's C++ forums will soon notice that many questions and answers still imply hand-crafted solutions that could be very elegantly solved by using the Standard Library. One issue that comes up very often is the use of C-style arrays, with all their problems and drawbacks. People seem to be scared of the standard vector and its brother deque. One reason might be that the Standard Library documentations are mostly pretty elliptic and esoteric. In this article, I will take vector and try to explain it in a way that is more accessible and understandable. I do not claim that this article is by any means complete. It is meant to give you a start in using vector and to help you avoid the most common pitfalls. We will start small and will not try to handle the topic very academically. Introduction to vector Vector is a template class that is a perfect replacement for the good old C-style arrays. It allows the same natural syntax that is used with plain arrays but offers a series of services that free the C++ programmer from taking care of the allocated memory and help operating consistently on the contained objects. The first step using vector is to include the appropriate header: #include <vector> Note that the header file name does not have any extension; this is true for all of the Standard Library header files. The second thing to know is that all of the Standard Library lives in the namespace std. This means that you have to resolve the names by prepending std:: to them: std::vector<int> v; // declares a vector of integers For small projects, you can bring the entire namespace std into scope by inserting a using directive on top of your cpp file: #include <vector> using namespace std; //... vector<int> v; // no need to prepend std:: any more This is okay for small projects, as long as you write the using directive in your cpp file. Never write a using directive into a header file! This would bloat the entire namespace std into each and every cpp file that includes that header. For larger projects, it is better to explicitly qualify every name accordingly. I am not a fan of such shortcuts. In this article, I will qualify each name accordingly. I will introduce some typedefs in the examples where appropriate—for better readability. Now, what is std::vector<T> v;? It is a template class that will wrap an array of Ts. In this widely used notation, 'T' stands for any data type, built-in, or user-defined class. The vector will store the Ts in a contiguous memory area that it will handle for you, and let you access the individual Ts simply by writing v[0], v[1], and so on, exactly like you would do for a C-style array. Note that for bigger projects it can be tedious to repeatedly write out the explicit type of the vectors. You may use a typedef if you want: typedef std::vector<int> int_vec_t; // or whatever you // want to name it //... int_vec_t v; Do not use a macro! #define int_vec_t std::vector<int> ; // very poor style! For the beginning, let's see what a vector can do for us. Let's start small and take the example of an array of integers. If you used plain arrays, you had either a static or a dynamic array: size_t size = 10; int sarray[10]; int *darray = new int[size]; // do something with them: for(int i=0; i<10; ++i){ sarray[i] = i; darray[i] = i; } // don't forget to delete darray when you're done delete [] darray; Let's do the same thing using a vector: #include <vector> //... size_t size = 10; std::vector<int> array(size); // make room for 10 integers, // and initialize them to 0 // do something with them: for(int i=0; i<size; ++i){ array[i] = i; } // no need to delete anything As you see, vector combines the advantages of both the static and the dynamic array because it takes a non-const size parameter such as the dynamic one and automatically deletes the used memory like the static one. The standard vector defines the operator [], to allow a "natural" syntax. For the sake of performance, the operator [] does not check whether the index is a valid one. Similar to a C-style array, using an invalid index will mostly buy you an access violation. In addition to operator [], vector defines the member function at(). This function does the same thing as the operator [], but checks the index. If the index is invalid, it will throw an object of class std::out_of_range. std::vector<int> array; try{ array.at(1000) = 0; } catch(std::out_of_range o){ std::cout<<o.what()<<std::endl; } Depending on the implementation of the C++ Standard Library you use, the above snippet will print a more or less explicit error message. STLPort prints the word "vector", the Dinkumware implementation that comes with Visual C++ prints "invalid vector<T> subscript". Other implementations may print something else. Note that vector is a standard container. The controlled sequence also can be accessed using iterators. More on iterators later in this article. For now, let's keep it simple. Now, what if you don't know how many elements you will have? If you were using a C-style array to store the elements, you'd either need to implement a logic that allows to grow your array from time to time, or you would allocate an array that is "big enough." The latter is a poor man's approach and the former will give you a headache. Not so vector: #include <vector> #include <iostream> //... std::vector<char> array; char c = 0; while(c != 'x'){ std::cin>>c; array.push_back(c); } In the previous example, push_back() appends one element at a time to the array. This is what we want, but it has a small pitfall. To understand what that is, you have to know that a vector has a so-called 'controlled sequence' and a certain amount of allocated storage for that sequence. The controlled sequence is just another name for the array in the guts of the vector. To hold this array, vector will allocate some memory, mostly more than it needs. You can push_back() elements until the allocated memory is exhausted. Then, vector will trigger a reallocation and will grow the allocated memory block. This can mean that it will have to move (that means: copy) the controlled sequence into a larger block. And copying around a large number of elements can slow down your application dramatically. Note that the reallocation is absolutely transparent for you (barring catastrophic failure—out of memory). You need to do nothing; vector will do all what that takes under the hood. Of course, there is something you can do to avoid having vector reallocate the storage too often. Just read on. In the previous example, we declared the vector using its default constructor. This creates an empty vector. Depending on the implementation of the Standard Library being used, the empty vector might or might not allocate some memory "just in case." If we want to avoid a too-often reallocation of the vector's storage, we can use its reserve() member function: #include <vector> #include <iostream> //... std::vector<char> array; array.reserve(10); // make room for 10 elements char c = 0; while(c != 'x'){ std::cin>>c; array.push_back(c); } The parameter we pass to reserve() depends on the context, of course. The function reserve() will ensure that we have room for at least 10 elements in this case. If the vector already has room for the required number of elements, reserve() does nothing. In other words, reserve() will grow the allocated storage of the vector, if necessary, but will never shrink it. As a side note, the following two code snippets are not the same thing: // snip 1: std::vector<int> v(10); // snip 2: std::vector<int> v; v.reserve(10); The first snippet defines a vector containing 10 integers, and initializes them with their default value (0). If we hadn't integers but some user-defined class, vector would call the default ctor 10 times and contain 10 readily constructed objects. The second snippet defines an empty vector, and then tells it to make room for 10 integers. The vector will allocate enough memory to hold at least 10 integers, but will not initialize this memory. If we had no integers, but some user-defined class, the second snippet wouldn't construct any instance of that class. To find out how many elements would fit in the currently allocated storage of a vector, use the capacity() member function. To find out how many elements are currently contained by the vector, use the size() member function: #include <vector> #include <iostream> //... std::vector<int> array; int i = 999; // some integer value array.reserve(10); // make room for 10 elements array.push_back(i); std::cout<<array.capacity()<<std::endl; std::cout<<array.size()<<std::endl; This will print 10 1 That means that the number of elements that can be added to a vector without triggering a reallocation always is capacity() - size(). Note that, for the previous example, only 0 is a valid index for array. Yes, we have made room for at least 10 elements with reserve(), but the memory is not initialized. Because int is a built-in type, writing all 10 elements with operator [] would actually work, but we would have a vector that is in an inconsistent state, because size() would still return 1. Moreover, if we tried to access the other elements than the first using array.at(), a std::out_of_range would be thrown.. The important thing to remember is that the role of reserve() is to minimize the number of potential reallocations and that it will not influence the number of elements in the controled sequence. A call to reserve() with a parameter smaller than the current capacity() is benign—it simply does nothing. The correct way of enlarging the number of contained elements is to call vector's member function resize(). The member function resize() has following properties: - If the new size is larger than the old size of the vector, it will preserve all elements already present in the controlled sequence; the rest will be initialized according to the second parameter. If the new size is smaller than the old size, it will preserve only the first new_size elements. The rest is discarded and shouldn't be used any more—consider these elements invalid. - If the new size is larger than capacity(), it will reallocate storage so all new_size elements fit. resize() will never shrink capacity(). Example: std::vector<int> array; // create an empty vector array.reserve(3); // make room for 3 elements // at this point, capacity() is 3 // and size() is 0 array.push_back(999); // append an element array.resize(5); // resize the vector // at this point, the vector contains // 999, 0, 0, 0, 0 array.push_back(333); // append another element into the vector // at this point, the vector contains // 999, 0, 0, 0, 0, 333 array.reserve(1); // will do nothing, as capacity() > 1 array.resize(3); // at this point, the vector contains // 999, 0, 0 // capacity() remains 6 // size() is 3 array.resize(6, 1); // resize again, fill up with ones // at this point the vector contains // 999, 0, 0, 1, 1, 1 Another way to enlarge the number of controlled elements is to use push_back(). In certain cases, this might be more efficient than calling resize() and then writing the elements. Let's have a closer look under the hood of vector, by looking at the following example: class X { public: X():val_(0){} X(int val):val_(val){} int get(){return val_;} void set(int val){val_=val;} private: int val_; }; //.... std::vector<X> ax; // create an empty vector containing // objects of type class X // version 1: ax.resize(10); // resize the controlled sequence for(int i=0; i<10; ++i){ ax[i].set(i); // set each element's value } //... // version 2: ax.reserve(10); // make room for 10 elements for(int i=0; i<10; ++i){ ax.push_back(X(i)); // insert elements using the second ctor } The two versions are equivalent, meaning that they will produce the same result. In both cases, we start with an empty vector. In the first version, we use resize() to grow the size of the controlled sequence to 10 elements. This will not only reallocate the vectors storage, but will also construct a sequence of 10 elements, using the default ctor of X. When resize() is finished, we will have 10 valid objects of type X in our vector, all of them having val_ == 0, because that's what the default ctor of X does. In a second step, we pick every X in the sequence and use X::set() to change its val_. In the second version, we call reserve() to make room for 10 elements. The vector will reallocate its storage and do nothing more than that. No element is constructed yet. In a second step, we create 10 objects of type X using the second ctor, thus giving them directly the correct value, and push_back() them into the vector. Which method is more efficient? That probably also depends on the implementation of the Standard Library, but the second version is likely to be slightly more efficient because it doesn't call X::set() for each element. Now that we have seen how to declare a vector and how to fill it up, let's see how we can operate on it. We will start with an analogy to C-style arrays and will progressively discover other possibilities, that are better or safer. There are two ways of accessing a C-style array: either by using the subscript operator, or by using pointers. Also, passing a C-style array to a function means passing a pointer to the first element. Can we do the same thing with a vector? The answer is yes. Let's take a small example: #include <iostream> double mean(double *array, size_t n) { double m=0; for(size_t i=0; i<n; ++i){ m += array[i]; } return m/n; } int main() { double a[] = {1, 2, 3, 4, 5}; std::cout<<mean(a, 5)<<std::endl; // will print 3 return 0; } When we say mean(a, 5), the first parameter actually is the address of the first element in the array &a[0]. We know that a vector is required to keep its elements in a contiguous block of memory, in order. That means that we can pass the address of the first element of a vector to the function mean() and it will work: int main() { std::vector<double> a; a.push_back(1); a.push_back(2); a.push_back(3); a.push_back(4); a.push_back(5); std::cout<<mean(&a[0], 5)<<std::endl; // will print 3 return 0; } That's nice, but it's still not quite the same. We were able to directly initialize the C-style array, but we had to push_back() the elements into the vector. Can we do better? Well, yes. We cannot directly use an initializer list for the vector, but we can use an intermediary array: double p[] = {1, 2, 3, 4, 5}; std::vector<double> a(p, p+5); Here we use another constructor provided by vector. It takes two parameters: a pointer to the first element of a C-style array and a pointer to one past the last element of that array. It will initialize the vector with a copy of each element in the array. Two things are important to note: The array is copied and it does not somehow go into the possession of the newly created vector, and the range we supply is from the first element to one past the last element in the array. Understanding the second point is crucial when working with vectors or any other standard containers. The controlled sequence is always expressed in terms of [first, one-past-last)—not only for ctors, but also for every function that operates on a range of elements. When taking the address of elements contained in a vector, there is something you have to watch out for: an internal reallocation of the vector will invalidate the pointers you hold to its elements. std::vector<int> v(5); int *pi = &v[3]; v.push_back(999); // <-- may trigger a reallocation *pi = 333; // <-- probably an error, pi isn't valid any more In the previous example, we take the address of the fourth element of the vector and store it in pi. Then we push_back() another element to the end of the vector. Then we try to use pi. Boom! The reason is that push_back() may trigger a reallocation of v's internal storage if this is not large enough to hold the additional element, too. pi will then point to a memory address that has just been deleted, and using it has undefined results. The bad news is that the vector might or might not reallocate the internal storage—you can't tell on the general case. The solution is either not to use pointers that might have been invalidated, or to make sure that the vector won't reallocate. The latter means to use reserve() wisely in order to have the vector handle memory (re)allocation at defined times. From the member functions we have seen so far, only push_back() and resize() can invalidate pointers into the vector. There are other member functions that invalidate pointers; we will discuss them later in this tutorial. Note that both the subscript operator and the member function at() never invalidate pointers into the vector. Speaking of pointers into the vector, we can introduce a standard concept at this point: iterators. Iterators are the way the Standard Library models a common interface for all containers—vector, list, set, deque, and so on. The reason is that operations that are "natural" for one container (like subscripting for vector) do not make sense for other containers. The Standard Library needs a common way of applying algorithms like iterating, finding, sorting to all containers—thus the concept of iterators. An iterator is a handle to a contained element. You can find an exact definition in your favorite textbook, if you want. The internal representation of an iterator is irrelevant at this point. Important is that if you have an iterator, you can dereference it to obtain the element it "points" to (for vector the most natural implementation of an iterator is indeed a plain vanilla pointer—but don't count on this). Let's get a grip on iterators with a small example: #include <vector> #include <iostream> int main() { std::vector<double> a; std::vector<double>::const_iterator i; a.push_back(1); a.push_back(2); a.push_back(3); a.push_back(4); a.push_back(5); for(i=a.begin(); i!=a.end(); ++i){ std::cout<<(*i)<<std::endl; } return 0; } Let's take this small program step by step: std::vector<double>::const_iterator i; This declares a const iterator i for a vector<double>. We are using a const iterator because we do not intend to modify the contents of the vector. ...i=a.begin();... The member function begin() returns an iterator that "points" to the first element in the sequence. ...i!=a.end();... The member function end() returns an iterator that "points" to one-past-the-last-element in the sequence. Note that dereferencing the iterator returned by end() is illegal and has undefined results. ...++i You can advance from one element to the next by incrementing the iterator. Note that the same program, but using pointers instead of iterators, leads to a very similar construct: #include <vector> #include <iostream> int main() { std::vector<double> a; const double *p; a.push_back(1); a.push_back(2); a.push_back(3); a.push_back(4); a.push_back(5); for(p=&a[0]; p!=&a[0]+5; ++p){ std::cout<<(*p)<<std::endl; } return 0; } So, if we can use pointers to basically achieve the same thing in the same way, why bother with iterators at all? The answer is that we have to use iterators if we want to apply some standard algorithm, like sorting, to the vector. The Standard Library does not implement the algorithms as member functions of the various containers, but as free template functions that can operate on many containers. The combination of standard containers in general (and vector in particular) and standard algorithms, is a very powerful tool; unfortunately, much too often neglected by programmers. By using it you can avoid large portions of hand crafted, error-prone code, and it enables you to write compact, portable, and maintainable programs. Let's have a look at the member functions vector provides: Constructors A complete set of C++ constructors , C++ destructor, and copy operator is provided. Let's have a look at them on the example of a vector of standard strings: typedef std::vector<std::string> str_vec_t; str_vec_t v1; // create an empty vector str_vec_t v2(10); // 10 copies of empty strings str_vec_t v3(10, "hello"); // 10 copies of the string // "hello" str_vec_t v4(v3); // copy ctor std::list<std::string> sl; // create a list of strings // and populate it sl.push_back("cat"); sl.push_back("dog"); sl.push_back("mouse"); str_vec_t v5(sl.begin(), sl.end()); // a copy of the range in // another container // (here, a list) v1 = v5; // will copy all elements // from v5 to v1 The assign() function The assign() function will reinitialize the vector. We can pass either a valid element range using the [first, last) iterators or we can specify the number of elements to be created and the element value. v1.assign(sl.begin(), sl.end()); // copies the list into // the vector v1.assign(3, "hello"); // initializes the vector // with 3 strings "hello" The assignment completely changes the elements of the vector. The old elements (if any) are discarded and the size of the vector is set to the number of elements assigned. Of course, assign() may trigger an internal reallocation. Stack operations We have seen the function push_back(). It appends an element to the end of the controlled sequence. There is a counterpart function, pop_back(), that removes the last element in the controlled sequence. The removed element becomes invalid, and size() is decremented. Note that pop_back() does not return the value of the popped element. You have to peek it before you pop it. The reason why this is so is exception safe. Popping on an empty vector is an error and has undefined results. std::vector<int> v; v.push_back(999); v.pop_back(); Note that pop_back() does not shrink the capacity(). Predefined iterators We have seen the iterators begin() and end(). They point to the first, respectively, to one-past-the-last element in the controlled sequence. There also are rbegin() and rend() which point to the first, respectively, to the one-past-the-last element of the reverse sequence. Note that both rbegin() and rend() return the type reverse_iterator (or const_reverse_iterator for their const versions)—which is not the same as iterator, (respectively const_iterator). To obtain a "normal" iterator from a reverse iterator, use reverse_iterator's base() member function: std::vector<int> v; v.push_back(999); std::vector<int>::reverse_iterator r = v.rbegin(); std::vector<int>::iterator i = r.base(); // will point to the last // element in the sequence Element access We have seen the subscript operator [] that provides unchecked access and the member function at(), which will throw an object of type std::out_of_range if the index passed is invalid. Two other member functions exist, front() and back(), which return a reference to the first, respectively the last element in the controlled sequence. Note that they do not return iterators! std::vector<int> v; v.push_back(999); // fill up the vector //... // following statements are equivalent: int i = v.front(); int i = v[0]; int i = v.at(0); int i = *(v.begin()); // following statements are equivalent: int j = v.back(); int j = v[v.size()-1]; int j = v.at(v.size()-1); int j = *(v.end()-1); Note that we cannot write *(--v.end()) because v.end() is not a l-value. List operations A few operations provided by vector are actually native for list. They are provided by the most containers and deal with inserting and erasing elements in the middle of the controlled sequence. Let's demonstrate them by some examples: #include <vector> #include <iostream> int main() { std::vector<int> q; q.push_back(10); q.push_back(11); q.push_back(12); std::vector<int> v; for(int i=0; i<5; ++i){ v.push_back(i); } // v contains 0 1 2 3 4 std::vector<int>::iterator it = v.begin() + 1; // insert 33 before the second element: it = v.insert(it, 33); // v contains 0 33 1 2 3 4 // it points to the inserted element //insert the contents of q before the second element: v.insert(it, q.begin(), q.end()); // v contains 0 10 11 12 33 1 2 3 4 // iterator 'it' is invalid it = v.begin() + 3; // it points to the fourth element of v // insert three time -1 before the fourth element: v.insert(it, 3, -1); // v contains 0 10 11 -1 -1 -1 12 33 1 2 3 4 // iterator 'it' is invalid // erase the fifth element of v it = v.begin() + 4; v.erase(it); // v contains 0 10 11 -1 -1 12 33 1 2 3 4 // iterator 'it' is invalid // erase the second to the fifth element: it = v.begin() + 1; v.erase(it, it + 4); // v contains 0 12 33 1 2 3 4 // iterator 'it' is invalid // clear all of v's elements v.clear(); return 0; } Note that both insert() and erase() may invalidate any iterators you might hold. The first version of insert() returns an iterator that points to the inserted element. The other two versions return void. Inserting elements may trigger a reallocation. In this case, all iterators in the container become invalid. If no reallocation occurs (for example, by a call to reserve() prior to inserting), only iterators printing between the insertion point and the end of the sequence become invalid. Erasing elements never triggers a reallocation, nor does it influence the capacity(). However, all iterators that point between the first element erased and the end of the sequence become invalid. Calling clear() removes all elements from the controlled sequence. The memory allocated is not freed, however. All iterators become invalid, of course. Note that both insert() and erase() are not very efficient for vectors. They are expected to perform in amortized linear time, O(n)+. If your application often uses insertion and erasure, vector probably isn't the best choice of a container for you. Comparison operations You can compare the contents of two vectors on an element-by-element basis using the operators ==, != and <. Two vectors are equal if both have the same size() and the elements are correspondingly equal. Note that the capacity() of two equal vectors need not to be the same. The operator < orders the vector's lexicographically. std::vector<int> v1, v2; //... if(v1 == v2) ... Swapping contents Sometimes, it is practical to be able to swap() the contents of two vectors. A common application is forcing a vector to release the memory it holds. We have seen that erasing the elements or clearing the vector doesn't influence its capacity() (in other words, the memory allocated). We need to do a small trick: std::vector<int> v; //... v.clear(); v.swap(std::vector<int>(v)); Normally (see below), vectors simply swap their guts. In the previous example, we create a temporary vector by using the copy ctor and swap() its contents with v. The temporary object will receive the entire memory held by v and v will receive the memory held by the temporary object—which is likely to allocate nothing on creation. The temporarily created vector is destroyed at the end of the above statement, and all the memory formally held by v is freed. The vector class template has a second, default template parameter: template<class T, class A = allocator<T> > class vector ... The allocator is a class that supplies the functions used by the container to allocate and deallocate memory for its elements. In this tutorial, we assumed that we have a default allocator and we will continue to assume this. For the sake of completeness, note that swap() will perform in constant time (simply swap the guts of the two vectors) if both allocators are the same. This is in most cases so. In this first tutorial, we have scratched the surface of the Standard Library and met std::vector. In the next tutorial, we will have a look at more advanced topics related to vector, respectively applying standard algorithms to it, and will discuss design decisions, such as the question of when to store objects and when to store pointers to objects in the vector. We also will introduce a close relative of the vector, that is, std::deque. great findPosted by Chetnik on 01/01/2018 09:22am This old article is awesome and it helped me a lot. I was lucky to find it. Let the force be with you...Serbia do Tokia, long live Russia! Thanks broReply studentPosted by Traci on 09/08/2017 09:56am This helped me SOO much! Can't thank you enough!Reply Useful tutorialPosted by Nastaran Hendijani on 08/26/2017 09:09pm Thanks for the useful tutorial, concise yet comprehensive.Reply Typo foundPosted by Aza D. Oberman on 08/08/2017 05:19am Typo "ctor" at: ."Reply MrPosted by Richard on 06/26/2017 03:11pm Excellent article. I wish all such articles were as well articulated. For learners these step by step, well explained articles are perfect.Reply tempPosted by luk on 06/10/2017 02:07am #include #include #include #include #include #include #include #include using namespace std; template class Vector { public: size_t size()const; bool empty()const; void pushBack(T item); void popBack(); T& operator[](size_t idx); Vector&operator+=(Vector const& lValue); friend ostream& operator& vec) { for (unsigned i = 0; i items; }; template class SortBySize { public: bool operator()(const Vector & el1, const Vector &el2) { return el1.size() void Vector::pushBack(T item) { items.push_back(item); } template void Vector::popBack() { if (items.empty()) { throw("Vector::pop(): empty stack"); } items.pop_back(); } template bool Vector::empty()const { return items.empty(); } template size_t Vector::size()const { return items.size(); } template T& Vector::operator[](size_t idx) { if (idx = size()) throw "exception"; return items[idx]; } template Vector&Vector::operator+=(Vector const& lValue) { if (this-size() == lValue.size()) { for (unsigned i = 0; i items[i] += lValue.items[i]; } return *this; } else throw "exception"; } /* Main */ int main() { srand((unsigned)time(NULL)); const int size = 11, min = 10, max = 50; Vector a, b, c; cout t[] { a, b, c }; cout , SortBySize s(t, t + 3); for (auto it = s.begin(); it != s.end(); ++it) { cout size()Reply Software engineerPosted by Jack on 06/05/2017 09:59pm Very good work! Thanks a lot.Reply Great Article :)Posted by Kamal Lochan Jena on 04/11/2017 09:50am Really it is very nice and helpful. Thank you very much :)Reply ArticlePosted by Abhinav8 on 03/22/2017 11:37am Very well written. Good work!Reply Great Article --- We need one the detailed for cinPosted by Daniel Krueger on 03/15/2017 06:53pm This is very thorough I love it! Can you write one for cin object? It always seems that the reference for cin always misses some of the important things... Especially like why does repeated calls to a cin cause simple programs to do unexpected things... I really wanted to learn how io with hardware works but I feel like That's not going to happen until I have a better understanding of what cin is doing behind the scenes.. including it's error states!Reply
https://www.codeguru.com/cpp/cpp/cpp_mfc/stl/article.php/c4027/C-Tutorial-A-Beginners-Guide-to-stdvector-Part-1.htm
CC-MAIN-2018-30
refinedweb
5,434
55.13
Opened 9 years ago Last modified 10 months ago #3266 enhancement new Provide tools for managing new deprecation policy Description Twisted has recently adopted a new deprecation policy (see thread starting at). We need to have tools to make following this policy as easy as pie. Change History (20) comment:1 Changed 9 years ago by comment:2 Changed 9 years ago by Here is what we should do in the first implementation: twisted.python.deprecation.deprecate should be a function twisted.python.deprecation.release_8_1 should be an object which encapsulates both a Version and a Date of release. Either every time a release is done, or every time a deprecation is first added after a particular release, a new release_* constant should be added. deprecate(most_recent_version, "text", other_warning_arguments) will emit either a PendingDeprecationWarning or a DeprecationWarning, depending on whether the passed in version is more than 6 months and two releases away from whatever the current version is (defined by twisted.version). FUTURE IDEA: Provide a twisted.python.deprecation.NEXT_RELEASE which, during release or during an SVN commit hook, will automatically be replaced by the literal name of the constant for the most recent release. comment:3 Changed 9 years ago by I guess we'll definitely need to associate a date with each new release when it's released, so deprecate() has something to compare the passed version with. comment:4 Changed 8 years ago by So, for example, instead of writing: @twisted.python.deprecate.deprecated(twisted._8_1_0) def foo(x, y): return x + y we should write: def foo(x, y): deprecate(twisted.python.deprecate.release_8_1_0, "Don't use foo; use bar.", stacklevel=2) Or should we keep using the current twisted.python.deprecate.deprecated and change the implementation of that function? Perhaps we should still write the first version above, but twisted.python.deprecate.deprecated should be the thing which is checking dates and knows about the current version number? The only thing the current API is missing is the date information. We could make that information available (ie, a dict keyed on version objects). On the other hand, maybe that will just result in a fragile mess that we should avoid. There are four uses of twisted.python.deprecate.deprecated in our code base now and they're all basically the same. They create a new Version giving the version of Twisted they expect to first be released in and then they create a decorator with it. So okay, I guess we do need a new API. The current one doesn't express enough information for the use-case of Twisted deprecations - it's too general. I'm tempted to put this code in twisted.python._release instead of some other public module anywhere. It's really really really for Twisted only (even more than the release stuff, which just happens to not be general purpose). comment:5 Changed 8 years ago by comment:6 Changed 8 years ago by Forget what I said about this being for Twisted only. It clearly has general purpose applications. But I'm still going to start it off as a private thing. comment:7 Changed 8 years ago by comment:8 Changed 8 years ago by This is looking like this at the moment: from twisted.python._release import deprecation, twisted_8_1 @deprecation.decorator(twisted_8_1) def foo(x): pass I hope testing can still be done with callDeprecated but it's not clear how that will work yet. comment:9 Changed 8 years ago by Difficulties with testing: - callDeprecated takes a version instead of a release - the warning text emitted will probably be change after the first release the deprecation is included in - there's already assertWarnsand callDeprecatedand I feel bad adding a third API - All the stuff that the API needs to know about is in twisted.python._release. Putting a method into trial for it seems bad. I'm leaning towards... something based on a general warning catching API, I guess? Still not really sure. comment:10 Changed 8 years ago by Random thoughts: - Re-use callDeprecated, have it send a deprecation signal if it is passed a version. - I don't think warning text needs to be explicitly part of the test. It's enough to test that it raises the right sort of warning. - I'd feel bad about a third API too. - This might be an argument for having a base test case that's there for Twisted-specific needs and doesn't make API promises to the rest of the world. - Or maybe it's an argument for liberating assertions from the shackles of TestCase objects. - In any case, it's not too bad to have domain-specific assertions in Trial. After all, it will make it easier to write good tests for things that change APIs. - I'd probably be happy with a general warning catching API that had specifics for our deprecation system. - IMO, the goal is to make it easy, even fun, to deprecate bad things. comment:11 Changed 8 years ago by comment:12 Changed 8 years ago by Now I'm thinking about something like this: from twisted.python.deprecate import _DeprecationPolicy, _Release from twisted.python._release import deprecation, twisted_8_0 @deprecation.decorator(twisted_8_0) def foo(): pass ... def test_foo(self): foo() self.assertEqual( len(self.flushWarnings( category=deprecation.deprecationType(twisted_8_0), offendingFunction=self.test_foo)), 1) This implies some stuff: - deprecate most or all of twisted.python.deprecate - deprecate callDeprecated - deprecate assertWarns - add a better warning observer to trial - add support for emitting any non-flushed warnings at the end of each test comment:13 Changed 8 years ago by Maybe I'll document some motivation before I forget it: callDeprecatedand failUnlessWarnsmake dealing with the stack level difficult. Stack levels are fragile, and you don't really care about them anyway, you care about where the warning message ultimately points in your code. Having something like offendingFunction(perhaps to be augmented later with the addition of mutually exclusive offendingModuleand offendingClass(any others?) lets tests express what they really care about. Of course, it's still a little redundant, since most often the test method itself will be the offending function, but this won't always be the case. (just as stacklevel=2 isn't always right) - The python warnings module is so hard to work with, having higher order functions is hard, because it suggests that certain things are possible (nesting, for example) when they really aren't. - This approach is more flexible, since the result is a data structure representing warnings, and each test can write the most appropriate assertions for the particular case being tested. failUnlessWarnsand callDeprecatedboth suffer from the limitation that more than one warning causes them to flip out, and callDeprecatedcan only handle DeprecationWarnings (not PendingDeprecationWarningor any other warning category - although it likely doesn't care about non-deprecation warnings) and can't handle a different string message associated with the warning. callDeprecatedand failUnlessWarnsalso don't accept a release-like object, only a version, and they accept a version which represents the wrong thing (since the developer can't know what the right version to pass will be until a release following their development happens). This could be rectified by handling different types in each of these methods, but I started off trying that and it results in very ugly code which would involve emitting a deprecation for the old style /anyway/ eventually. - the deprecationobject can eventually support many different kinds of deprecations (calling this function is deprecated, importing this module is deprecated, passing this argument to this function is deprecated, not defining this attribute on this object you passed to this function is deprecated, etc) and flushWarningsallows all of these to be tested. twisted.python.deprecate.deprecatedwould have difficulty with this. comment:14 Changed 8 years ago by comment:15 Changed 8 years ago by comment:16 Changed 7 years ago by comment:17 Changed 6 years ago by comment:18 Changed 4 years ago by comment:19 Changed 4 years ago by comment:20 Changed 10 months ago by Do we still need / want this type of API? Please re-open if you can describe the tool.
https://twistedmatrix.com/trac/ticket/3266
CC-MAIN-2017-09
refinedweb
1,362
53.21
This tutorial series is divided into three phases 1.flask restful web service 2. Flash restful API (to be updated) 3. Flash httpauth implements permission control (to be updated) 4.uwsgi management flash application (to be updated) Article excerpted from I corrected some of the procedural problems arising from the update of bug and its version, and removed the dross and extracted the essence. Flask is a popular web development framework in python, so after using flask to do large and small projects, I want to make a blog. On the one hand, it is used to record key knowledge, on the other hand, it is also used by those who want to get started or understand python and flask. ---- This article requires the reader to have a certain foundation of python ---- - A flash applet (developed with pycharm) from flask import Flask app = Flask(__name__) @app.route('/') def index(): return 'hello world' if __name__ == (__main__): app.run(debug=True) Description of some parameters above: - From flash import - App = flag (\\\\\\\\\ - @App. Route ('/') flag route annotator Run the above program to start a flash program. At this time, visit to access the flash program. If you want to customize the port of the program, modify it in the last line as follows: app.run('0.0.0.0', 8080) Note: this method is only applicable to the python flash_app.py method. If you use the flash run method, it will not take effect. If you need to query the data yourself, the subsequent production environment will not use any direct operation method, but will use the agent management procedures such as uwsgi. We will introduce the related use of uwsgi later. At this time, the program will start on the local port 8080; What is REST? Six design specifications define the characteristics of a REST system: Client server: the client and server are isolated, the server provides services, and the client consumes. Stateless: each request from the client to the server must contain the information necessary to understand the request. In other words, the server does not store the information the client requested the last time for the next time. Cacheable: the server must indicate whether client requests can be cached. Layered system: the communication between the client and the server should be in a standard way, that is, when the middle layer replaces the server to respond, the client does not need to make any changes. Unified interface: the communication method between server and client must be unified. On demand coding: the server can provide executable code or scripts for clients to execute in their environment. This constraint is the only one that is optional. Using flash to implement restful services The first entry to web services #) The above tasks list is our simulated data attribute. The interface return must be in json or str format, otherwise an error will be reported. In @ app.route, we define the route of web URL access and the way of http request resources (methods). This involves a knowledge point, http request method. The common http request resource methods are listed below: ========== =============================================== ============================= HTTP method URL action ========== =============================================== ============================== GET http://[hostname]/todo/api/v1.0/tasks retrieve task list GET http://[hostname] / todo / API / v1.0 / tasks / [task [ID] retrieve a task POST http://[hostname]/todo/api/v1.0/tasks create a new task PUT http://[hostname]/todo/api/v1.0/tasks/[task_id] update task DELETE http://[hostname] / todo / API / v1.0 / tasks / [task [ID] delete task ========== ================================================ ============================= Let's perform the next restful according to the common http request resource mode In the above example, we have implemented the first GET method to GET all the data in the task list, but most of the time, we don't need to show all the data on the page, just show part of the data, and then GET as many times as necessary. Then we use the second GET method, which should be GET route parameters. Note: in the browser, all requests are GET methods. The HTTP methods are divided into the above several kinds. Most of the time, the front-end uses ajax/axios and other technical requests to obtain data and then renders the data to the page. Therefore, when testing restful services, it is recommended to use curl or use postman tools to do professional development tests. Note: when using postman to call service, add content type: application / JSON to the request header Here we use curl to access the tasks function we just created: $ curl -i # Result:" } ] } It can be seen that the tasks function just returns the tasks dictionary without any change. Next, we will try other operations on the data: Get specified data, modify data, delete data Get single task data: @app.route('/todo/api/v1.0/tasks/<int:id>', methods=['GET']) def get_task(): task = list(filter(lambda t: t['id'] == task_id, tasks)) if len(task) == 0: # abort(404) return jsonify({'error': 'no such data.'}) return jsonify({'task': task[0]}) Here we use the filter() higher-order function and lambda to implement dictionary filtering, and map() function will be used later; Note: in python3, you need to convert the filter() and map() functions to < list > before you can assign values. Otherwise, an error is reported. To access the get task method: # Access data with < ID > 2 $" } } # Access to non-existent < ID > Data $ curl -i HTTP/1.0 404 NOT FOUND Content-Type: text/html Content-Length: 238 Server: Werkzeug/0.8.3 Python/2.7.3 Date: Mon, 20 May 2013 05:21:52 GMT {'error': 'no such data.'} In the above test, we conducted 2 visits, one for the data with <id> = 2 in the dictionary, and the other for the data with <id> = 3. We will see that the data return is user-friendly custom exception return. The abort method used in the original text is not recommended by the author. In practical application, we may have multiple interfaces that can not access the data and return 404 You don't want all 404 returns to be the following: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"> <title>404 Not Found</title> <h1>Not Found</h1> This is not friendly, and does not help us to effectively find problems in front-end development or self-test, so we can return the exception problems we defined through the return method. Of course, here we also implement the custom abort exception method as follows: from flask import make_response @app.errorhandler(404) def not_found(error): print(error) return make_response(jsonify({'error': 'Not found'}), 404) explain: @app.errorhandler(404) is the exception decorator in flask The parameter error in the def not found (error) function definition cannot be left blank. The reason is very simple. The decorator will capture traceback and pass in the not found function. You can view the error and print the specific information by yourself. The make? Response function returns a method for the flash service. The corresponding abort return information is: $ curl -i HTTP/1.0 404 NOT FOUND Content-Type: application/json Content-Length: 26 Server: Werkzeug/0.8.3 Python/2.7.3 Date: Mon, 20 May 2013 05:36:54 GMT { "error": "Not found" } The method of obtaining single data and exception handling of flask are described above. Next is POST method, which is generally used for data addition, update, form submission, etc. the methods are as follows: from flask import request #http request processing module to obtain request header, parameters, submission data and other information @app.route('/todo/api/v1.0/tasks', methods=['POST']) def create_task(): if not request.json or not 'title' in request.json: return jsonify({'error': 'args not matching'}) task = { 'id': tasks[-1]['id'] + 1, 'title': request.json['title'], 'description': request.json.get('description', ""), 'done': False } tasks.append(task) return jsonify({'task': task}), 201 Request the method: $ running Cygwin's version of curl on Windows, the above command will not have any problems. However, if you use native curl, the commands are a little different: curl -i -H "Content-Type: application/json" -X POST -d "{"""title""":"""Read a book"""}" Request the GET method we just started to request data again, and check the data change: $" } ] } At this time, you can see that in the original tasks dictionary defined by us, there are more new data with < ID > 3, indicating that our POST data adding interface is normal. As for the remaining update and deletion methods, we will not explain them one by one, and the implementation methods are listed as follows: @app.route('/todo/api/v1.0/tasks/<int:task_id>', methods=['PUT']) def update_task(task_id): task = list': task[0]}) @app.route('/todo/api/v1.0/tasks/<int:task_id>', methods=['DELETE']) def delete_task(task_id): task = list(filter(lambda t: t['id'] == task_id, tasks)) if len(task) == 0: abort(404) tasks.remove(task[0]) return jsonify({'result': True}) The above is the way that flash implements restful web services. Of course, flash can achieve much more than that. Here, it is just a good example. In the follow-up work and development, you will gradually find the charm and power of flash. In the original text, the basic authentication of web services is introduced immediately. After thinking about it, the author decides not to explain it here. In the subsequent httpauth of restful services, the system will be introduced. Write at the end: Technology is a process of continuous improvement and learning. We must think and practice more while learning other people's things, so that we can become our own knowledge for subsequent realization.
https://programmer.ink/think/5ebbe6704eb4a.html
CC-MAIN-2020-45
refinedweb
1,593
62.98
The latest version of the book is P1.0, released 11 (21-Nov-17) Paper page: 13 "use the --maxfail option to specify how many failures are okay with you." If I specify --maxfail=1 it stops at the first failure (just like -x). That doesn't mean that I am "okay with 1 failure". I am okay with 0 failures and I want to exit after 1 failure.--Miro - Reported in: P1.0 (19-Dec-17) PDF page: 13 The section heading for maxfail only has one dash, should be two: this "–maxfail=num" should be: "--maxfail=num"--Jamie Czuy - Reported in: P1.0 (21-Sep-17) PDF page: 15 In the first paragraph after the example, filenames "test\_two.py" and "test\_one.py" shouldn't have backslashes in them.--Tim Bell - Reported in: P1.0 (13-Sep-17) PDF page: 24-25 There are two related issues, one of which is an outright error. Note that I am using the sample files downloaded today from the tar file. 1. The book reads "The test file, tests/test_task.py, contains the tests". In my sample files, the file is located in ch2/tasks_proj/tests/unit/test_task.py. Or tests/unit/test_task.py if you prefer (though see issue #2). You left out (as a minimum) the unit/ subdirectory. 2. This section (pages 24-25 in the beginning of chapter 2) is incredibly confusing since you never provide context for which subdirectory this source code is from, AND you switch with no warning from one directory to a completely different directory on page 25. You never say what directory the directory structure found on page 24 is in, and an assumption of it being in code/ch2/ is of course wrong (it is actually in code/tasks_proj). But then, with no warning, you talk about the tests/test_task.py (which is a typo, see issue #1), which is not in code/tasks_proj but rather in code/ch2/tasks_proj/tests/unit. Combine this lack (and sudden change) of context with the fact that there is no tests/test_task.py, and the result is, shall we say, a bit frustrating. Note: This is on pages labeled 24 & 25 in the PDF, not the 24th page in the PDF.--Todd Roberts - Reported in: P1.0 (11-Feb-18) Paper page: 25 On p.20, you suggest that we "humor me and learn enough about [virtual environments] to create one for trying out things in this book". Great suggestion! (BTW, I had trouble with venv on MacOS but virtualenv worked fine, so your recommendation in that regard is good!) Just five pages later, you start walking us through the installation of /tasks_proj/ with pip. Following your earlier suggestion, I of course created a venv and installed everything there. But then when I got to `cd /path/to/code/ch2/tasks_proj/tests/unit``, no tests were found. After awhile, I figured out that I needed to install pytest in the venv too, which worked, but it was confusing because until I did so, pytest ran (seemingly correctly) and simply reported finding no tests. Some hand-holding in the form of walking through the installation _into a venv_ would be helpful here.--Tom Baker - Reported in: P1.0 (10-May-18) PDF page: 25 "Installing a Package Locally The test file, tests/test_task.py, contains ..." The partial path should be 'tests/unit/test_task.py' as per the header comment to the fisrt code snippet on P26--Mark Thornber - Reported in: P1.0 (06-Mar-18) PDF page: 26 Final code snippet on this page has two "pip install" commands. The first says pip install ./tasks_proj/ The second says pip install --no-cache-dir ./tasks_proj I believe any one of the commands in isolation is sufficient. If both are run consecutively, then the last command will simply echo "Requirement already satisfied" for all the packages that need to be installed. I suspect the final command was the one the author wanted, but am unsure. --Terry Bates - Reported in: B5.0 (04-Sep-17) PDF page: 30 def list_tasks(owner=None): # type: (str|None) -> list of Task would be better as either: def list_tasks(owner=None): # type: (str|None) -> list of Tasks -or- def list_tasks(owner=None): # type: (str|None) -> list of Task objects --Charles Coggins - Reported in: P1.0 (15-Sep-17) PDF page: 30 It would be good to cover import pytest with pytest.raises(SystemExit) as ...: ... This is needed if the program you are testing is a (command-line) script that calls sys.exit(). It took me too long to figure this out.--Tom Verhoeff - Reported in: B5.0 (04-Sep-17) PDF page: 31 Change @pytest.mark.smoke def test_list_raises(): """list() should raise an exception with wrong type param.""" with pytest.raises(TypeError): tasks.list_tasks(owner=123) to @pytest.mark.smoke def test_list_tasks_raises(): """list_tasks() should raise an exception with wrong type param.""" with pytest.raises(TypeError): tasks.list_tasks(owner=123) --Charles E Coggins - Reported in: P1.0 (16-Sep-17) PDF page: 31 """ To add a smoke test suite to the Tasks project, we can add @mark.pytest.smoke to some of the tests. Let’s add it to a couple of tests in test_api_exceptions.py (note that the markers smoke and get aren’t built into pytest; I just made them up): """ @mark.pytest.smoke should be @pytest.mark.smoke--Chien-Ming Lai - Reported in: B5.0 (04-Sep-17) PDF page: 32 If the page 31 typo/suggestion is taken (to change 'test_list_raises' to 'test_list_tasks_raises'), then the two references on this page should also be changed: test_api_exceptions.py::test_list_raises PASSED -to- test_api_exceptions.py::test_list_tasks_raises PASSED --Charles E Coggins - Reported in: P1.0 (08-May-18) PDF page: 44 Paper page: 29 The test file, tests/test_task.py, contains the tests we worked on in Running pytest, in files test_three.py and test_four.py. The correct path should be tests/unit/test_task.py--genico - Reported in: B5.0 (04-Sep-17) PDF page: 46 """We can use the same data or multiple tests.""" should be """We can use the same data for multiple tests.""" --Charles Coggins - Reported in: P1.0 (08-Oct-17) PDF page: 47 "To add a smoke test suite to the Tasks project, we can add @mark.pytest.smoke to some of the tests" The above should say @pytest.mark.smoke. - Reported in: P1.0 (31-Oct-17) PDF page: 51 Next, let’s rework some our tests for tasks_proj to properly use fixtures. It should be: [...] some OF our tests [...]--Karol Babioch - Reported in: B6.0 (08-Sep-17) PDF page: 61 Change "usefixtures takes a string that is composed of a comma-separated list of fixtures to use." to "usefixtures takes a comma separated list of strings representing fixture names." --Charles Coggins - Reported in: B6.0 (08-Sep-17) PDF page: 63 The sentence: "Here, lue is now the fixture name, instead of fixture_with_a_name_much_longer_than_lue." should be changed to: "Here, lue is now the fixture name, instead of ultimate_answer_to_life_the_universe_and_everything." --Charles E Coggins - Reported in: B6.0 (08-Sep-17) PDF page: 63 The test run output is incorrect. The line: "test_rename_fixture.py::test_everything_2 (fixtures used: lue)." should be changed to: "test_rename_fixture.py::test_everything (fixtures used: lue)." --Charles Coggins - Reported in: B6.0 (08-Sep-17) PDF page: 74 Consider changing the variable name in this line: "file = tmpdir_factory.mktemp('data').join('author_file.json')" You could use 'file_' or anything other than 'file' since that is a built-in function in Python2. I know the test code was written for Python3, but some people might still run it with Python2 and this syntax sticks out as a bad practice. --Charles Coggins - Reported in: P1.0 (07-Jan-18) PDF page: 77 At the beginning of the 'using cache' section the phrase: We want to make sure 'order' dependencies Should be 'other'--Gabriele Bonetti - Reported in: P1.0 (10-Oct-17) PDF page: 80 At: "You can pass in --clear-cache to clear the cache before the session." The right argument is "--cache-clear".--Paulo Fernando Cruz Romeira - Reported in: P1.0 (12-Feb-18) Paper page: 84 In ch4/cap/test_capsys.py at the bottom of the page, the following line causes an error: > print('YIKES! {}'.format(problem), file=sys.stderr) This is what pytest says: > E File "../code/ch4/cap/test_capsys.py", line 24 > E print('YIKES! {}'.format(problem), file=sys.stderr) > E ^ > E SyntaxError: invalid syntax I get no errors, when I insert two "=": > print('YIKES! {}'.format(problem), file==sys.stderr) But I'm not sure this is what was intended.--gregor - Reported in: P1.0 (21-Sep-17) Paper page: 88 "This little function uses the regular expression module function re.sub to replace ~ with our new temporary directory." - code (book and src) uses replace though: lambda x: x.replace('~', str(fake_home_dir))--Bob - Reported in: P1.0 (12-Feb-18) Paper page: 89 In "Using doctest namespace" the doctest tests for divide look like this: >>> um.divide(10, 5) 2.0 (twice in each file!) Pytest throws errors, for it gets a 2 instead of a 2.0.--gregor - Reported in: P1.0 (16-Jul-18) PDF page: 92-93 There is no example of the pytest invocation for the test, to show the warning. I've tried the usual, but nothing emits the warning. I've been following the examples and most of the code behaves as expected (great job!). For example: [root@localhost ch4]# pytest test_warnings.py ================================================== test session starts ================================================== platform linux2 -- Python 2.7.5, pytest-3.6.3, py-1.5.4, pluggy-0.6.0 rootdir: /root/ptwpt/code/ch4, inifile: collected 2 items test_warnings.py .F [100%] ======================================================= FAILURES ======================================================== _________________________________________________ test_lame_function_2 __________________________________________________ def test_lame_function_2(): with pytest.warns(None) as warning_list: lame_function() > assert len(warning_list) == 1 E assert 0 == 1 E + where 0 = len(WarningsChecker(record=True)) test_warnings.py:22: AssertionError ========================================== 1 failed, 1 passed in 0.01 seconds =========================================== [root@localhost ch4]# --Kirk Franks - Reported in: P1.0 (27-Dec-17) Paper page: 101 In the writing your own pkugins section there are a couple of references to printing the username of the tester in the output header. From the code and tests, it look like this feature was dropped but there are a couple of orphaned references to it: 101 "to just make sure the username shows up:" 104 ("Features" section of README) "Includes user name of person running the tests in pytest output." References--Laurence - Reported in: P1.0 (20-Feb-18) Paper page: 114 In the book on pg. 114, the code snippets for pytest.ini, tox.ini and setup.cfg contain a line like this: > ... more options ... These lines are also included in the code, meaning in the files (pytest.ini, tox.ini and setup.cfg). If you do "pytest --help" in the ch6/format/ folder, you'll get an error. If you put a comment ";" in front of "... more options...", the command "pytest --help", when run in this particular folder, will run again.--gregor - Reported in: P1.0 (19-Oct-17) PDF page: 134 The first sentence says: "Let’s pause and install version 3 of Tasks:" but then the following code snippet has "$ pip install -e ch7/tasks_proj_v2" -- "version 3" should presumably be "version 2". --Tim Bell - Reported in: P1.0 (19-Oct-17) PDF page: 135 In the text (in various places on this page), the function "tasks_db()" is mentioned. However, the code defines "_tasks_db()": the text is missing the leading underscore in the function name. --Tim Bell - Reported in: P1.0 (19-Oct-17) PDF page: 136 The inline code in the second-last line on the page has 'tasks_db' instead of '_tasks_db': the leading underscore is missing. --Tim Bell
https://pragprog.com/titles/bopytest/errata/
CC-MAIN-2018-34
refinedweb
1,956
76.62