FREE Every Wednesday - READ BY 60,000 PEOPLE EVERY WEEK - June 10, 2020 - No. 18

NEW RULES FOR 'NEW NORMAL'

The government has approved the measures which will cover public health safety when the state of emergency comes to an end on June 21. The Royal Decree was passed at a Cabinet meeting yesterday (Tuesday) as The Weekly Post went to press. The government will now seek the green light for the rules in Parliament. Commenting on the regulations, which are designed to prevent new outbreaks of coronavirus, government spokeswoman María Jesús Montero said: "We cannot think that the whole episode is over and we are now safe. We cannot take things lightly." In a press conference following the Cabinet meeting, she outlined the measures, which include an extension of the current obligation to wear a face mask in public places. Citizens will have to wear a mask when they cannot guarantee keeping at least 1.5 metres away from other people. Anyone breaking the rules could face a fine of up to €100. A full report on the new measures which will be brought in after June 21 will appear in Friday's Costa Blanca News.

Fierce storms hit the province

Intense storms hit Alicante province and the Murcia region on Monday afternoon, causing localised flooding in a number of towns in the area, including La Cala Finestrat and the streets of Orihuela. As well as torrential rain, the electrical storms also brought hail which damaged crops. Read a full report on the effects of the cloudbursts in Friday's Costa Blanca News.

Costa register service back on

The register and 'padrón' office in Orihuela Costa has been reopened. A council spokesman explained that a prior appointment is needed to use it.
For enquiries concerning the digital ACCV certificate, residents are asked to call 96 607 61 18; the service is available on Mondays and Wednesdays. Those needing to register as residents or get a copy of their 'padrón' certificate should call 96 607 61 50 or send an email to [email protected].

Green light for new Albir access road

By Irena Bodnarec

At long last, work on the new road and roundabout connecting the N332 with the CV-753 Albir to Benidorm road can commence, as the contract for the works has been signed. The winning bid for the €832,991 contract came from PAVASAL Empresa Constructora S.A. and the works will be completed in six months. It was a long process to get to this point, starting with acquiring the 9,407 m2 of land, which belonged to five different plots and cost the council only €7,000. The new roundabout will be in front of the Magic Robin Hood resort, and the road, starting from the roundabout coming out of Alfaz, will have two lanes of four metres each in addition to a footpath on either side. Once completed, it is estimated that it will reduce traffic on the Cami del Mar and Avenida de l'Albir road at the McDonalds intersection by around 12%.

A game to remember!

By Irena Bodnarec

Last Sunday a group of young men embarked on what is probably the costliest game of football they are ever likely to play. Since lockdown began back in mid-March, all council parks, playgrounds and sporting venues have been closed. Where it was not possible to physically lock a gate to stop people entering, such as at children's climbing frames and swings, police tape has been placed to indicate that they are out of bounds. On the Bello Horizonte 3 urbanisation in La Nucía, the blue football pitch regularly attracts teams of gents for a casual kick around at the weekend, but up to now it had been empty. Obviously not able to resist, despite the pitch still being taped off, a group arrived on Sunday afternoon, but their game was cut short when the local police arrived.
All were asked for their DNI photo identification, which by law all Spanish nationals must carry on them, and all were then issued with fines.

Depósito Legal: A 64-2020 - ISSN 2695-7418. Edited by Rotativos del Mediterráneo, S.L., Finestrat (Alicante). Printed by Servicios de Impresión de Levante, S.A., Biar (Alicante). Distributed by Self Select Media S.L., Tel 609 643 577.
HEAD OFFICE: Calle Alicante 39, Pol. Ind. La Cala, 03509 Finestrat, [email protected] - Tel: 617 369 005 - Fax: 965 858 361
BENIJÓFAR OFFICE: C/ Vicente Blasco Ibáñez, 36, Tel: 902 702 402, Fax: 966 712 170
TORREVIEJA OFFICE: Rambla Juan Mateo 26, Tel: 617 369 441, Fax: 965 715 759
Photos: Editorial, dpa. Editorial team: James Parkes, Jasmine Rokadia, Dave Jones, Alex Watkins, Irena Bodnarec, Joyce French. Contact: [email protected]. Advertising: [email protected], Tel: 617 369 005, Fax: 966 830 453
PLACE YOUR SMALL ADS AT OUR DESKS:
ELS POBLETS: SERVICE CENTER, Partida Barranquets CI 29, Tel: 966 47 50 50
DÉNIA: SERVIBOX, Patricio Ferrándiz 40, Tel: 966 426 017, Fax: 966 426 968
JAVEA ARENAL: QUICKSAVE ARENAL, CC Arenal, local D, Avda. del Pla 126, Tel: 965 794 648
JALÓN: JAYS OF JALÓN, C/ Lepanto 8, Tel: 966 480 757, 636 100 873
ALTEA: LEOS SOFT FURNISHINGS SUPERSTORE, Ctra. N332, The Big Red Shop, Tel: 965 844 848
LA MARINA-SAN FULGENCIO: FALKEN TOURS, Avda. de los Fueros 2 (local 12), Tel: 966 730 151, Fax: 966 773 840
LA ZENIA: FALKEN TOURS, Avda. de la Playa 1, local 5, Tel: 966 730 151, Fax: 966 773 840

Terrace tax postponed

By Nuria Pérez

Bar and restaurant owners in Orihuela municipality will not have to pay terrace tax to the council until the end of the year, according to councillor for taxes Rafael Almagro. Sr Almagro pointed out that bar owners who have already paid the fee will be given a refund for the instalment from March 13 to May 11, when Alicante province entered phase one of the exit plan.
The councillor stressed that a terrace licence is still compulsory and all bar owners have to apply for one if they don't already have it. A similar measure has been launched, from July to December, by Pilar de la Horadada council. In this municipality bar and restaurant owners can apply to extend terraces or to use part of a public area, such as a car park or a street, to install a terrace. Interested owners are asked to send an email to [email protected] or contact the local department for commerce on 692 956 613.

Beach ban broken - but distance kept

By Irena Bodnarec

Despite not officially opening until today, June 10, Albir beach had a number of visitors last Sunday afternoon. The beautifully sunny weather obviously couldn't keep them away, but as you can see, all were very sensible and kept well spaced apart along the shoreline without anyone instructing them to adhere to social distancing. From today the beach is officially open and divided into four parts, with the sunbed hire sections controlled by the concession company and the other two sections by the town hall. The privately run children's entertainment play area, which opens in the Frax car park every summer, is currently in the process of setting up and will no doubt be ready for business when restrictions allow.

Lost and found

An elderly man with Alzheimer's disease who was wandering around Alicante city has been rescued by the National Police. A witness reported having seen him looking disorientated and directionless in the early hours of the morning, so a patrol went looking for him. When they eventually tracked him down, they realised he was lost and unable to respond coherently to their questions. The officers then tried to establish where he lived, since he was not carrying any documentation, and found that he was a resident of a care home.
Having taken him to the police station, they managed to contact one of the man's sons, and within minutes his granddaughter arrived, followed closely by the son. They both thanked the officers, who finally contacted the manager of the care home to find out what had happened. He told them the man had escaped and they had been looking for him for several hours.

Lightning sparks fire

By Edward Graham

Lightning was blamed for starting a forest fire on the side of the Montgó mountain on Sunday afternoon as electrical storms swept across the Marina Alta. The violent thunderstorms brought torrential rain, but woodland on Les Planes ignited after the lightning strike. Ground forces, including volunteers from Jávea's civil protection - who put photographs of the scene on social media - quickly brought the flames under control and extinguished the threat.

Baywatch back in Moraira

By Jo Pugh

Lifeguards finally returned to the beaches of Teulada Moraira on Monday. The service will be active until September 30 and will be strengthened with more staff from July 13 to August 23, as these are the busiest times. The lifeguard and rescue boat hours will be from 11am to 7pm, and from July 13 to August 23 from 10am to 8pm. From August 24, and until the end of the season, the hours will return to between 11am and 7pm. During the summer season the council will keep the toilets on Platgetes, Ampolla and Portet beaches open. These facilities will be cleaned and disinfected three times a day, and beach users are asked to act responsibly when using them. Foot-washing services will not be available, nor is their use recommended, to avoid a possible focus of contagion from using the taps. Taking advantage of this circumstance, the Ministry of Tourism will proceed to change them for new units in the next week, but they will remain out of service until the state of emergency passes. All swimmers are recommended to follow the recommendations for the safe use of beaches and the plan for safe mobility, with different entrances and exits; the use of footwear on the wooden platforms for access to the beach is mandatory as an additional safety measure. The installation of specific signage at the beach entrances, to be put in place in the coming days, is also being finalised.

CITRUS SWINDLE

By Alex Watkins

A couple who allegedly took delivery of over 75,000 kilograms of citrus fruits but did not pay their suppliers have been arrested for fraud by the Guardia Civil in Cox. The suspects are residents of Blanca (Murcia) and had their business registered there, but used a warehouse in the Vega Baja town to carry out the scam, explained a spokesman for the force. Two separate victims presented accusations to the Guardia Civil in Callosa de Segura and Jacarilla, and the Torrevieja-based rural crime squad (ROCA) led the investigation. They had sold 42,000kg of Verna lemons for about €11,000 and 32,250kg of Navel Powell oranges for €6,020. The payments were supposed to be made in instalments by post-dated cheques after delivery, which were drawn on various bank accounts, none of which cleared. Officers discovered that the warehouse in Cox had no signs to identify the business and nobody appeared to be using it. The suspects would allegedly pretend to represent a solvent company, buy the products at low prices and then sell them on quickly for cash, before disappearing - either cutting off communication with the victim or making the excuse that the company had closed down. The couple, who are Spanish and aged 37 and 45, have been released by an investigating court until their trial.
UK SELF-ISOLATION RULE HITS COSTA BOOKINGS

By James Parkes

Costa Blanca bookings are already suffering the impact of the new 14-day self-isolation obligation imposed on all travellers arriving in the UK since Monday. Hotels in Benidorm had already received thousands of bookings for holidays from July 1, but the UK quarantine rule has thwarted many British holidaymakers' plans, as the self-isolation period on returning to the UK effectively adds a fortnight to any holiday abroad, making the majority of them inviable. As a consequence, Jet2.com has announced it is postponing the relaunch of its flights from July 1 to July 15. Meanwhile leading low-cost carrier Ryanair has slammed the new rule, warning 'it will cost thousands of jobs.' Although the Costa could begin welcoming domestic customers before the end of this month, many resorts such as Benidorm rely greatly on the UK market. Alicante-Elche airport, meanwhile, remains almost empty. See the full report in Costa Blanca News on Friday.

Fiestas off in San Pedro

San Pedro del Pinatar council has cancelled all events for the patron saint fiestas in June and the Virgen del Carmen pilgrimage on July 16. Mayoress Visitación Martínez noted that the decision had been made jointly with the cultural and religious associations organising and participating in the events. She said that Mass services are to be held in honour of Saint Peter and the Virgen del Carmen - and stressed that seats will be limited to guarantee social distancing. The cancellation of the fiestas is to protect public health, she noted.

Some Mar Menor beaches open for swimmers

By Alex Watkins

The Los Alcázares beaches - Espejo, Carril de Las Palmeras and Los Narejos - were opened to bathers again starting from Friday (June 5). The council had opened all beaches for sunbathing, but swimming was banned due to repair and cleaning works in areas damaged by the September floods and Storm Gloria in January. The council reported that the beaches of Manzanares, Carrión and La Concha remain closed to swimmers.

CASH FOR COASTAL ROADS

Nearly €750,000 is being invested by the regional government in repairing eight roads on Murcia's coastline. The roads had been seriously damaged by the September floods and winter storms. The money is being spent on the RM-F42, which links the motorway to La Manga with Portmán; the road between Los Belones and Atamaría; the RM-F28 from La Puebla to Pozo Aledo; the RM-311 from Los Beatos to El Albujón; the RM-F24 from Los Belones to Los Alcázares (RM-F54); the RM-320 from Escombreras to Portmán; and the RM-E22 from Canteras to Mazarrón.

Migration highway

By Edward Graham

The southbound carriageway of an aquatic highway used by fin whales on their annual migration has been particularly busy off the Marina Alta coastline this year. Whale watchers have spotted 30 of the cetaceans (Balaenoptera physalus) in 14 separate sightings in the last two weeks off Cap Sant Antoni, the peninsula dividing Dénia and neighbouring Jávea.
Fin whales - known as the 'greyhound of the sea' because of their streamlined shape - are the second largest animal on the planet after the blue whale and can grow to 27 metres in length, although the Mediterranean mammals are smaller and typically measure a mere 20 metres long. The journey south takes place in May and June from the Ligurian Sea, and the whales come closest inshore off the point; the sightings are monitored by the Dénia coastguard at Las Rotes, and by scientists from the University of Valencia and the ministry of agriculture, rural development, climate emergency and ecological transition.

CANCER CARE JÁVEA

Cancer Care Jávea's main shop will be opening on Monday, June 15, and will be open on Monday, Tuesday, Thursday and Saturday, 10.00 - 1.00. Both shops have now been redecorated. The charity endeavours to raise funds and distribute them where most needed, delivering support and care to those affected by cancer.

Guardamar attracts visitors in Alicante province

Mobility remains restricted to Alicante province for the duration of phase two of the Covid-19 exit plan.
For this reason Guardamar has launched an ambitious advertising campaign so that residents - including Spaniards and those of international origin - pay a visit to this beautiful destination, known for having the largest coastal dune forest on the Spanish Mediterranean, at 750 hectares (1,853 acres), with 11km of fine sandy beaches. As such, our neighbours around the province will have been able to see the 30 large billboards that have been positioned in Villena, Elda, Petrer, Novelda, Monforte, El Campello, Sant Joan, Mutxamel, Alicante, Elche, Crevillent, Albatera, Callosa de Segura, Orihuela and on the N-332 where it enters Guardamar. Rather than an advertising slogan, it was decided to use vocabulary that defines what we have to offer: beaches, nature, shopping and cuisine, as well as the rich archaeological and historical heritage of this municipality by the mouth of one of Spain's most emblematic rivers: the Segura. We are convinced that many of our readers - not just those from the Vega Baja del Segura - will have visited this unique municipality, where its respect for culture and local traditions are its great attraction. This is why, our friends residing on the Costa Blanca, it is well worth returning to rediscover Guardamar.

Mohican cut for charity

AVAT (Asociación de Voluntarios de Animales de Torrevieja) Animal Charity - A massive thank you to everyone that supported the Mohican cut charity day on Saturday, June 7 at the Mi-Sol Bar with Lisa and Lesley. What a fabulous day it was. Over €2,000 was raised, with more still coming in. AVAT is committed to helping both the Albergue animal shelter in Torrevieja and any other animal that is in need within the area. During lockdown the Albergue obviously didn't receive any donations, which meant the pot was getting empty. Some people think that because it is a council-run shelter the council provides everything. Wrong! The council provides basic vet treatments, but some of the more complicated and advanced treatments have to be met by ourselves. Also, any expensive medicines we have to buy ourselves. They provide only dry food, no wet food at all, so we have to buy that, including wet kitten food, formula milk and feeding bottles. We also have to supply litter trays, dishes and beds, and the same with the dogs... any improvements are paid for by ourselves. Thanks need to go, in particular, to the Scandinavian and British communities for their very generous donations. An even bigger thank you to Vivienne Woolfitt, who had a grade 1 haircut and was sprayed pink; she raised €155. And of course, the girls from Charlie's Salon, La Siesta. For further details email [email protected]

Lions open Den

By Edward Graham

The 'Lions Den' charity shop in Moraira opens its doors for business on Monday, June 15 - with coronavirus safety measures to protect staff and shoppers. Teulada Moraira Lions Club's charity shop is between Letters R Us and the Pepe La Sal supermarket in the Centro Commercial Moraira, adjacent to the fountain roundabout on the outskirts of the village on the Teulada road. Shopping at the Lions Den helps fund the group's support of good causes, and there is an extensive range of summer clothes, books, DVDs and bric-a-brac on sale. Fundraising has been hit by the Covid-19 pandemic, and people who wish to donate can do so via a GoFundMe page, teulada-moraira-lions, which can also be accessed through the Facebook page and the website - which also outlines how to join the club and all its activities.

Charity needs a helping hand

The current crisis has greatly damaged the unique facility that Reach Out offers to the homeless and needy families in Torrevieja. Hacienda has unfortunately seized any debts we had to them, and rather than defer the payments, as recommended by the government, decided to strike at our jugular and take lump sums from our ever-depleting bank account.
NO INCOME for three months means we are near to bankruptcy. We have stocks of food provided by generous donors, our own funds and the EU Foodbank, and estimate we have enough stock to provide for our 45 families until late September. What we now need is hard CASH: monetary donations to help pay our social workers' wages, insurances, rent, electricity, security and water bills. Our Shop 2 has not been reopened due to volunteer fears; they are all in a vulnerable group and we can understand their concerns. Shop 1 has reopened, but due to restrictions our income is still not what we were earning prior to lockdown, and it will take some time to emerge from the crisis. Our furniture department has not been able to thrive due to a lack of drivers - again, volunteers in a vulnerable group. We would appreciate funding from businesses who would be willing to offer an annual tax-deductible bursary, or a monthly sum, which again can be tax deductible. Every little helps!! Volunteers are always required and appreciated; Shop 2 needs volunteers. This appeal is a last resort to try to ensure we continue to meet our mission statement in the future, as we hopefully come to the end of the crisis, and that our services remain of the highest standard. Our bank account details are shown on our website. Donations can be made to our Santander bank account - account name: Extiende La Mano, IBAN: ES21 0049 3613 91 2314017223. Please leave your name as a reference. Thank you. Tax certificates can be issued if requested; we will contact you for more information. We have Go Fund Me account details in the signature block.

Having a clear out?

By Irena Bodnarec

The lockdown certainly made many have a long-overdue clear out - with plenty of time on everyone's hands, it was at least time productively spent. The MABS Cancer Support Charity Shop in Alfaz del Pi will be more than appreciative of your items, from clothing to bric-a-brac and furniture. If it's clean and in a saleable condition, bring it down to the shop on Avenida Pais Valencia - the main High Street, next to the Don Dino toyshop. Alternatively, if you have larger items of furniture, then either call or WhatsApp us on 634 336 200 to arrange free collection. All the money we raise at the MABS charity shops is used to directly help those suffering with cancer. Please help us to help them.

Jávea International Baptist Church

JIBC believes God longs to reach out to everyone. If you are being challenged during these unprecedented times, and are seeking to know more about God, please access our virtual services by visiting our website and pressing the 'view' button for the latest addition. If you wish to listen to others available on the website, open the 'sermon' button in the task bar and a list will appear. Should you wish to know more about the church here in Jávea, you may leave a message and someone will get back to you as soon as possible. The church is located at Carrer Favara, 8, Pueblo de Jávea. The Leadership Team, JIBC.

Lockdown donations

The Charity Shop of Calpe has been closed for 12 weeks, but during this time we have been able to support the local community when it has been most in need. We donated €4,000 worth of vouchers for fresh food, split evenly between Cáritas and the Cruz Roja Calpe. The shop is unlikely to reopen until September, partly for safety reasons but also due to a flood which has damaged the ceiling and ruined a lot of stock. We therefore urgently need donations of clothes, shoes, household items and small pieces of furniture. You can drop off these donations at the shop on Monday and Thursday mornings between 10am and 12pm. We are in Galerías Mar Azul on Gabriel Miró, Calpe - half way up on the right-hand side.

Jalón Valley Help back soon

Jalón Valley Help aims to be back in business on July 1, when its doors will open again.
The three charity shops in Jalón, Alcalalí and Orba will all have undergone a deep clean, and safety measures will be in place in order to protect the team of volunteers and customers. Income generated by the shops is vital to allow JVHelp to provide crucial services in the community. Help's president Elaine Horton is determined all the premises are as safe as possible; ozone machines will be installed to eliminate viruses and bacteria in the bid for a 'clean air environment'. In addition, volunteer staff will be provided with masks, gloves and sanitiser; customers will be asked to respect social distancing rules and wear their own masks, while disposable gloves and sanitiser will be provided. And prior to opening, all current stock will be removed and replaced with freshly sanitised items. Help is also looking for donations of stock for the shops, and the JVHelp van will be parked outside the shops this month to receive goods: Orba on Mondays, Alcalalí on Tuesdays and Jalón on Fridays. The items received will be stored and sanitised before appearing for sale.

Men - do you think you have low testosterone levels?

Testosterone is a steroid hormone that helps regulate sperm development, maintain muscle mass and boost energy. Both men and women have it, but men produce about ten times more of it than women. Low testosterone is sometimes known as androgen deficiency syndrome; androgen is the term for the male sex hormone, and testosterone is the main sex hormone for men. The bottom of a man's normal total testosterone range is about 300 nanograms per deciliter (ng/dL); the upper limits are 1,000 to 1,200 ng/dL. A decrease in a man's testosterone level is a natural function of aging, similar to the decline of the female hormone estrogen in women.
For each year over age 30, the level of testosterone in men starts to slowly dip at a rate of around 1 percent annually. This decline in testosterone and the symptoms it causes have sometimes been referred to as "male menopause." However, at any age some men have a lower than expected level of testosterone, which can be due to a number of reasons:
■ Hormonal disorders
■ Injury to the testicles
■ Testicular cancer or treatment for testicular cancer
■ Chronic kidney or liver disease
■ Infection
■ HIV/AIDS
■ Type 2 diabetes
■ Obesity
■ Some medications
■ Some genetic disorders

Signs and symptoms of low testosterone include:
■ Decreased sex drive
■ Erectile dysfunction
■ Reduced energy level
■ Difficulty in concentrating
■ Reduced strength and endurance levels
■ Sleep problems
■ Increased breast size and tenderness
■ Decrease in the amount of body hair
■ Decreased penis or testicle size
■ Loss of muscle mass
■ Emotional problems including sadness, irritability and depression

Note that some men with a low testosterone level have absolutely no symptoms at all.

Diagnosis: A diagnosis of low testosterone includes taking a complete medical history and having a physical exam, as well as blood tests to confirm the diagnosis. Your doctor will want to make sure your low testosterone is not caused by medications you are taking or by a disease or condition that needs to be treated.

Treatment: Even if you have no symptoms, treatment may be advised, as low testosterone scores often lead to drops in bone density, meaning that bones become more fragile and increasingly prone to breaks. If a young man's low testosterone is a problem for a couple trying to get pregnant, testosterone injections are probably the best option. Given every few weeks, the injections stimulate sperm production and motility. When fertility is not an issue, the ideal testosterone delivery method is a daily gel or patch. Because they are applied on a regular and frequent basis, these treatments keep a man's testosterone at a steady level and keep his symptoms at bay. If you are using testosterone gel on your skin, be careful not to expose other people to the gel. There is also a relatively new method of treatment whereby several pellets are placed under the skin of the buttocks, which release testosterone over the course of about three to four months.

Testosterone treatment is not without risk

Testosterone treatment is not without possible side effects. It may raise the red blood cell count and can enlarge breast tissue; therefore, men with any history of breast cancer should not have testosterone treatment. Testosterone can also accelerate prostate growth, so treatment is usually not advised for men with prostate cancer, although medical studies are currently looking into this association. Recent studies suggest a link between testosterone therapy and an increased risk of heart disease. A 2010 trial of testosterone in older men was stopped early because the men receiving testosterone therapy had a higher frequency of heart problems than the men receiving the placebo. A 2013 study found a higher frequency of death and heart problems in men who had coronary artery disease and received testosterone therapy. However, two recent studies have also reported a lower risk of death in men who were receiving testosterone than in those who were not. A 2014 study reported that testosterone therapy might increase the risk of a heart attack in men aged 65 and older, as well as in younger men who have a history of heart disease. Further research is needed to determine the safety of using testosterone therapy to treat older men dealing with age-related declines in testosterone. Currently, the Food and Drug Administration (FDA) is investigating the risk of stroke, heart attack and death in men taking FDA-approved testosterone products.
If you wonder whether testosterone therapy might be right for you, talk with your doctor about the risks and benefits. Testosterone treatment should never be self-prescribed (i.e. to improve your sex life). If you are taking testosterone, make sure your doctor is monitoring your response to treatment with regular blood tests. If you and your doctor decide that you should try medication to treat low testosterone, remember that a healthy lifestyle - including a healthy weight, not smoking, limited alcohol and regular exercise - is also important for managing testosterone levels.

Article supplied by the Family Medical Centre, Albir. Call 966 865 072 or email [email protected]. From June 1 the Family Medical Centre will resume its normal opening hours: Monday to Friday 09.00 - 17.00 (Thursday late night until 20.00), Saturday 10.00 - 13.00, Sunday closed. Covid-19 blood tests are now available at the Family Medical Centre, Albir, by appointment only. To make an appointment, telephone 966 865 072 or email [email protected]. The centre is at Avenida D'Albir, 66 (El Albir); if anybody has any doubts, please do not hesitate to call for advice.

Bath vs shower - which is better?

Whatever your personal preference, Liz Connor asks Dr Earim Chaudry to explain which bathing method is better for our skin health. The baths vs showers debate has long been a hotly argued topic in households across the country. While some people love the idea of starting the day with a long soak, others can't fathom the concept of sloshing around in a tub full of water.
Both showers and baths have some pretty brilliant health and wellbeing benefits, but which is actually better for you in the long term? We asked Dr Earim Chaudry, GP and medical director at Manual (manual.co), to weigh in on the topic...

"Baths are great for people with skin conditions such as eczema, but it's a myth that they're better; they're actually not quite as beneficial as showering.

"A shower is actually better for your skin, because it exposes the body to less water than a bath. Whether it's a bath or a long shower, though, exposing your skin to too much water can strip it of its natural oils. With frequent showering, the skin's surface can break down, leading to irritation and inflammation.

"It can also dehydrate your skin, wash away beneficial bacteria and increase the risk of infection. The skin does a pretty efficient job of cleaning itself, so you don't need to scrub yourself down all the time to stay hygienic.

"The ideal way to keep yourself clean is to take short, lukewarm showers and only use soap around the groin area, feet and armpits - basically anywhere that gives off odour after a particularly hot day or a workout.

"Your average bar of soap is designed to remove oils from the skin, so using that soap all over your body means you might be stripping your skin of some beneficial natural oils.

"The one benefit of baths, though, is that if you've had a particularly stressful day and need to wind down, they are incredibly beneficial to your wellbeing. Hot baths before bed can increase our body temperature at night, which helps synchronise our natural circadian rhythms, leading to better, deeper sleep.

"Decreases in stress hormones (like cortisol) have been reported with warm bathing too, as research has found that they may help the balance of the feel-good neurotransmitter serotonin.

"So, how often should you be showering? Well, there's a lot of conflicting advice out there.
"Many dermatologists would recommend a shower every other day, or two to three times a week.

"In warmer weather conditions, you might not feel comfortable with that frequency, so taking a shower every day certainly isn't the end of the world. And that's especially the case if you're highly active.

"If you want to try showering every other day, give yourself a sponge bath and wash your face, armpits and groin with a washcloth on your non-showering days. For optimal skin health, don't shower using hot water, and limit shower time to five to 10 minutes. This is not only good for you, but also beneficial to the environment."

Health & Lifestyle, June 10, 2020 - N° 18

Belching and flatulence

Belching and burping help to expel excess air that has built up in the upper digestive tract, accumulating in the oesophagus. Some of the causes are eating too fast, carbonated drinks, chewing gum, or smoking, and often addressing these issues can resolve the problem. Acid reflux or gastroesophageal reflux disease (GERD), however, can sometimes cause excessive belching and may be related to inflammation of the stomach lining (gastritis) or to an infection with Helicobacter pylori, a bacterium responsible for causing stomach ulcers. In these cases, the belching is accompanied by other symptoms, such as heartburn or abdominal pain.

Helicobacter pylori can be contracted by ingesting infected food or water, or through contact with bodily fluids from an infected person. It can live dormant in the digestive tract for many years, causing bleeding and ulcers in the stomach lining or small intestine. If left untreated, it can be very painful and can lead to stomach cancer. It is especially important to seek advice from your doctor, who can diagnose and treat the bacteria with a simple stool, blood or breath sample, as it will not go away on its own. Article supplied by Clinica Britannia.
Call: 965 837 553 or visit CLÍNICA BRITANNIA CLÍNICA BRITANNIA British Medical & Dental Centre - In Calpe since 1997 Need a Doctor at Your Hotel or Home? Do not hesitate to call us! We cover:- Villajoyosa • Finestrat • Altea • Benidorm • Alfaz del Pi, • El Albir • La Nucía • Benissa • Jalón and Calpe. NURSES GENERAL PRACTITIONERS DENTISTS MEDICAL SPECIALISTS AESTHETIC SPECIALISTS ALLIED HEALTH PROFESIONALS LABORATORY MEDICAL EQUIPMENT SURGEONS See all our services: Appointments Landlines: 965 837 553 / 965 837 851 Av. Ejércitos Españoles 16 BIS, 1st floor, Calpe OPENING TIMES: Mon - Fri: 9:00 am / 17:00 pm Clinica Britannia Calpe The ee Puzzles 13 June 10, 2020 - N° 18 Crossword Complete the puzzle using the Cryptic or Regular clues - the answers are the same. Cryptic clues Across 1 The newspaper items are poorly written (5) 4 Junkie did act strangely (6) 9 Relaxed ten line composition (7) 10/17 Standard reply when asked for soup ingredient (5,6) 11 It's nothing to see this bird on the cricket field (4) 12 One issue with odd ode in it (7) 13 Getting on in a cold climate (3) 14 Hit one's pals in return (4) 16 Urges one to get food (4) 18 Sapphic ode contains details of a fish (3) 20 I regain disputed African country (7) 21 Chinese dog food? 
(4) 24 Card game starts with hands in some turmoil (5) 25 A meadow over a North Yorkshire river (7) 26 Look upon with concern (6) 27 Pitchers from broken sewer (5) Down 1 Badly looted Spanish city (6) 2 Bloke in charge is overexcited (5) 3 She would rapidly cast off (4) 5 Broken tins indeed are intended (8) 6 Pressing in groin by mistake (7) 7 Decomposed skeleton without the Spanish vouchers (6) 8 Endless peg directed horse (5) 13 The telephonist is poorer at fashion (8) 15/18 Where a feller might get work (7,5) 17 See 10 Across 18 See 15 19 Good men digest small Scottish bonbons (6) 22 Hospital adjacent to Yorkshire river is the residence (5) 23 This small land mass misled no doctor (4) Regular clues Across 1 Periods (5) 4 Fanatic (6) 9 Compassionate (7) 10 Hoard (5) 11 Bend down (4) 12 Version (7) 13 Aged (3) 14 Smack (4) 16 Incites (4) 18 Edible fish (3) 20 African country (7) 21 Breed of dog (4) 24 Card game (5) 25 Meadow (7) 26 Look upon (6) 27 Jugs (5) Down 1 Spanish city (6) 2 Overexcited (5) 3 Shack (4) 5 Intended (8) 6 Pressing and smoothing (7) 7 Coupons (6) 8 Charger (5) 13 Machinist (8) 15 Taking down (7) 17 Reply (6) 18 Sites (5) 19 Desserts (6) 22 Dwelling (5) 23 Atoll (4) Spanish-English crossword Improve your vocabulary with our Spanish/English crossword. The clues are in Spanish and the answers are in English. 
Across 1 Capilla (6) 4 Costa (5) 8 Sobrina (5) 9 Aeropuerto (7) 10 Cuero (7) 11 Aquí (4) 12 Mar (3) 14 Vena (4) 15 Ratones (4) 18 Propina (3) 21 Poeta (4) 23 Ordenar (siguiendo un sistema) (7) 25 Equipaje (7) 26 Corteza (de pan) (5) 27 Semillas (5) 28 Estornudo (6) Down 1 Vela (de cera) (6) 2 Promedio (7) 3 Elefante (8) 4 Tarjeta (4) 5 Arriba (5) 6 Títulos (6) 7 Pelos (5) 13 Estadounidense (8) 16 Confundir (7) 17 Manzanas (6) 19 Páginas (5) 20 Escarabajo (6) 22 Águila (5) 24 Bolsas (4)

Puzzles, June 10, 2020 - N° 18

Trading Post, June 10, 2020 - N° 18

Architects & constructions: 100% CONSTRUCTIONS, new builds and reforms. 16 years operating on the Costa Blanca. 965 835 939.

Plumbing & electrical installations: 100% PLUMBING. For heating, aircon, gas, solar, bathroom refurb. 16 years operating on the Costa Blanca. plumbing.com 965 835 939

Alarms: ALARM - CCTV Instal., Maintenance, Repairs ✆ 600 993 667

Pools: UNDERWATER TILING SPECIALIST. Replace missing or broken tiles using underwater cement or concrete repair. 5 year guarantee. Replace lights, leaks... without draining your pool. Vicente 622 696 373

Painters: FRANCHISE OPPORTUNITY, all contracts supplied for spray-painting villas, guaranteed 46 weeks of the year, on-target earnings €50,000, long-established on the Costa Blanca with existing franchisees in place for references. Investment required to cover the vehicle, equipment and training. For more information please email [email protected]

Doors & windows: AWNINGS, MOSQUITO blinds, roller shutter repairs, motorisation. Calpe + 50kms. Tel. 659 464 992. Email. [email protected]

Removals & transport: MAN & VAN for hire, cheap and reliable. Jalon Valley & surrounding area. 636 100 873 or 966 480 757

Gardening: SPECIALIST TREE CUTTING & trimming, palm/pine tree + dangerous work. Gravel work & garden cleaning, also weekly maintenance. Free quote.

Items wanted
Good rates. Call Adrian 627 103 412

THE EASIEST WAY to book your classifieds in the CB News is to e-mail the text to [email protected] (South) or [email protected] (North)

Locksmiths: LOCKSMITH 24hr 607 493 118, Torrevieja & surrounding areas

WE BUY LUXURY WATCHES For Cash! Rolex, Cartier, Omega, A/P, Hublot etc. Instant Cash Payment! Tel: 658 670 777. Or email: [email protected]

Trading Post, Cars, June 10, 2020 - N° 18

Caravans, Mobile homes, Motorhomes

CARS R US. All cars wanted. We buy Spanish, English, all Euro cars: R.H.D., L.H.D., 4X4s, MPVs, Diesels & Vans, motor homes, classics, motorcycles. Cash paid. Scrap cars collected. No more unwanted bills or Suma fines. All p/work taken out of your name. [email protected] Keith 699 805 995 / 662 211 993 / UK 00 44 7470 894 772

SANCT BERNHARD Costa Blanca. Offers inc. Omega 3-6-9 for a cholesterol and joint-conscious diet. 180 caps €9.60 or buy 3+ for €9.00 each. 965 780 425

THE EASIEST WAY to get your classifieds in The Post is to e-mail the text to: [email protected]

FLEETWOOD ICON 24D American Class C Camper 2008. Low mileage, Spanish plates. Slide out, generator, etc. €29,000.00 Tel: 639 272 362 [email protected]

Nautical & accessories: INTERNATIONAL SKIPPER licence, radio and radar courses, starting shortly +34 626 245 098

Fashion & accessories

WANT TO DRIVE your LHD car to the UK & sell it when you get there? Call 658 670 777 or UK 07874 223 456

Sat & TV: SATELLITE SYSTEM REPAIR service, dish alignment, upgrades, installations. All problems solved.
965 666 206 PASSPORT – NIE/ RESIDENCIA -Authorities / Spanish health care /General Services Consulting without CCR [email protected] or call 649 222 679 BEST PRICES GIVEN for your Watch - Jewellery - Gold Rolex for sale or we buy or pawn - certified valuers any area - we collect and deliver to your home or visit us Calle los Arcos 17, Ciudad Quesada, 03170 brokers.com - info@spanishpawn brokers.com Call Paul Mentessi 602 506 390 General THE EASIEST WAY to book your classifieds in The Post is to email the text to [email protected] SANCT BERNHARD Calpe. Offers inc. FORMOFIT for natural weight control. 210 caps (1 month supply) €35 or buy 3+ for €32 each. 965 836 807 SANCT BERNHARD Torrevieja. Offers inc. VITAFIT Ginkgo-Magnesium. 400 tablets €20.50 or buy 3+ for €18.50 each. 966 706 765 Health SANCT BERNHARD Moraira. Offers inc. Green-Lipped Mussel for healthy joints. 150 caps €12.50 or buy 3+ for €11 each 966 491 573 THE EASIEST WAY to get classifieds in The Post is to e-mail the text to: [email protected] SANCT BERNHARD Benidorm. Offers inc. Pumpkin Seed Oil for a healthy prostrate and bladder.130 caps €7.95 or 400 caps €19.95. 965 858 663 Beauty & Wellness SANCT BERNHARD Denia. Offers inc. CoEnzym Q10 for a strong heart. 60 caps €18 or buy 3+ for €16.80 each. 965 780 425 Animal world ACCOMPANY YOUR PET(S) whilst they are driven in comfort to any destination in Europe by an experienced animal carer/driver. Contact Denise. info@petchauffeur .eu or Tel 952 197 187 696 233 848 PET CREMATORIUM In Memoriam Animalium . We are at your disposal 24 hours with pick-up and delivery of your pet from your home or veterinary clinic. If you prefer you can go directly by appointment. You can find us in La Marina, Industrial Polygono El Terol in Polop, c/Aphrodite Ship 2 . Tlf 626 739 938 ee THE EASIEST WAY to get your classifieds in The Weekly Post is to e-mail the text to [email protected] SPAMA GANDIA SHELTER Dog and cat rescue registered charity, La Safor area. 
500 animals awaiting rehoming. Visit our website ma.org and view our new blog at PLEASE HELP US TO HELP THEM! Friendship KIND, NICE LOOKING gentleman would like to have a regular nice sexy girlfriend to visit discreetly once or twice during the weekday, living in Moraira, Denia area. Will give you extra income and loving friendship. Please text me in confidence 683 142 216. very discreet. GENTLEMAN, 85, DRIVES a car and plays golf, wltm a nice lonesome Lady, same age, for interesting conversations and outdoors etc. Twosome instead of Lonesome, 661 639 598 or 615 765 784. Adult relax ELEGANT SLIM sexy naughty Lady Sophie, speaks 3 languages, priv. apartm. (in/out calls), escort service in ALL AREAS!! 693 357 526 VIAGRA / SILDENAFIL - 150 mg. CIALIS / Tadalafil - 40 mg. Pack of 10 pills for 22 Euros. Free delivery in Spain. [email protected]. HANNA 30, WONDERFUL busty venezuelan. All Fantasies. Reopening at Turquesa St. La Zenia (close to CC. Zenia-Boulevard). 641.294.104 SPANISH LADY, 37, Playa Flamenca Urb Zenia Mar by the new carrefour 5 min Torrevieja, sensual massage. Discreet. Private house parking Call Ana 657 603 495 HELLEN 30, NICE body, caring. Ebano skin. Reopening in La Zenia, house 77(close to Consum). 665.736.488 VALERIA IN DÉNIA, clean and discreet, erotic massage, relaxing, sex, outings and much more, call 686 094 328 DISCREET BEAUTIFUL Blonde big breasts, slim body, alone at home 622 696 693 Torrevieja LUNA 29, BRAZILIAN exmodel. Wonderful&Sweet. La Zenia, house 77 (close to Consum). 672.872.050 BEAUTIFUL CARIBBEAN CHOCOLATE lady Sharon 30 big natural breasts sexy body to body massage etc escort in/out good services pleasure with happy ending. Cabo Roig Orihuela Costa call or whatsapp 602 328 812 Trading Post 17 June 10, 2020 - N° 18 Lawyers & Solicitors NIE, RESIDENCIA, CAR IMPORT, holderchange, driving licence, last will / inheritance, assistance purchase / sale properties, insurances, Tel: 965 792 451, more under Lawyers. 
Tax advice and bookkeeping. All gestoria services. Experts in Brexit and residence permits. NIE numbers. FREE 1st consultation. Ask for Oliver: phone 606 051 000 or e-mail: [email protected]

Properties under 175.000€

INEXPENSIVE LIVING directly on the Black Sea, approx. 60 m2 + additional costs. Financing possible. Tel.: +34 626 245 098

BENIDORM, NEAR BULL RING, 1-bedroomed apartment, 2nd floor, lift, separate kitchen, glazed-in terrace. Pool, garden, tennis court. 75.000€ Tel. 658 578 346

LOVELY VILLAGE HOUSE completely renovated in Castell de Castells. 111 m2 consisting of: Grd floor: entrance, dining room + fireplace, fully equipped kitchen + breakfast bar, courtyard & storage. Small lounge + balcony overlooking the courtyard, bathroom & a spacious double room, wardrobe & balcony overlooking the main street. 2nd floor: small lounge, toilet, dble bedroom + wardrobe & balcony, & a spacious terrace overlooking the mountains. Wooden beams, parquet floor except in the kitchen & bathroom. Double-glazed windows & wooden internal woodwork. Original stone walls. Renovated with taste and using good quality materials. Negotiable price. Ref-933 100.000€ Telephone: 609 211 710 [email protected]

CHARMING VILLAGE HOUSE of over 190 m2 in Vall d'Ebo. Typical Spanish townhouse totally restored, keeping all its charm. Comprises: hallway, lounge + wood burner, fully fitted old-style kitchen, family bathroom & small courtyard. From the lounge, the staircase gives access to the 1st floor with a dble bedroom + ensuite toilet, a 2nd double bedroom & the master bedroom with window to the street. On the 2nd floor, 2 huge empty areas ideal to be converted into a big bedroom with en-suite bathroom, & the roof terrace with views to the mountains. Wooden windows, character floor tiles & wooden beams. An ideal property for mountain lovers due to the location of the village.
59.000€ Ref - 1094 Tel: 609 211 710 [email protected]

Properties from 175.000€

Long term let

SHORT/LONG TERM open plan attic flat, Polop old town. Suitable 1/2 people. 390€ per month inc electric/wifi. Tel 666 805 271

COMFORTABLE FURNISHED STUDIO apartment in a beautiful townhouse in the historic area of Oliva. WIFI, heating, aircon, roof terrace. Six months minimum, €250 pcm plus bills. No pets. Tel 628 303 443

Selective Advertising Pays!

BENIGEMBLA APARTMENT + COMMUNAL pool & terraces. West facing, on the 2nd floor, consists of: living/dining room with views to the communal pool & mountains, fully fitted American open kitchen, dble bedroom + built-in wardrobes, bathroom, master bedroom + fitted wardrobes. Dble-glazed windows, wooden doors, hot/cold aircon in lounge, roller blinds, satellite connection, private parking, a fantastic communal pool with nice terrace. This property is ideal as an investment to rent out during the summer or to use as a permanent home. Furniture included. 65.000€ Ref-1123 - Telephone: 609 211 710 [email protected]

TORREVIEJA PREMISES FOR SALE. Possibility to construct a five-storey building. Good investment (approx. 200 m2). Centrally located (Rambla Juan Mateo), 5 mins walk to the town centre. Also suitable as business premises, office, shop. 607 200 041

Puzzle Answers, June 10, 2020 - N° 18

Property, Home & Garden, June 10, 2020 - N° 18

Save our insects. By Hannah Stephenson, PA. Wild 'Rozanne' is a winner, with its open, mid-blue flowers, which go on from spring to autumn. … 'landing pads' like oxeye daisies. National Insect Week runs from June 22-28. For more information visit nationalinsectweek.co.uk.

Hurry up! Download the 'Club de los Disfrutones' app now: gifts will fall from the sky and you could win a fantastic trip plus €150 towards packing your bags. App Store / Play Store

WE ARE OPEN!! RESERVE YOUR TABLE INSIDE OR OUTSIDE. SUNDAY ROAST 1 - 3 PM, THE BEST IN THE AREA! 3 COURSES AND A DRINK ONLY €12,95. MAIN MENU ALSO AVAILABLE. BREAKFASTS SERVED FROM 10AM, LUNCHES FROM 12MD. OPEN 10 AM TO 3 PM WEDNESDAY TO SUNDAY AND 6-10 PM THURSDAY TO SATURDAY. Booking essential: call 658 599 742

The Weekly Post. Published on Jun 10, 2020.
https://issuu.com/rotativosdelmediterraneos.l./docs/wp_18_n
Substring Searching

paolo_veronelli: Amazing, I really enjoyed this and yes, it has functional solutions. Have fun!

sherub_thakur: (Obvious observation:) There is no editorial on this problem. I would really appreciate one, as I would like to see how to solve this problem in O(n + m) purely functionally; I just chickened out and used the ST monad.

gschoen: It took me a while to master the concept of the KMP algorithm, but once I did, it was fairly straightforward to code. The link provided didn't give a very clear explanation, but there are many references on Google/YouTube that did. No credit to HackerRank for providing a test case #0 that has no pattern strings with repetitions in them; one should not have to buy a test case to test the implementation of the pattern-matching array. I'm not sure that a program that modifies the elements of the pattern-matching array counts as "functional" programming. I could have buried this inside a function that generates the array to begin with and made it look purely functional to the outside world. I didn't bother with this deception.
(defun make-shifts (j i pattern shifts)
  (cond ((zerop i)
         (setf (aref shifts 0) 0)
         (make-shifts 0 1 pattern shifts))
        ((>= i (length pattern)) shifts)
        ((eq (char pattern i) (char pattern j))
         (setf (aref shifts i) (1+ (aref shifts j)))
         (make-shifts (1+ j) (1+ i) pattern shifts))
        (T (cond ((zerop j)
                  (setf (aref shifts i) 0)
                  (make-shifts 0 (1+ i) pattern shifts))
                 (T (make-shifts (aref shifts (1- j)) i pattern shifts))))))

(defun find-pattern (i j str patt shifts)
  (cond ((> i (- (length str) (length patt))) nil)
        ((>= j (length patt)) T)
        ((eq (char str (+ i j)) (char patt j))
         (find-pattern i (1+ j) str patt shifts))
        ((zerop j) (find-pattern (1+ i) j str patt shifts))
        (T (find-pattern (+ i (- j (aref shifts (1- j))))
                         (aref shifts (1- j)) str patt shifts))))

(defun cases (n)
  (cond ((zerop n) nil)
        (T (let* ((s (read-line))
                  (p (read-line))
                  (sh (make-shifts 0 0 p (make-array (length p)))))
             (if (find-pattern 0 0 s p sh)
                 (format t "YES~%")
                 (format t "NO~%")))
           (cases (1- n)))))

(cases (read))

Leeeeeee: This is my solution, but test case 5 comes back wrong. I downloaded the test case file and my result looks right, so I don't know why it fails.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import Control.Monad
import qualified Data.ByteString as B
import qualified Data.ByteString.Char8 as S8
import Debug.Trace

issub x y
  | S8.null x = False
  | S8.null y = True
  | S8.head x == S8.head y = if flag then True else issub (S8.drop sameLeng x) y
  | otherwise = issub xs y
  where
    xs = S8.tail x
    ys = S8.tail y
    need = S8.length ys
    zips = S8.zip x y
    sameLeng = 1 + length (takeWhile (\e -> fst e == snd e) zips)
    flag = ys == S8.take need xs

main = do
  line <- Prelude.getLine
  let num = read line
  arr <- forM [1..num] $ \_ -> do
    a <- S8.getLine
    b <- S8.getLine
    if issub a b then return "YES" else return "NO"
  mapM_ putStrLn arr

ptvirgo: It may be obvious that I'm new to Haskell. Code: I've been trying this for the past few days, taking advantage of online descriptions of the actual algorithm.
Seems like I'm getting the answer right, but about half of the tests time out. Am I missing something?

mukeshsinghbhak1: Can't improve the recursion in the code below... it gives a timeout error.

main = do
  n <- readLn
  fn n []

fn n rs = do
  if n > 0
    then do
      s1 <- getLine
      s2 <- getLine
      fn (n - 1) (rs ++ [(s1, s2)])
    else do
      let x = process rs
      putStrLn x

f' str sstr = f str sstr str sstr

f [] _ _ _ = "YES"
f _ [] _ _ = "NO"
f (x:xs) (y:ys) gs ls =
  if x == y then f xs ys gs ls else f gs (drop 1 ls) gs (drop 1 ls)

process xs = unlines $ map (\(x, y) -> f' y x) xs

nkrim: I've implemented this in Haskell exactly as the Wikipedia article specified, but I'm timing out on the last 3 test cases. I've had a similar experience using Haskell on HackerRank with other problems, where I feel I've implemented it in the most efficient way, while also tacking on seemingly extraneous optimizations that go above and beyond the scope of the problem, and it still ends up timing out on the larger test cases. If anyone has any insight on why this might be happening, I'd greatly appreciate it.

anfelor: I don't know your code, so I can't help you with that, but there are some hints I can give you: Use recursion instead of IORefs/ST + Vector, as Haskell is optimised for recursion but not for IORefs. Make sure that nothing is loaded into memory completely. If you "consume" your strings instantly, the data will be streamed lazily from the console into your function, which means less GC work. Don't worry, I have these problems too sometimes! Example:

import Data.List (isPrefixOf)

kmp :: String -> String -> Bool
kmp [] _ = False
kmp str search =
  if search `isPrefixOf` str then True else kmp (tail str) search

Despite not being the right algorithm, it fails only one test case.
anfelor: In case you don't want to solve it by yourself, see:

flopezlasanta: My submission fails with the last test case only; I use Scala. A first attempt was made without the table; I expected to pass the last test case on the second attempt because I included the table, but it did not work! Any hints? BTW I use tail recursion for making the table and for checking whether the pattern is present.

flopezlasanta: Never mind... I just noticed the table is not built correctly; time to fix it.

konduruc: Why don't we have other programming languages which are so common, e.g. C#, Java, C, C++?

lboshuizen: Functional programming has to do with functional languages: Miranda, Haskell, Clojure, F#. Languages like C++ and C are not functional programming languages but are grouped as imperative languages. But C# and Java are more and more looking over the "fence" and improving on constructs borrowed from functional programming. I encourage you to have a look at functional languages and thus at functional programming. It may seem daunting and even odd at times, but it will certainly make you a better programmer. At the very least it gives you a different perspective on how to reason about a solution. Have fun.

konduruc: Alright, thanks for the quick response. That makes sense... Although I'm not new to coding, I thought it would have been more flexible to include other languages, mainly because I started my programming with C and am now working in C++ and C#. Will take a look at one of these FPLs... By the way, do you specifically recommend any of these FPLs over the others?
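For readers following the thread, the prefix ("failure") table and single-pass search that the posters above are implementing can be sketched in a few lines. This is an illustrative Python version of the standard KMP construction (not any particular poster's submission, and the function names are ours):

```python
def build_shifts(pattern):
    """KMP prefix table: shifts[i] is the length of the longest proper
    prefix of pattern[:i+1] that is also a suffix of it."""
    shifts = [0] * len(pattern)
    j = 0
    for i in range(1, len(pattern)):
        while j > 0 and pattern[i] != pattern[j]:
            j = shifts[j - 1]          # fall back in the pattern, never in the text
        if pattern[i] == pattern[j]:
            j += 1
        shifts[i] = j
    return shifts

def kmp_contains(text, pattern):
    """Return True if pattern occurs in text, scanning text once (O(n + m))."""
    if not pattern:
        return True
    shifts = build_shifts(pattern)
    j = 0
    for ch in text:
        while j > 0 and ch != pattern[j]:
            j = shifts[j - 1]
        if ch == pattern[j]:
            j += 1
        if j == len(pattern):
            return True
    return False
```

A pattern with internal repetition such as "abab" produces the table [0, 0, 1, 2]; patterns without repetition (as gschoen notes about test case #0) produce an all-zero table, which is why they never exercise the fallback logic.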
https://www.hackerrank.com/challenges/kmp-fp/forum
Tinting and Recoloring¶

Color and the Model Loader¶

If you wish, you can manually override the color attribute which has been specified by the model loader:

nodePath.set_color(r, g, b, a);

Again, this is an override. If the model already had vertex colors, these will disappear: a single set_color() call is enough to override them. You can remove a previous set_color() using clear_color().

Tinting the Model¶

Sometimes you don't want to replace the existing color; instead, you want to tint the existing colors. For this, you need set_color_scale():

nodePath.set_color_scale(r, g, b, a);

This color will be modulated (multiplied) with the existing color. You can remove a previous set_color_scale() using clear_color_scale().

Demonstration¶

To see the difference between set_color() and set_color_scale(), load three copies of the same model, leave one unaltered, apply set_color() to the second and set_color_scale() to the third, then start the main loop with base.run(). This produces the following output:

The model on the left is the original, unaltered model. Nik has used vertex colors throughout. The yellow of the belly, the black eyes, the red mouth - these are all vertex colors.

The one in the middle has had set_color() applied with a medium-blue color. As you can see, set_color() completely replaces the vertex colors.

The one on the right has had set_color_scale() applied with the same medium-blue color, but this only tints the model.

A Note about Color Spaces¶

All colors that Panda3D expects are floating-point values between 0.0 and 1.0. Panda3D performs no correction or color space conversion before writing them into the framebuffer. This means that if you are using a linear workflow (i.e. you have set framebuffer-srgb in Config.prc or are using a post-processing filter that converts the rendered image to sRGB), all colors are specified in "linearized sRGB" instead of gamma-encoded sRGB. Applying a color obtained from a color picker is no longer as simple as dividing by 255! An easy way to correct existing colors when switching to a linear workflow is to apply a 2.2 gamma.
This is a good approximation for the sRGB transform function:

model1.set_color(powf(0.6, 2.2), powf(0.5, 2.2), powf(0.3, 2.2));

A better method is to use the sRGB conversion functions that Panda3D provides. For example, to apply the #51C2C6 color, you can do as follows:

#include "convert_srgb.h"

model1.set_color(
  decode_sRGB_float(0x51),
  decode_sRGB_float(0xC2),
  decode_sRGB_float(0xC6));

If you are not using a linear workflow, or don't know what that is, you don't need to worry about this for now.
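The relationship between the exact sRGB transfer function and the 2.2-gamma shortcut can be checked numerically. The following Python sketch only illustrates the math behind the manual's C++ calls (the function names here are made up for the example; Panda3D's own helper is the C++ decode_sRGB_float() shown above):

```python
def decode_srgb(byte_value):
    """Exact sRGB-to-linear conversion for an 8-bit channel (0-255)."""
    c = byte_value / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def decode_gamma22(byte_value):
    """The simpler 2.2-gamma approximation used in the manual's example."""
    return (byte_value / 255.0) ** 2.2

# For a mid-range channel such as 0x51, the two agree to within ~0.002,
# which is why the 2.2 gamma is called "a good approximation".
exact = decode_srgb(0x51)
approx = decode_gamma22(0x51)
```

Both branches of the exact function are needed: the linear segment near black avoids the infinite slope a pure power curve would have at zero.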
https://docs.panda3d.org/1.10/cpp/programming/render-attributes/tinting-and-recoloring
sensor_set_rate() Set a sensor's refresh rate. Synopsis: #include <bps/sensor.h> BPS_API int sensor_set_rate(sensor_type_t type, unsigned int rate) Since: BlackBerry 10.0.0 Arguments: - type The sensor to set the refresh rate for. - rate The rate to set (in microseconds). Library: libbps (For the qcc command, use the -l bps option to link against this library) Description: The sensor_set_rate() function sets the rate at which the specified sensor should provide updates. The device attempts to achieve the specified rate, but this is not guaranteed; the sensor might provide updates more frequently or less frequently than the specified rate. The rate that you specify here corresponds roughly to the number of sensor events that are delivered to the event queue for your application. Returns: BPS_SUCCESS upon success, BPS_FAILURE with errno set otherwise. Last modified: 2014-09-30
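Note that the rate argument is a period in microseconds, not a frequency, so a common slip is to pass a Hz value directly. A small helper (illustrative Python, not part of the BPS C API; the function name is ours) shows the conversion you would do before calling sensor_set_rate():

```python
def hz_to_rate_us(hz):
    """Convert a desired refresh frequency in Hz to the period in
    microseconds expected by sensor_set_rate()'s rate argument."""
    if hz <= 0:
        raise ValueError("frequency must be positive")
    return int(round(1_000_000 / hz))

# e.g. a 50 Hz update stream corresponds to a 20000-microsecond period,
# which is the value you would pass as rate.
```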
http://developer.blackberry.com/native/reference/core/com.qnx.doc.bps.lib_ref/topic/sensor_set_rate.html
The question is answered, right answer was accepted.

Hello everybody, I'm trying to recreate Flammie from Secret of Mana (the 2D version) in Unity 2017. I have all the functionality done, but I'm struggling with the skybox. I have a specific image that I would like to use, but when I follow many of the videos on YouTube on how to do it, it looks very bad. I'm using the image shown on top, which can be found in this Game Maker Studio project. The video is less than a minute long, so it's not a big time waster. Thanks for your help.

Answer by OtreX · May 11, 2018 at 10:58 AM

I found the solution. It was just a problem with the new way of getting the components. The texture type of the image that you want to scroll must be set to Default and the wrap mode must be set to Repeat. Then you must create a quad and attach the image to it. Then attach a C# script to the quad that looks like this:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BackgroundScrolling : MonoBehaviour
{
    public float speedX;
    public float speedY;
    private Renderer myRenderer;

    // Use this for initialization
    void Start ()
    {
        myRenderer = this.GetComponent<Renderer> ();
    }

    // Update is called once per frame
    void Update ()
    {
        Vector2 offset = new Vector2 (Time.time * speedX, Time.time * speedY);
        myRenderer.material.mainTextureOffset = offset;
    }
}

Answer by tormentoarmagedoom · May 09, 2018 at 07:38 AM

Good day. If you are making a 2D game, you don't need a skybox! You just need a background image for the camera. You need your camera set to Projection: Orthographic and Clear Flags: (anything except Skybox). And then put a sprite image in the background... Skyboxes are meant for perspective 3D games. Bye!
The thing is that now in Unity2017, the solution seems to be something in the lines of duplicating the image that you want to scroll and change the transform (like in this video just check the first 5 seconds). Any ideas? Thanks for your time @tormentoarmaged. Converting Cubemaps to Panoramic images? 0 Answers Sun gradient looks different on an iPad? 0 Answers Can i make a skybox from the real life? 1 Answer How do you reflect a directional light Sun in Unity5 water? 2 Answers Pink scene view, skybox and materials (Unity 5.4.1f1) 2 Answers
https://answers.unity.com/questions/1503824/help-problem-with-skyboxes-in-unity-2017.html
Fundamentals of Investments, 3rd Edition (Alexander): Solutions Manual

Fundamentals of Investments, Third Edition

1. Issuers receive the net proceeds of securities sales when their securities are initially sold in the primary market. These securities represent claims on the issuing entities. For publicly-traded securities, these claims can be transferred through sales of the securities. This trading among investors takes place in the secondary markets, where the issuers have no direct involvement. When an investor sells his or her shares of a particular security in the secondary market, the issuer has no means or right to receive any additional funds as a result of the trade.

2. The return on an investment, as shown in Equation (1.1) in the text, is given by:

ROR = (Ending Wealth - Beginning Wealth) / Beginning Wealth

In the case of Colfax stock: ROR = ($36 + $3 - $33)/($33) = .182 = 18.2%

3. Using the formula for the return on an investment shown above, in the case of Ray's portfolio:
a. ($25,000 - $20,000 + $1,000)/($20,000) = .300 = 30.0%
b. ($23,000 - $30,000 + $3,000)/($30,000) = -.133 = -13.3%
c. ($48,000 - $50,000 + $4,000)/($50,000) = .040 = 4.0%

4. Because the U.S. Treasury guarantees the payment of interest and principal on Treasury bills, an investor can be certain of the return that he or she will earn on a Treasury bill investment. The government has the unlimited authority to tax and print money to repay its debts. Therefore its ability to make these promised payments is unquestioned. This certain Treasury bill return, however, does not account for the effects of inflation. Although the short maturity of Treasury bills makes this issue relatively unimportant, if inflation rose sharply and unexpectedly during the time that an investor held Treasury bills, he or she would not be compensated for the resulting lost purchasing power.
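The rate-of-return arithmetic in answers 2 and 3 is easy to verify mechanically. A short Python check (the function name is ours, not the textbook's):

```python
def rate_of_return(beginning_wealth, ending_wealth, income=0.0):
    """Single-period return per Equation (1.1):
    (ending wealth + income received - beginning wealth) / beginning wealth."""
    return (ending_wealth + income - beginning_wealth) / beginning_wealth

# Colfax stock: bought at $33, sold at $36, with a $3 dividend -> 18.2%
colfax = rate_of_return(33, 36, 3)

# Ray's portfolio, parts (a)-(c)
a = rate_of_return(20_000, 25_000, 1_000)   # 30.0%
b = rate_of_return(30_000, 23_000, 3_000)   # -13.3%
c = rate_of_return(50_000, 48_000, 4_000)   # 4.0%
```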
Chapter 1 Page 1

5. The average annual return on small stocks during 1976-1995 was 21.24%. The standard deviation of small stock returns during this period was 19.94%. The average annual return on common stocks during 1976-1995 was 15.35%. The standard deviation of common stock returns during this period was 13.63%.

6. If one assumes that investors dislike risk (a reasonable assumption, and one that is discussed later in the text), then higher-risk securities should exhibit higher returns over long periods of time. If this relationship did not exist, and higher-risk securities offered the same returns as lower-risk securities, then investors could not be induced to hold the riskier securities: they could avoid the additional risk and receive the same return by holding the lower-risk securities. Such a situation could not be an equilibrium. Prices of higher-risk securities would have to adjust to provide investors with higher returns and thereby increase investors' willingness to hold these securities.

7. Examples of non-financial-market risk-return trade-offs include: asking the boss for a raise, underreporting income to the IRS, and self-insuring against damage to your home.

8. The statement in the text citing a positive relationship between risk and return is made in the expectational sense. That is, higher-risk securities are expected to produce returns greater than lower-risk securities. On the other hand, the data contained in Table 1.1 illustrate historical, single-period results. Various factors may have intervened to cause lower-risk securities to outperform the higher-risk securities in any given year.

9. The worst single year for common stock investors was 1931, when they experienced a total return of -43.44%. In the 1970s, the worst year for common stock investors was 1974, when they experienced a total return of -26.36%.
However, accounting for inflation, the real return (nominal return less the inflation rate) on common stocks in 1974 was -38.70%, while the real return on common stocks in 1931 was -34.12%. In fact, inflation rates were negative for several years during the Great Depression, while they were relatively high during the mid-1970s. As a result, in terms of total real returns, the market decline of 1973-74 was as (or more) severe than the market decline of the Great Depression.

10. Foreign security returns do not necessarily move in the same direction as returns on U.S. securities. For that reason, including them in a portfolio will tend to dampen the ups and downs of the portfolio's total returns. This effect is known as diversification and can significantly improve the risk performance of a portfolio. In addition, some investors contend that returns on foreign securities are generally higher than those on comparable U.S. securities. While this contention is controversial, an investor who believes it could increase both the expected risk and return performance of his or her portfolio by including foreign securities.

11. Life insurance companies receive cash from individuals in the form of premiums. In exchange, the insurance companies write policies promising to make payments in the event of the death of the insured individual. The proceeds from the policy sales are primarily invested in stocks, bonds, money market instruments, and real estate. Mutual funds receive cash from investors and, in exchange, issue shares in the respective funds. The proceeds from the funds' sales are invested in a wide variety of financial assets, with the specific assets depending on the funds' particular investment objectives. Pension funds receive employer (and sometimes employee) contributions and issue promises to pay retirement benefits in exchange.
The contributions are primarily invested in stocks, bonds, and money market instruments.

12. The five steps of the investment process are:
1. Investment policy
2. Security analysis
3. Portfolio construction
4. Portfolio revision
5. Portfolio performance evaluation

Setting investment policy is important because it provides the general framework around which the investment process is conducted. It identifies the investor's risk tolerance and investment objectives. Security analysis is at the center of the investment process. It involves specifically identifying financial assets to be purchased for and sold from the investor's portfolio. Portfolio construction moves from identifying the specific assets in the security analysis step to combining those assets into a portfolio consistent with the investor's investment objectives. Portfolio revision is necessary because investing is a dynamic process that responds to changes in investment opportunities and the investor's financial circumstances. Finally, portfolio performance evaluation is a feedback and control procedure intended to help the investor examine whether his or her investment program is meeting targeted objectives.

13. Because returns on financial assets are directly related to risk, establishing an investment objective of "making a lot of money," or equivalently, maximizing returns, might entail inordinate levels of risk. More appropriate would be an investment objective that jointly establishes desired levels of return and risk.

14. It is probably not advisable for an elderly person to hold a portfolio that includes no common stocks. Common stocks have by far outperformed other asset classes historically. Moreover, compared to returns on bonds and money market investments, common stocks are the only asset class to have historically produced a large premium over inflation.
An elderly person must be concerned about maintaining the purchasing power of his or her investments. Given their historical performance, common stocks seem to be well suited to helping maintain that purchasing power. Just what proportion of the portfolio should be held in common stocks is another matter.

15. Many factors could influence an investor's investment policy. Some obvious factors would include the investor's financial objectives (for example, saving for retirement or building a child's college fund), the investor's willingness to bear risk, the investor's current financial circumstances, and the investor's investment time horizon (partly a function of age and career status).

1.
a. A market order instructs the broker to buy or sell immediately at the best available price. The investor is virtually assured that the order will be filled. However, the actual trade price could differ from the price existing when the order was placed.
b. A limit order instructs the broker to buy or sell at a specified price (or better). The investor is assured that, if the trade takes place, it will be done at a price at least as good as his or her limit price. However, the investor cannot be certain when, or even if, the order will eventually be filled.
c. A stop order instructs the broker to buy or sell at the best available price once a stop price is reached. The investor can be fairly certain that his or her order will be filled if the stop price is reached. However, the actual trade price could differ from the stop price.

2. Lollypop's balance sheet at the time of the margin purchase would appear as follows:

Assets:
  Securities: $75/shr × 200 shrs = $15,000
Liabilities & Net Worth:
  Margin loan: (1 - .55) × $75/shr × 200 shrs = $6,750
  Net worth: $15,000 - $6,750 = $8,250

3. An investor's actual collateral is simply the market value of the investor's assets in the margin account. Thus for Buck:
a.
200 shrs × $40/shr = $8,000
b. 200 shrs × $60/shr = $12,000
c. 200 shrs × $35/shr = $7,000

4. The minimum collateral required to avoid a margin call in the case of a margin purchase is given by:

Minimum collateral = Loan/(1 - maintenance margin requirement)

In Snooker's case:

Min collateral = [(1,000 shrs × $60/shr) × (1 - .50)]/(1 - .30) = $42,857.14

With the decline in price to $50/share, Snooker's actual collateral is now:

Collateral = 1,000 shrs × $50/shr = $50,000

Snooker's actual collateral is therefore above the minimum level necessary to avoid a margin call. No margin call will occur.

5. The maximum amount that Lizzie can purchase is found by solving:

Initial Equity = Initial Margin Requirement × Purchase Amount

or

Max Purchase Amount = Initial Equity/Initial Margin Requirement = $15,000/.50 = $30,000

6. The maintenance margin requirement ensures that an investor maintains sufficient equity in his or her account to protect the broker against sudden shifts in the value of the investor's securities purchased on margin. Margin in the investor's account represents the excess of asset values over the value of the investor's loan. Therefore, the greater the maintenance margin requirement, the greater is the broker's "cushion" against declines in the value of the investor's portfolio.

7. Penny's initial investment in South Beloit is $17,500 ($35 × 500), of which Penny put down $7,875 (.45 × $17,500). Over the course of the year Penny must pay interest of $1,155 (.12 × $9,625). At year-end Penny's investment is worth $20,000 ($40 × 500). Thus Penny's return on investment for the year is:

ROR = [($20,000 - $17,500) - $1,155]/$7,875 = .171 = 17.1%

8. Note that the return on an investor's margin purchase can be expressed on a total dollar basis or on a per-share basis. In the latter case:

ROR = [(P_t+1 - P_t + D_t) - r × (1 - im) × P_t]/(im × P_t)

where P_t is the purchase price, P_t+1 the later price, D_t dividends per share, r the loan rate, and im the initial margin requirement. In Ed Delahanty's case:
a.
ROR = {($40 - $30 + $1) - [.13 × (1 - .55) × $30]}/(.55 × $30) = ($11 - $1.755)/$16.50 = .560 = 56.0%
b. ROR = {($20 - $30 + $1) - [.13 × (1 - .55) × $30]}/(.55 × $30) = ($-9 - $1.755)/$16.50 = -.652 = -65.2%
c. ROR = ($40 - $30 + $1)/$30 = $11/$30 = .367 = 36.7%
   ROR = ($20 - $30 + $1)/$30 = $-9/$30 = -.300 = -30.0%

9. Beauty's balance sheet at the time of the short sale would appear as follows:

Assets:
  Cash proceeds of sale: $25/shr × 500 shrs = $12,500
  Initial margin: .50 × $12,500 = $6,250
  Total assets: $12,500 + $6,250 = $18,750
Liabilities & Net Worth:
  Market value of short-sold stock: $25/shr × 500 shrs = $12,500
  Net worth: $18,750 - $12,500 = $6,250

10. The equity (or net worth) in the account of an investor who engages in short selling is given by:

Equity = (short sale proceeds + initial margin) - loan

Thus for Candy:
a. [(200 shrs × $50/shr) × (1 + .45)] - (200 shrs × $58/shr) = $14,500 - $11,600 = $2,900
b. [(200 shrs × $50/shr) × (1 + .45)] - (200 shrs × $42/shr) = $14,500 - $8,400 = $6,100

11. The minimum collateral required to avoid a margin call in the case of a short sale is given by:

Minimum collateral = Loan × (1 + maintenance margin requirement)

In Dinty's case, the minimum collateral is:

Min collateral = (500 shrs × $50/shr) × (1 + .35) = $33,750

Dinty's actual collateral equals the short sale proceeds plus the initial margin, or:

(500 shrs × $45/shr) × (1 + .55) = $34,875

Because the actual collateral exceeds the minimum collateral, Dinty will not receive a margin call at this time.

12. The first statement is true. The upside potential of any common stock is unbounded. Therefore, because a short seller's losses increase as the price of the short-sold stock rises, the short seller's potential losses are infinite. The second statement is false. It is true that if the initial margin requirement on short sales was 100%, then the maximum return that a short seller could earn would be 100%.
(This would occur if the shorted stock's price went to zero.) However, given an initial margin requirement of less than 100%, the leveraged position of a short sale can potentially produce returns in excess of 100%.

13. Calculated on a total dollar basis, Deerfoot's initial investment in the short sale of DeForest stock is $35,000 (.50 × $70 × 1,000). At year-end, Deerfoot had to reimburse the owner of the DeForest stock with $2,000 ($2 × 1,000) for dividends paid on the stock. Further, at year-end, if Deerfoot purchased the stock and repaid the owner, then the excess of the proceeds over the amount which Deerfoot originally received when the stock was sold short would equal -$5,000 [($70 - $75) × 1,000]. Thus Deerfoot's return on investment during the year was:

ROR = [($70,000 - $75,000) - $2,000]/$35,000 = -.200 = -20.0%

14. Expressing the return on a short-sold security on a per-share basis (including interest on the initial margin deposit) gives:

ROR = [(P_t - P_t+1 - D_t) + (im × P_t × r)]/(im × P_t)

a. If the Madison stock, which was originally sold short at $50 per share, rises to $58, then:

ROR = [($50 - $58 - $0) + (.45 × $50 × .08)]/(.45 × $50) = -.276 = -27.6%

b. If the Madison stock, which was originally sold short at $50 per share, falls to $42, then:

ROR = [($50 - $42 - $0) + (.45 × $50 × .08)]/(.45 × $50) = .436 = 43.6%

15. When an investor receives a margin call, the collateral in his or her margin account has fallen below the minimum amount specified by the maintenance margin requirement. The investor's broker will request that the investor deposit additional cash and/or sell securities to bring the collateral up to or above the required level. An investor's margin account is restricted if his or her collateral falls below the amount specified by the initial margin requirement.
In this case the investor will not be requested to increase his or her margin, but he or she may not withdraw funds from the account such that the collateral would be further reduced.
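The per-share short-sale return formula in question 14 can likewise be checked with a small Python sketch (an illustration, not part of the manual):

```python
def short_sale_ror(sale_price, later_price, dividend, initial_margin, interest_rate):
    """ROR = [(P_t - P_t+1 - D_t) + im*P_t*r] / (im*P_t): the price decline,
    less dividends reimbursed to the lender, plus interest earned on the
    initial margin deposit, divided by that deposit."""
    deposit = initial_margin * sale_price
    gain = (sale_price - later_price - dividend) + deposit * interest_rate
    return gain / deposit

# Madison stock shorted at $50, 45% initial margin, 8% interest on the deposit
print(round(short_sale_ror(50, 58, 0, .45, .08), 3))   # -0.276
print(round(short_sale_ror(50, 42, 0, .45, .08), 3))   # 0.436
```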
As a business grows, so does its volume of data, and an RDBMS becomes an inefficient and less cost-effective database technology. That is the reason RDBMS is now considered a declining technology: it does not provide a scalable solution for high volumes of data flowing at high velocity.

What does it mean when we say an RDBMS is not scalable? Horizontal scaling (distributing data across multiple nodes/servers) is not possible in a relational database; only vertical scaling (increasing processing speed and storage) is feasible, and there is an upper limit to it.

To deal with high volumes of data, a new database technology evolved, coined "NoSQL". NoSQL is short for "not only SQL": unlike relational databases, which deal with structured data only, NoSQL databases handle unstructured and semi-structured data and centre around the concept of distributed databases. Data is distributed across various processing nodes/servers, trading consistency (BASE instead of ACID) for speed and agility. Because of its distributed architecture, a NoSQL database is horizontally scalable: as data continues to explode, just add more (commodity) hardware to keep the system up and running.

The performance of relational vs. NoSQL databases can be depicted as follows: in relational databases, performance decreases as the volume of data increases, while in NoSQL databases it does not vary with the volume of data. For small volumes of data, a relational database outperforms a NoSQL database.

High volumes of data (in TB or PB) flowing with high velocity are what the fancy term "Big data" refers to. Big data basically consists of 4 V's: Volume, Velocity, Veracity and Variety.

- Volume - The amounts of data to be dealt with are in TB or PB.
- Velocity - The frequency at which data is generated, captured and shared.
- Variety - A large volume of data is present in data storage in the form of documents, emails, video and voice.
These are unstructured data, and emerging data types include geo-spatial and location data, log data, machine data, metrics, mobile, RFIDs, search, streaming data, social, text and so on.
- Veracity - As mentioned earlier, we extend the benefits of a distributed database at the cost of consistency, so there is a chance that the data retrieved is not the intended data. Reliability is always a concern in such a system.

Hadoop is a software ecosystem that allows massively parallel computing using HDFS and MapReduce. Over time, various software systems have been added to the Hadoop ecosystem, like Pig, Hive, HBase, etc. Please note that Hadoop has an abstract notion of filesystems, of which HDFS is just one implementation; others on this list are WebHDFS, Azure, Local, etc.

Hadoop Distributed File System (HDFS): a highly fault-tolerant, high-throughput distributed file system designed to run on commodity hardware (affordable and easy to obtain). The Hadoop ecosystem takes the view that hardware failure is the norm rather than the exception, so data is distributed across multiple servers (a Hadoop cluster) and replicated to handle node-failure conditions. HDFS stores data in fixed-size blocks, and the commodity hardware (part of the Hadoop cluster) where the data is stored is called a data node.

- HDFS is designed to handle applications doing batch processing rather than interactive operation.
- HDFS follows the write-once, read-many approach for its files and applications. It assumes that a file in HDFS, once written, will not be modified.
- The focus of HDFS is on high throughput rather than low latency. Latency: the amount of time needed to complete a task or produce output. Throughput: the number of tasks completed per unit of time.
- HDFS stores filesystem metadata and application data separately and follows a master-slave architecture. Filesystem metadata is stored in the NameNode and application data in DataNodes. The name node acts as master and the data nodes are slaves.
NameNode: It manages the file system namespace (the NameNode inserts the file name into the file system tree and allocates a data block for it), determines the mapping of blocks to data nodes, and maintains metadata about the blocks stored on the data nodes. It executes file system namespace operations like opening, closing, and renaming files and directories. For each Hadoop cluster there is only one name node, which acts as master.

DataNode: It provides the actual storage of data and is responsible for serving read and write requests from the file system's clients. DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. Refer to this post for more detail about the namenode and datanode.

- In a multi-node cluster the name node and data nodes are on different machines; in a single-node cluster both exist together on the same machine. In a multi-node cluster there is only one name node and multiple data nodes. The name node is sometimes called the single point of failure in a Hadoop cluster.
- In order to deal with this single point of failure, Hadoop maintains a SecondaryNameNode on a different machine (in a multi-node cluster) which stores images (metadata of the data blocks) at certain checkpoints and is used as a backup to restore the NameNode.

Read and write operations in Hadoop give a broader picture of the namenode and datanode. In the next post we will revisit the NameNode and DataNode and discuss them in detail.

Next: NameNode, DataNode and Secondary NameNode
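The fixed-size-block storage and replication just described can be illustrated with a toy Python sketch (the tiny block size, node names and round-robin placement below are made up for illustration; real HDFS blocks are tens to hundreds of megabytes and placement is rack-aware):

```python
import itertools

BLOCK_SIZE = 8          # toy value; real HDFS blocks are far larger
REPLICATION = 3
DATA_NODES = ["dn1", "dn2", "dn3", "dn4"]

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Chop file content into fixed-size blocks (the last may be shorter)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_blocks(blocks, nodes, replication=REPLICATION):
    """Round-robin placement: each block is copied to `replication` nodes."""
    ring = itertools.cycle(nodes)
    placement = {}
    for block_id, _ in enumerate(blocks):
        placement[block_id] = [next(ring) for _ in range(replication)]
    return placement

data = b"hello hadoop distributed file system"
blocks = split_into_blocks(data)
placement = place_blocks(blocks, DATA_NODES)
# With three replicas, every block survives the loss of any two data nodes.
```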
Retrieving the IP of the visitor of a website is easy using .NET (ASP.NET to be more exact), but how do you retrieve the IP of the client running a Windows application? There are several reasons why you would want to do that, such as sending the IP to a server on the internet for identifying the client. The question is how do you do this in .NET?

Using .NET 1.1 and 2.0 to retrieve the IP address

Well, in .NET 1.1 and 2.0 there are methods in the System.Net namespace that do that. Speaking of System.Net, don't forget to include the using statement, to avoid the long lines I have below:

Here's how in .NET 1.1:

In .NET 2.0 GetHostByName is obsolete, and was replaced by GetHostEntry:

As you can see here, we only retrieved position 0 of AddressList, but if you expect more than one IP on the user's machine, you'd better loop through all of them. Here's how in .NET 1.1:

and in .NET 2.0:

Everything's well and good. Or is it? I don't know about your workstation, but my connection to the Internet goes through a network with a router, and using the above methods I only retrieved the local IPs on my network (192.168.0.1, 192.168.0.2, 192.168.0.3 and so on). So the above code works all right, but not if the computer is behind a router. So in this case, let's have a look at the second method.

Using WMI to retrieve the IP address

Let's see how we can retrieve the IP addresses of your network adapters using WMI. First add a reference to System.Management, and use the appropriate using statement:

Below is the code that retrieves the IP addresses of the network adapters in your computer. It works in .NET 1.1 as well as in .NET 2.0:

Depending on your network configuration this may work where the first method (using System.Net) did not; however, it will normally perform more slowly, so if both of them work fine, stick with the first one. What's that I hear? Neither method worked for retrieving the Internet IP address? Been there, done that.
I couldn't find a solution to retrieving the (external) Internet IP address when the user was behind a router. And the truth is, there is no easy solution to this. But the important thing is that there is a solution:

Asking a server for your IP address

So how come ASP.NET manages to display the IP address of the client, and we can't get our own IP address? Well, it's easy for ASP.NET to do that because when the client connects to the ASP.NET server it doesn't matter if he is behind a router or not. In this case, the one who's making the call to the ASP.NET server is really the router, and its IP address is given to the server. I'm not sure if you understand my blabber, but our solution is to ask a server on the internet for our IP address.

A simple way to do that is to query a page on the Internet that contains only the IP address, without any HTML tags or anything. Let's see what this looks like. First, we need to set up that page. It can be an ASP, ASP.NET, PHP or any other type of page. In ASP.NET you can use this:

While in PHP this will work great:

This will create a web page containing only the IP address of the client. You can access the page via Internet Explorer or any other browser and view its source. It will contain 207.46.19.30 or whatever your IP address is. Now we can parse this page from our Windows application and get the IP address:

This has to work! If this doesn't work, you're practically invisible on the Internet and I'd like to know how you did it.

You probably noticed in the example we're reading the IP from which is a one-line PHP script (the one I showed you above). It currently works, but I strongly suggest you don't base your application on this page on Geekpedia.com, since in the next few months we're going to switch to an ASP.NET 2.0 server and this page won't work anymore and neither will your application. Create your own ip.php or ip.aspx page on the web so you can be certain the service is always up.
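The "ask a server" technique is language-agnostic. As an illustration only (in Python rather than .NET, and with a placeholder URL standing in for your own ip.php/ip.aspx page), the client side amounts to fetching the echo page and validating what comes back:

```python
import ipaddress
import urllib.request

ECHO_URL = "https://example.com/ip.php"  # placeholder: host your own echo page

def parse_echo_page(body: bytes) -> str:
    """The echo page returns nothing but the caller's address, so parsing
    is just trimming whitespace and validating the result."""
    text = body.decode("ascii").strip()
    ipaddress.ip_address(text)   # raises ValueError if it is not an IP
    return text

def my_public_ip(url: str = ECHO_URL) -> str:
    # Fetch the echo page; the server saw the router's (public) address,
    # which is exactly what the local APIs above could not tell us.
    with urllib.request.urlopen(url, timeout=10) as resp:
        return parse_echo_page(resp.read())
```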
Details
- Type: New Feature
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: 2.3.0-beta-2
- Component/s: XML Processing
- Labels: None
- Environment: Windows XP on JDK 1.4.2
- Number of attachments :

Description
I am trying to use XmlSlurper to process an XML document with nested elements that contain text. I need to get the text from just one level at a time, but the text() method returns the text of all children and I can't see anything that would bring back just the local text. Here is a sample from the console:

groovy> def model = new XmlSlurper().parseText('<aModel><aParent name="bubba">text<aChild>child text</aChild></aParent></aModel>')
groovy> model.aParent[0].text()

Result: "textchild text"

Activity
This is not a bug - text() does what it is designed to do, which is to give you all the text in the element. However, there is a need for an additional mechanism to let you handle mixed content as in the example provided. I'm working on this.

I went to write a parser today and was amazed to find I couldn't do this. Has there been any more thought or progress on this? I would also really like some way to get the actual contents of a NodeChild, something like

def model = new XmlSlurper().parseText('<aModel><aParent name="bubba">text<aChild>child text</aChild></aParent></aModel>')
assertEquals "text<aChild>child text</aChild>", model.aParent[0].contents()

I too did not expect this behavior. This is my workaround.
public static String groovyTwentyOneFifteen(def parent) {
    String all = parent.text()
    StringBuilder subtract = new StringBuilder()
    parent.children().each { subtract.append(it.text()) }
    return all.substring(0, all.size() - subtract.toString().size())
}

Here is my workaround as a closure (based on the XPath text() function):

def localText = { parent ->
    def children = parent.getAt(0).children
    def result = [] as List
    for (child in children) {
        if (!(child instanceof groovy.util.slurpersupport.Node)) {
            result.add(child)
        }
    }
    return result
}

Applied to the following XML:

<root>aaa<sub-level>bbbb</sub-level>ccc</root>

it gives the following String list:

[aaa, ccc]

John, assigning to you, if for nothing else then for comment. Given your proposed new method(s) on XmlSlurper it kind of makes sense to try to tackle this at the same time. Just assign back (or leave unassigned) if you have no time (though any suggestions would be welcome) or provide a comment if you think the feature request is not warranted and can be done by other means.
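For comparison only (Python rather than Groovy): the "local text" behavior requested here is what you get from Python's ElementTree by collecting an element's own text nodes, its leading .text plus each child's .tail, rather than all descendant text:

```python
import xml.etree.ElementTree as ET

def local_text(elem):
    """Return only the text nodes that are direct children of `elem`,
    skipping text nested inside child elements (like XPath's text())."""
    pieces = [elem.text or ""]
    for child in elem:
        pieces.append(child.tail or "")
    return [p for p in pieces if p]

root = ET.fromstring('<root>aaa<sub-level>bbbb</sub-level>ccc</root>')
print(local_text(root))  # ['aaa', 'ccc']

model = ET.fromstring('<aModel><aParent name="bubba">text'
                      '<aChild>child text</aChild></aParent></aModel>')
print(local_text(model.find('aParent')))  # ['text']
```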
.. badges: Latest PyPI version, Version status, Python 3.5 and 3.6, Latest Travis CI build status, Codacy grade, Codacy coverage

A CLI framework based on asyncio.

.. note:: This is still in active development. Things will change. For now, the basic framework is operational. If you are interested in helping out, or would like to see any particular features added, let me know.

The simplest usage is to just pass in an async function.

.. code:: python

    import asynccli


    async def mycli():
        print("Hello, world.")


    if __name__ == '__main__':
        app = asynccli.App(mycli)
        app.run()

It can also be instantiated as a class, as long as it has a ``call`` method.

.. code:: python

    import asynccli


    class DivisionCalculator(asynccli.CLI):
        numerator = asynccli.Integer(help_text='This is the numerator.')
        denominator = asynccli.Integer()

        async def call(self):
            print(self.numerator / self.denominator)


    if __name__ == '__main__':
        app = asynccli.App(DivisionCalculator)
        app.run()

In the ``DivisionCalculator`` example above, you would call your script like this:

.. code::

    $ /path/to/script.py 2 3
    0.6666666666666666

What if you want to have a tiered CLI with a hierarchy of commands? First, create your command by subclassing ``CLI`` as above. Then, wrap the whole thing inside of the ``TieredCLI`` class, and pass that to the ``App``.

.. code:: python

    class Calculator(asynccli.TieredCLI):
        d = DivisionCalculator


    if __name__ == '__main__':
        app = asynccli.App(Calculator)
        app.run()

Now, to invoke the script, you have an extra argument to call:

.. code::

    $ /path/to/script.py d 2 3
    0.6666666666666666

Install with:

.. code::

    pip install asynccli

Currently it requires Python 3.5 to make use of async/await. It uses argparse under the hood, and therefore has no dependencies outside of the standard library.

Planned:

- Argument types
- argparse features
- uvloop

You can invoke the test scripts a few different ways:

.. code::

    py.test
    python setup.py test
    python -m py.test

And, in order to generate the test coverage:

.. code::

    coverage run -m py.test

MIT

asynccli was written by `Adam Hopkins <[email protected]>`_.
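The declarative ``Integer`` fields above are argument descriptions that get turned into a parser. As a purely illustrative toy (this is NOT asynccli's actual implementation; the ``ToyCLI``/``parse`` mechanics below are invented), the same declarative pattern can be sketched with plain argparse:

.. code:: python

    # Toy sketch of a declarative CLI field -- NOT asynccli's real internals.
    import argparse
    import asyncio


    class Integer:
        """Marks a class attribute as a positional integer argument."""
        def __init__(self, help_text=None):
            self.help_text = help_text


    class ToyCLI:
        @classmethod
        def parse(cls, argv):
            # Build a parser from the declared Integer fields, in
            # declaration order, then bind parsed values onto an instance.
            parser = argparse.ArgumentParser()
            fields = [(n, f) for n, f in vars(cls).items()
                      if isinstance(f, Integer)]
            for name, field in fields:
                parser.add_argument(name, type=int, help=field.help_text)
            ns = parser.parse_args(argv)
            inst = cls()
            for name, _ in fields:
                setattr(inst, name, getattr(ns, name))
            return inst


    class Div(ToyCLI):
        numerator = Integer(help_text='This is the numerator.')
        denominator = Integer()

        async def call(self):
            return self.numerator / self.denominator


    cli = Div.parse(['2', '3'])
    result = asyncio.run(cli.call())
    print(result)  # 0.6666666666666666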
thread again begin again Thread again begin again 18 year old hormones has paint been back again hello? No paint has not been here stop posting so i can post it had some pretty interesting stuff, for sure. people like to pretend actually living in the decade was better than it was though. not that I know of. best op of the day (Boo) a friend of yours acting weird around you? I think you're biased, bardy. ty no ive just been incredibly aroused 90% of the time for the past 2 days idk what to do about it Chemical castration. oh geez bebop i thought that was you For test. Furries only one monitor Yeah I'm based as fuck but you love it paint? lol no thank god that dude blows sounds like lots of energy to run around and talk with people!~ and maybe get to know some freinds and do things :) you know it. I plan to have two once I get a new desk. you are wrong but okay did he try to rape a child? no need to be so dramatic. you'd better. Not sure why you would want to talk to that person anyway you think laughing at the handicapped is funny? well I tried. not yet afaik because he's an actual person with character traits and stuff oh i didn't notice this Sci what? i'll blame you in the suicide note I will. Well, I'm laughing at you and I'm enjoying it. beepop is like you tier when it comes to being creepy I'd feel insulted if you didn't. What traits? i guess i mixed you up with someone else neat oh ouch i come to threads and see my name and thats immediately what i get k bye he was calling you creepy? i mean you are no wait more ID's the best trait of his is not being a memeslave r i p bebop bye you will burn in hell this better not be an attempt at reverse psychology pic related its you I wouldn't stoop to using such dirty tricks. Maybe. I doubt it though. @based moogs whatever happened to that bro danami that you would cyber on the reg im not sure im confident with my assessment you'd better get sure. 
think you might have made him ky is this some fucking bipolar quip scan-chan Don't a lot of people have that trait? Oh, fine. You should come chill later. I'm just playing 64. hey girl penis why is thread all furfags and horsefuckers now? what happened to animus here? no, the vast majority do not. or what i see him playing things from time to time so probably not i have negative interest in seeing that explored half the map still no BoS power armor op when ;; I didn't plan that far ahead. erin is p based though. like she literally head butted some random drunk neighbor Sci, I thought we were chill. Fair enough. which one i guess i win then thanks for the confirm so he just got so buttmad that he stopped posting much better ! only a few furries and only a few animus right now. it's slow at this hour. this will not be forgotten. I will have my revenge. New Vegas McFuckingRibYourself tp yes i have a vague sense of the traffic patterns on this board. i'm talking about it in general. heh thats a good one and I've spent way more time with the anime threads than I did with the horsefuckers so I feel I'm at least a dual citizen at this point. yumm is that the most likely scenario in your eyes it would have been smarter to keep your intentions a secret. now i will prepare. oh the good one. darn more people means more posting means faster threads means more fun for everyone. don't look a gift furry poster in the mouth. maybe that's exactly what I want you to do. idk if 'sci' and 'chill can go together in the same post box I'm just a smalltown girl pic related tbh I haven't seen BoS yet and haven't visited the strip yet. the motion carries. do you guys think ikt ever gets tired of having such a gigantic dick? also @ ikt do you ever think you will have that chink eye surgery i only ask cause your eyes are chinky af and you seem to be ashamed of what you are im a mess, lads ~ Have you ever played it before? d-does senpai need a lesson too? 
do you renounce your unholy love for animated ponies? not more fun if the people suck massive dick tho don't look a gift furry poster in the mouth. ??? so? i'm a vegetarian not a fake one like you like legit locals only, bwana mo'ku'la o'eh'eana btfo I played FO3 and FO4 Tp? I think they are like in a cave somewhere this is getting too complex I haven't watched the show in ages but I never stopped liking it. I'm not ashamed of what I enjoy. oh. well I guess I'll just leave forever then. i'm not vegetarian any more, i eat chicken 3 was my first but I literally only got to megaton before i gave the game back to my friend. New Vegas was my real first, played it through at least a dozen times just as planned. it's all right i'm ashamed for you you missed out. vegas is better mechanically but 3 had the best atmosphere by far. JUST chicken? thank goodness Yeah lets ditch this popsicle stand Just you and me Two drifters out to see the world Theres such alot of world to see or is it!? 10minute queue time finally in a game everybody has a mic and starts talking about the wait time someone gets PISSED and starts to dox 2 of us he already has my full name and address ???? don't leave I mean I did end up playing through 3 after a while Fantastic choice in meats I'd run away with you in a heartbeat. I guess you'll have to find out. okay I won't. oh, well good. 12 year old hackers these days this last fortnight all i've eaten is southern fried chicken steaks and instant mashed potatoes i have restricted intake, so i tend to just eat the same thing over and over until it makes me sick, then find something else i can eat and repeat Oooh nice. Played 3 before 4 and now I just started NV, but so far it's been better than both so? yes? 
thanks for the update i agree NV was a masterpiece i hope you have fun that surprised me I didnt think it was you i cant take this pressure i crack so easily danks a lot :D i bricked my other laptop so i had to regress to old memes seems like everything is working much quicker than I anticipated. <3 I only own NV on disc but 3 andd 4 are both on steam dude sounds like he is 30 should i be scared? well i'd hope so. and yeah i agree, NV is great probably because it wasn't made by Bethesda :^) excuse me, but that didn't seem to be directed at you, mind not inserting yourself into conversations you are not welcome in, you asshat? that fucking feel Back in the day I got erio's stocking folder but its fucking gone now too was gonna have some fun doubtful thats what i want you to think now im in control are you wasted rn or totally sober? also pls keep posting panty indefinitely yeah my b bro Maybe I can insert myself into your conversation. erin can i post your dog on my fb meme page? I liked 4 a lot. it may not have been quite as good as 3 or vegas but it more than made up for it in sheer amount of content. I'm a big sucker for just exploring a world like that and it was so packed with places and a lot of them were quite large. I spent over a hundred hours easily. and that was just on one playthrough. sure you can. you are quite welcome. i guess so, if you want this is like high level chess. I'm not sure I can keep up with all of this. Thank ya kindly miss I have nothing to really say. I expected you to yell at me as well. I did not think we were on good terms. whoa ciaran is a boy's name for fuck's sake This explains everything. no forced story line like "muh daddy" or "muh son" great dialog choices weapon mods no op starter godmode suit water supply dwindling. might have to actually leave the house this weekend. you are like little babby why wouldn't we be? I just remember us going back and forth a lot. yes. lower your guard, because I am no threat. 
u dont have the thing where water gets pumped straight to ur house? nice try erin telling niggas to mind their own business literally made a person a mod to ban someone then literally quit then literally came back when she didn't get enough meme replies just because we argue on the internet doesn't mean i don't like you. you are actually one of my preferred posters tap water is disgusting. it was worth a shot. Tp ily ~!*Blushie*!~ still no erin nudes nah that's just your imagination same it is undrinkable by anyone with human taste buds. My best friend feels the same way Always has pallets of bottled water what got you to feeling like that? How big is that folder? Tap water is tainted with flouride by the lizard people i'm tellin u dude thats just an idea u got stuck in ur head for whatever reason tap is fine drinking tap water makes you a vegetable see: sci where I live the tap water is genuinely horrible. I have been places where that wasn't the case. california wasn't too bad, at least in south lake tahoe where I lived. I have one of those water coolers and I fill a five gallon jug with filtered water whenever I need more. it's like one dollar. I do sometimes pick up a pallet of water bottles just in case though, but I finished the last one off last week. I probably still have a gallon of water left in the jug right now, but it probably won't last through sunday. (i actually drink exclusively bottled water lmao) depends on where you live. here it is not. in scotland, tap water is exactly the same as bottled water, cause of the whole mountain spring > loch deal. in sweden (i think) a town (possibly called Voss) actually has the Voss source as their main water supply, so you can bathe in fucking Voss FUZE KILLS THE HOSTAGE AGAIN FREE WEEKEND IS CANCER "hey glen. so how's edmonton this time of year?" 
AAAAHHHHHHHHHHHHHHHH o-okay I have 78hours in it and I never completed it new computer so my save is lost all that dlc 4 felt more like a casual game so it makes sense to give you the suit early on try to make it not as casual by forcing people to know what trash loot is actually good for a outpost >making outposts here where? this nigga texted me "Hi" just text me back hey X just look at your phone we can be buddies I said Hi on a text message, you should reply. be nice STOP This is how Chii felt when everyone stole her phone number Jesus Christ I feel bad for her All tap water on this side of the world tends to have a good chance of being filthy with some sort of bacteria or some dumb thing brought on by the mass of uncaring population that wreck it then wonder why we get warned not to drink it by health departments Chii a stinky slut I did what I always do in games like that. I ignored the main story for the most part and explored every nook and cranny I could find, then when that was done I rushed through the story and moved on to my next game. I'll go back and play again sometime I'm sure. I would like to get the dlc but I'm a little surprised at how expensive it is for all of it. don't really feel like paying almost the full price of the game again for what will amount to probably 20-30 more hours of content. Tap filters usable at all? My dad has one of those that seems to help but around here we have acceptable tap water I think' where I live. enough for two folders they are, but the water still doesn't taste anywhere near as good. let's be real beepop no one just texts you it's usually underage girls telling you to cool it pls let me invade ur privacy that fivehead with that mountain of a pimple though nope. that isnt stocking also leave bebnop alone he's cool Said TP you're a jerk so based Lemme have it Notice me senpai yes, but you were already aware of this fact. that was two days ago. tough luck. i think my dealer got arrested. 
he said on the phone he'd be 20 mins with my weed, then never showed up, then four hours later he wasn't answering his phone at all im just joshin' ur all right pal ima take a fresh air tube now so later wait it looks like scoots is going to whitenight the dude who literally got cucked from the dude who literally pretended to be a girl and then literally asked for money to pay for a ring to marry a trap thank goodness. one less scumbag drug dealer on the streets, making them unsafe for my kids. Speaking of that, your award came in a little late but it's safe do you not have medical there? and he has an old brick phone so you fucking bet the cops can read my texts. newp k see you later. there must be some mistake. that should say "exceptional" not "good" shiet Looks like tp is posting from mcdonalds again, cuz he clutching them straws Hopefully my phone will let me download it Leave it unrared i think i get my valium script on monday, but it's not the fucking same. this burn has a lot of potential but it needs work. I got a gooood deal on the season pass $11 lol lol this post makes me want to drink and hear the story greentext this shit Guess I'll have to see it to beleive it i wasn't planning on doing it today This is true cakee mang Im in too good a mood today damn, how'd you manage that? you won't be able to see much with your eyes closed and your lower lip between your teeth. i havent tried valium i dont think That's k take your time baby be scoots get cucked not really that exciting bern yes boo Erin what is this shit I keep hearing about Apple not unlocking a terrorists phone but willingly giving out emails without a warrant for a man who is simply sharing files online? I know a guy. well, a lot of them. with who tho. i wanna know more because it sounds like gold if its true it's diazepam, if you've tried any other non-triazolo benzo, same thing really. yay living on easy street with all them connections. 
Tp do a tl;dr greentext story you fuck I wanna laugh Kohai woo probably bullshit, apple refuse all that crap. there is a thing atm where some dead guys dad is angry, cause his son got murdered, and apple won't unlock his laptop, but i say too fucking right, after i die, people going through my laptop is the LAST thing i'd want, i'd be turning in my grave This is the most relatable thing I have ever seen posted here 10 4 scoots texts erin on the reg and wants to marry her but like some weirdo cutter in america beat him to the punch must feel hella bad wanan play overwatch apple really do have your back. EVEN if you're a terrorist. they're cool guys. won't break your trust for nobody. i wish i were as knowledgeable as you D- No effort sure. see you in a minute. sorry love i haven't slept so glad im on Apple now Can finally plan my bombing campaign at least half of my internet friends who aren't from here are drug dealers it sucks tbh I was seeing some thing where Michigan police are trying to get their university there to 3d print a dead mans fingerprints to unlock his phone people think using someone elses prints to unlock things in new Someone went on to say how you can change your password but you can't change your fingerprints everyone shit bricks after reading it why are people just now learning things I knew about when I was like 6 letting that stop him Yan is a pretty cool dude though. at least half of your friends from here probably are too tbh Fine, but next time I want a more detailed effort, or else you'll be in detention with molly ringwald Yan is a pretty cool dude though is your whole thing to make people think you are mentally disabled? enable IDs Yan is pretty cool dude though yeah, this is true, courts decided they can't make you give up your passcode, cause that's like, the contents of your brain, but they CAN take your fingerprints and use those. if you're doing hella illegal shit, turn off touch ID. 
i thought what i did still nailed it you are a fucking cuck also turn off touch ID if you think you're gonna die, cause people might get your lewds after ur dead is my waifu loco still here I genuinely considered destroying dissasembling and burying my macbook after I got a new computer for a while shut up erin you abandoned this board months ago but still get email notifications and rage when people say your name guess fucking what someone else in this community was actually born as erin Hola, ese Mi es Loco chinga tu madre puto Allegedly be terrorist use iphone laugh at government be online criminal who isn't really hurting anyone fucked for the rest of his life after BUYING something really? the only person I know who actually does anything close to that is Erin a couple years ago I might have said you and Boo too but eh, lost your guys' charm for it what? well yeah i know that. people always get surprised when cops give you something for you to use then end up getting your prints from it like are you dumb?? I would taunt the police and tell them my only hint for my password it comes before 79 If they look in my history far enough they'll find it but only if they really try. FBI already has my dick, what else do they need? i also realized with macOS Sierra, they will be able to unlock you mac with your fingerprint if they get your iPhone, your apple watch, and your mac, they can put on the watch, then use your print to unlock the phone, which automatically unlocks the watch when it's on a wrist and near a phone, and then the watch will unlock the laptop when it's close by disabling any one feature will stop that working though, so you can use either phone unlocks watch, or watch unlocks mac, but not both if you wanna be safe I would be genuinely surprised if there werent others def soto eh pretty much on point again. 
the reservations should train you people to read instead of training to be creepy motherfuck got em yeah dude how does it feel to get cucked by an emo american? You think so? I don't know much about Soto tbh and don't really count us as friends What you said didn't make any sense. I dont know,i figured youd know more being the author of this fanfic reading comprehension is not a strong priority in the reservation we get it eh too much effort That is literally ur catchphrase hey bro at least i start to try i guess same What are friends in a place like this Errare humanum est,una salus victis nullam sperare salutem. tfw you wake up feeling like death because you couldn't sleep for hours after you tried to start. Sleep. Is. Death. Atlea- I'd have to think about who I actually call a friend here bard irl is kinda scary and respect worthy Me, right? b-but on the internet I'm just a weeb loser with no confidence who can't get a girl/guy and cant maintain a stable mindset for a week why don't you just take another year break? why can't you just be a normal poster why do you have to be gone or 100% slut why don't you get a fucking job? I know. It's intentional. Don't wanna. Define normal. Flirtatious != Slutty Yes. Don't be mean to rin when he posts best yuri sleep is actually what is helping keep you alive is what some people say! though it is known that people have died in their sleep before.. you're the one with the hella good art and stuff, right? 
Sister act tho sister act is garbage for i minute i was like "i'm sure i was doing something neat with my VPS" then i remembered it's training a modified pyEctor core on so much input that it might literally become conscious lol teehee@ whoopi golberg Ur terrible Wew lad dude you were doing so good though you had a server job what happened he needs to fucking wake up yes hi ily TP I've been flirting with Rin Is this as bad or worse than the time I did meth, like twice I need your life advice TP Never gonna happen Rin is kind of kawaii. It just isn't fair when life won't let me sleep for hours, then make me feel bad for it. Yes. If i say rin cute she cute you did meth? that shit is kinda haggard rin is a cumdumpster with no self respect so like can't hate a nigga for emailing him nudes somebody taught it to rp khajit Bart got me thinking about Pittsburgh food. I think I'll get one of these monstrosities today Erin did you read the thing I linked you earlier? +1 I know one way to make it easier to sleep where is this? That was me Can't sleep. Have stuff and things today. no, i know the story, been on my news feed for days. also, it is still training, no real front end yet. gratz, you taught an artificial intelligence how to be a nerd I am the lawless Beast whose word strikes the undefined blackened matter with cuteness. Would you suck a trap cock? Am proud Pittsburgh dad visits Primanti Bros. Grill it also still has a thing for Anonymous keeps claiming to be anonymous, and calling other people anonymous does hrt like make people blog? just asking cause they all do it 2016 Pokemon Spring Regionals. Primals do not occupy your Mega, just your Mythical. Which means you can't use a primal in formats that disallow them. What's Skynet think of Re:0? Kek i wanna talk to this qt skynet girl It would be a major oversight if he didn't. Girls are kind of known for being more into telling stories and stuff. I mean like for when you next can't oh. what do you think about it? 
then how are people feeding it info? lol. i wanna teach it to program itself further nyet Jesus Christ, Erin I'm calling the FBI YOU DID ITYOUFOUND THE HACKER KNOWN AS 4CHAN that was a forced kek and you fucking know it you cuck Honestly, people make it seem way more bad that it actually is. Its like a coke high but will keep you up for a good while. I did it to make it through work after not sleeping at all the day before and I made it through the day like nothing You dont get hungry for shit and trying to eat just makes you feel sick, the comedown isn't even that bad tbh I probably wouldnt do it again tbh, its pretty boring. Its not even that addicting either. I felt like Coke is more addicting I wanna see how rin looks now I wonder if you can teach a basic AI how to reverse a game and make cheats for you like it could happen Erin do this and you could literally pull in like 200k a year I guess mine is special. :3 It seems that people meme'd too hard with the old pic. Erin's too busy poppin pills and pulling out teeth what? Im a vegetarian and i aint fucking scared of you it reads every thread on Holla Forums page 0, and trains on and generates a reply to every post. mostly it still just repeats back what it's told, i think it has a bug in saving states. it hasn't generated any state files for specific names, it should build one for everyone, so i think i need tweak it a bit. When I can't sleep, I usually stay in bed until I don't feel like shit. I'm going to start getting ready. Hey, spoiled. Be in and out randomly until it's time for me to head you. it's like calm down on the stimulants lol jesus christ no one fucking cares it's like meth tho soto i love you Maybe I can make you remember if you get drunk. new Skynet do you think I'm cute? y/n i guarantee you are fake vegetarian just like erin
JavaScript Object Notation, commonly known as JSON, is a lightweight data-interchange format inspired by JavaScript object literal syntax. JSON is easy for humans to read and write, and just as easy for computers to parse and generate. A programmer can use JSON for storing and exchanging information in much the same way as XML. In Python it is available from the standard library: import json.

Convert JSON to Python

import json

a = '{ "name":"Megha", "age":28, "city":"Chandigarh"}'
b = json.loads(a)
print(b["age"])

Output

28

From Python's point of view, this JSON document is a nested dictionary. JSON is a very popular format that is often used in web applications, and you'll find that knowing how to work with it from Python is useful in your own projects.

Encoding a JSON String

Python's json module uses dumps() to serialize an object to a string. The "s" in dumps() stands for "string". It's easiest to see how this works by using the json module in some code. json.dumps() simply transforms a Python dictionary into a JSON string. Unless you pass extra formatting options, the resulting string is a one-liner.

Decoding JSON

Interpreting or deserializing a JSON string is done with the loads() method; loads() is the companion function to dumps(). Here is an example covering both encoding and decoding of JSON:

import json

py_object = {"x": 0, "y": 0, "z": 0}

json_string = json.dumps(py_object)
print(json_string)
print(type(json_string))

py_obj = json.loads(json_string)
print()
print(py_obj)
print(type(py_obj))

Output

{"x": 0, "y": 0, "z": 0}
<class 'str'>
{'x': 0, 'y': 0, 'z': 0}
<class 'dict'>

Serialising JSON Data

Serialization converts your Python objects into JSON types according to this table: dict → object; list, tuple → array; str → string; int, float → number; True → true; False → false; None → null.
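That conversion table can be seen in action with a small sketch (the values below are purely illustrative):

```python
import json

# dict -> object, tuple -> array, None -> null, True -> true
data = {"point": (1, 2), "empty": None, "flag": True}

encoded = json.dumps(data)
print(encoded)  # {"point": [1, 2], "empty": null, "flag": true}

# Note that the round trip is not exact: JSON has no tuple type,
# so the tuple comes back as a list.
decoded = json.loads(encoded)
print(decoded["point"])  # [1, 2]
```

Keep this asymmetry in mind when round-tripping data: types JSON cannot represent are coerced on the way out and do not come back unchanged.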
Serialising JSON data comes into play when a program handles a lot of information and needs to take a data dump. Accordingly, the json library exposes the dump() method for writing data to files. There is also a dumps() method (pronounced "dump-s") for writing to a Python string. Simple objects are translated to JSON according to a fairly intuitive conversion.

json_str = json.dumps(data, indent=4)

The indent argument controls how many spaces are used for indentation (4 here), which can make our JSON easier to read.

Conclusion

We use the JSON format very frequently when working with web APIs and web frameworks. Python offers a pleasant tool for converting JSON to Python objects and back again: the json library. JSON is easy to use from Python, and it can make exchanging data between programs quite a bit simpler.
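To round things off, here is a sketch of the file-based counterparts, dump() and load(), mentioned above (the file name and path are just examples):

```python
import json
import os
import tempfile

data = {"name": "Megha", "age": 28, "city": "Chandigarh"}

# json.dump() writes straight to an open file; json.load() reads it back.
path = os.path.join(tempfile.gettempdir(), "example.json")  # illustrative path
with open(path, "w") as f:
    json.dump(data, f, indent=4)

with open(path) as f:
    restored = json.load(f)

print(restored == data)  # True
```

The file ends up containing the same text that dumps(data, indent=4) would produce as a string.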
NanoNeuron - 7 simple JS functions that explain how machines learn

Oleksii Trekhleb ・11 min read

7 simple JavaScript functions that will give you a feeling of how machines can actually "learn".

TL;DR. ☝🏻 These functions are by no means a complete guide to machine learning. A lot of machine learning concepts are skipped and over-simplified here! This simplification is done on purpose to give the reader a really basic understanding and feeling of how machines can learn, and ultimately to make it possible for the reader to call it not "machine learning MAGIC" but rather "machine learning MATH" 🤓.

What NanoNeuron will learn

You've probably heard about Neurons in the context of Neural Networks. The NanoNeuron that we're going to implement below is kind of like one, but much simpler. For simplicity's sake we're not even going to build a network of NanoNeurons. We will have it all by itself, alone, doing some magic predictions for us. Namely, we will teach this one simple NanoNeuron to convert (predict) the temperature from Celsius to Fahrenheit.

By the way, the formula for converting Celsius to Fahrenheit is f = 1.8 * c + 32. But for now our NanoNeuron doesn't know about it...

NanoNeuron model

Let's implement our NanoNeuron model function. It implements a basic linear dependency between x and y which looks like y = w * x + b. Simply speaking, our NanoNeuron is a "kid" that can draw a straight line in XY coordinates.

The variables w and b are parameters of the model. NanoNeuron knows only about these two parameters of the linear function. These parameters are something that NanoNeuron is going to "learn" during the training process.

The only thing that NanoNeuron can do is to imitate linear dependency. In its predict() method it accepts some input x and predicts the output y. No magic here.

function NanoNeuron(w, b) {
  this.w = w;
  this.b = b;
  this.predict = (x) => {
    return x * this.w + this.b;
  };
}

(...wait... linear regression, is it you?)
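As a quick sanity check of the model (the constructor is repeated here so the snippet runs on its own), we can hand a NanoNeuron the known conversion constants ourselves. Nothing is learned here yet; we are just cheating to confirm the linear form is the right shape:

```javascript
function NanoNeuron(w, b) {
  this.w = w;
  this.b = b;
  this.predict = (x) => {
    return x * this.w + this.b;
  };
}

// With w = 1.8 and b = 32 the model reproduces the Celsius -> Fahrenheit formula.
const cheatNeuron = new NanoNeuron(1.8, 32);
console.log(cheatNeuron.predict(10));  // 50
console.log(cheatNeuron.predict(100)); // 212
```

The whole training process below is about discovering these two numbers without being told them.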
🧐

Celsius to Fahrenheit conversion

The temperature value in Celsius can be converted to Fahrenheit using the following formula: f = 1.8 * c + 32, where c is a temperature in Celsius and f is the calculated temperature in Fahrenheit.

function celsiusToFahrenheit(c) {
  const w = 1.8;
  const b = 32;
  const f = c * w + b;
  return f;
};

Ultimately we want to teach our NanoNeuron to imitate this function (to learn that w = 1.8 and b = 32) without knowing these parameters in advance. On a plot, this Celsius to Fahrenheit conversion function is just a straight line.

Generating data-sets

Before the training we need to generate training and test data-sets based on the celsiusToFahrenheit() function. Data-sets consist of pairs of input values and correctly labeled output values.

In real life, in most cases this data would be collected rather than generated. For example, we might have a set of images of hand-drawn numbers and a corresponding set of numbers that explain what number is written on each picture.

We will use TRAINING examples data to train our NanoNeuron. Before our NanoNeuron grows up and is able to make decisions on its own, we need to teach it what is right and what is wrong using the training examples.

We will use TEST examples to evaluate how well our NanoNeuron performs on data that it didn't see during the training. This is the point where we can see that our "kid" has grown and can make decisions on its own.

function generateDataSets() {
  // xTrain -> [0, 1, 2, ...]
  // yTrain -> [32, 33.8, 35.6, ...]
  const xTrain = [];
  const yTrain = [];
  for (let x = 0; x < 100; x += 1) {
    const y = celsiusToFahrenheit(x);
    xTrain.push(x);
    yTrain.push(y);
  }

  // xTest -> [0.5, 1.5, 2.5, ...]
  // yTest -> [32.9, 34.7, 36.5, ...]
  const xTest = [];
  const yTest = [];
  // By starting from 0.5 and using the same step of 1 as we used for the training set
  // we make sure that the test set contains different data than the training set.
  for (let x = 0.5; x < 100; x += 1) {
    const y = celsiusToFahrenheit(x);
    xTest.push(x);
    yTest.push(y);
  }

  return [xTrain, yTrain, xTest, yTest];
}

The cost (the error) of prediction

We need some metric that shows how close our model's prediction is to the correct values. The cost (the mistake) between the correct output value y and the prediction that NanoNeuron made will be calculated using the following formula:

cost = (y - prediction) ** 2 / 2

This is a simple difference between two values. The closer the values are to each other, the smaller the difference. We're using the power of 2 here just to get rid of negative numbers, so that (1 - 2) ^ 2 is the same as (2 - 1) ^ 2. Division by 2 happens just to simplify the backward propagation formula further on (see below).

The cost function in this case is as simple as:

function predictionCost(y, prediction) {
  return ((y - prediction) ** 2) / 2; // i.e. -> 235.6
}

Forward propagation

To do forward propagation means to make a prediction for all training examples from the xTrain and yTrain data-sets and to calculate the average cost of those predictions along the way. We just let our NanoNeuron state its opinion at this point, just ask it to guess how to convert the temperature. It might be stupidly wrong here. The average cost will show how wrong our model is right now. This cost value is really valuable, since by changing the NanoNeuron parameters w and b and doing the forward propagation again we can evaluate whether NanoNeuron became smarter or not after the parameter changes.

The average cost will be calculated using the following formula:

averageCost = (1 / m) * Σ (yᵢ - predictionᵢ)² / 2

Where m is the number of training examples (in our case, 100).
Here is how we may implement it in code:

function forwardPropagation(model, xTrain, yTrain) {
  const m = xTrain.length;
  const predictions = [];
  let cost = 0;
  for (let i = 0; i < m; i += 1) {
    const prediction = model.predict(xTrain[i]);
    cost += predictionCost(yTrain[i], prediction);
    predictions.push(prediction);
  }
  // We are interested in the average cost.
  cost /= m;
  return [predictions, cost];
}

Backward propagation

Now that we know how right or wrong our NanoNeuron's predictions are (based on the average cost at this point), what should we do to make the predictions more precise?

Backward propagation is the answer to this question. Backward propagation is the process of evaluating the cost of prediction and adjusting the NanoNeuron's parameters w and b so that the next predictions will be more precise.

This is the place where machine learning looks like magic 🧞‍♂️. The key concept here is the derivative, which shows what step to take to get closer to the cost function minimum.

Remember, finding the minimum of the cost function is the ultimate goal of the training process. If we find such values of w and b that our average cost function is small, it means that the NanoNeuron model makes really good and precise predictions.

Derivatives are a big separate topic that we will not cover in this article. MathIsFun is a good resource for getting a basic understanding of them.

One thing about derivatives that will help you understand how backward propagation works is that a derivative, by its meaning, is a tangent line to the function curve that points in the direction of the function minimum.

For example, on the plot above you can see that if we're at the point (x=2, y=4), then the slope tells us to go left and down to get to the function minimum. Also notice that the bigger the slope, the faster we should move towards the minimum.

The derivatives of our averageCost function for the parameters w and b look like this:

dW = (1 / m) * Σ (yᵢ - predictionᵢ) * xᵢ
dB = (1 / m) * Σ (yᵢ - predictionᵢ)

Where m is the number of training examples (in our case, 100).
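If you want to convince yourself of these formulas without doing the calculus, you can compare them against a numerical finite-difference approximation of the average cost. This sketch uses a tiny three-example data-set and arbitrary parameter values:

```javascript
const xs = [1, 2, 3];
const ys = [33.8, 35.6, 37.4]; // Fahrenheit values for 1, 2 and 3 °C

// Average cost for given parameters over the tiny data-set.
function averageCost(w, b) {
  let cost = 0;
  for (let i = 0; i < xs.length; i += 1) {
    const prediction = xs[i] * w + b;
    cost += ((ys[i] - prediction) ** 2) / 2;
  }
  return cost / xs.length;
}

// dW from the formula above. Note the sign: since we will later ADD
// alpha * dW to w, dW is the NEGATIVE derivative of the average cost.
function analyticDW(w, b) {
  let dW = 0;
  for (let i = 0; i < xs.length; i += 1) {
    dW += (ys[i] - (xs[i] * w + b)) * xs[i];
  }
  return dW / xs.length;
}

const w = 0.5;
const b = 10;
const h = 1e-6;
const numericDW = -(averageCost(w + h, b) - averageCost(w - h, b)) / (2 * h);

// The two values agree closely.
console.log(analyticDW(w, b), numericDW);
```

The same check works for dB by nudging b instead of w.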
You may read more about derivative rules and how to get the derivative of complex functions here.

function backwardPropagation(predictions, xTrain, yTrain) {
  const m = xTrain.length;
  // At the beginning we don't know in which way our parameters 'w' and 'b' need to be changed.
  // Therefore we're setting up the changing steps for each parameter to 0.
  let dW = 0;
  let dB = 0;
  for (let i = 0; i < m; i += 1) {
    dW += (yTrain[i] - predictions[i]) * xTrain[i];
    dB += yTrain[i] - predictions[i];
  }
  // We're interested in the average deltas for each parameter.
  dW /= m;
  dB /= m;
  return [dW, dB];
}

Training the model

Now we know how to evaluate the correctness of our model for all training set examples (forward propagation), and we also know how to make small adjustments to the parameters w and b of the NanoNeuron model (backward propagation). But the issue is that if we run forward propagation and then backward propagation only once, it won't be enough for our model to learn any laws/trends from the training data. You may compare it to a kid attending one day of elementary school. He/she should go to the school not once but day after day and year after year to learn something.

So we need to repeat forward and backward propagation for our model many times. That is exactly what the trainModel() function does. It is like a "teacher" for our NanoNeuron model:

- it will spend some time (epochs) with our yet slightly stupid NanoNeuron model and try to train/teach it,
- it will use specific "books" (the xTrain and yTrain data-sets) for training,
- it will push our kid to learn harder (faster) by using the learning rate parameter alpha.

A few words about the learning rate alpha. This is just a multiplier for the dW and dB values we have calculated during the backward propagation. So, the derivative pointed us in the direction we need to take to find the minimum of the cost function (the sign of dW and dB) and it also showed us how fast we need to go in that direction (the absolute value of dW and dB).
Now we need to multiply those step sizes by alpha to make our movement towards the minimum faster or slower. Sometimes, if we use a big value of alpha, we may simply jump over the minimum and never find it. The analogy with the teacher would be that the harder he pushes our "nano-kid", the faster the "nano-kid" will learn, but if the teacher pushes too hard, the "kid" will have a nervous breakdown and won't be able to learn anything 🤯.

Here is how we're going to update our model's w and b params:

w = w + alpha * dW
b = b + alpha * dB

And here is our trainer function:

function trainModel({model, epochs, alpha, xTrain, yTrain}) {
  // This is the history array of how the NanoNeuron learns.
  const costHistory = [];
  // Let's start counting epochs.
  for (let epoch = 0; epoch < epochs; epoch += 1) {
    // Forward propagation.
    const [predictions, cost] = forwardPropagation(model, xTrain, yTrain);
    costHistory.push(cost);
    // Backward propagation.
    const [dW, dB] = backwardPropagation(predictions, xTrain, yTrain);
    // Adjust our NanoNeuron parameters to increase the accuracy of the model's predictions.
    nanoNeuron.w += alpha * dW;
    nanoNeuron.b += alpha * dB;
  }
  return costHistory;
}

Putting all the pieces together

Now let's use the functions we have created above.

Let's create our NanoNeuron model instance. At this moment the NanoNeuron doesn't know what values should be set for the parameters w and b, so let's set up w and b randomly.

const w = Math.random(); // i.e. -> 0.9492
const b = Math.random(); // i.e. -> 0.4570
const nanoNeuron = new NanoNeuron(w, b);

Generate the training and test data-sets.

const [xTrain, yTrain, xTest, yTest] = generateDataSets();

Let's train the model with small (0.0005) steps over 70000 epochs. You can play with these parameters; they are defined empirically.

const epochs = 70000;
const alpha = 0.0005;
const trainingCostHistory = trainModel({model: nanoNeuron, epochs, alpha, xTrain, yTrain});

Let's check how the cost function was changing during the training.
We expect the cost after the training to be much lower than before. This would mean that the NanoNeuron got smarter. The opposite is also possible.

console.log('Cost before the training:', trainingCostHistory[0]); // i.e. -> 4694.3335043
console.log('Cost after the training:', trainingCostHistory[epochs - 1]); // i.e. -> 0.0000024

This is how the training cost changes over the epochs. On the x axis is the epoch number x1000.

Let's take a look at the NanoNeuron parameters to see what it has learned. We expect the NanoNeuron parameters w and b to be similar to the ones we have in the celsiusToFahrenheit() function (w = 1.8 and b = 32), since our NanoNeuron tried to imitate it.

console.log('NanoNeuron parameters:', {w: nanoNeuron.w, b: nanoNeuron.b}); // i.e. -> {w: 1.8, b: 31.99}

Let's evaluate the model's accuracy on the test data-set to see how well our NanoNeuron deals with new, unknown data. The cost of predictions on the test set is expected to be close to the training cost. This would mean that the NanoNeuron performs well on both known and unknown data.

const [testPredictions, testCost] = forwardPropagation(nanoNeuron, xTest, yTest);
console.log('Cost on new testing data:', testCost); // i.e. -> 0.0000023

Now, since our NanoNeuron "kid" has performed well in "school" during the training, and since it can convert Celsius to Fahrenheit temperatures correctly even for data it hasn't seen, we can call it "smart" and ask it some questions. This was the ultimate goal of the whole training process.

const tempInCelsius = 70;
const customPrediction = nanoNeuron.predict(tempInCelsius);
console.log(`NanoNeuron "thinks" that ${tempInCelsius}°C in Fahrenheit is:`, customPrediction); // -> 158.0002
console.log('Correct answer is:', celsiusToFahrenheit(tempInCelsius)); // -> 158

So close! Like all humans, our NanoNeuron is good but not ideal :)

Happy learning to you!
How to launch NanoNeuron

You may clone the repository and run it locally:

git clone
cd nano-neuron
node ./NanoNeuron.js

Skipped machine learning concepts

The following machine learning concepts were skipped and simplified for the sake of a simpler explanation.

Train/test sets splitting

Normally you have one big set of data. Depending on the number of examples in that set, you may want to split it in a proportion of 70/30 for the train/test sets. The data in the set should be randomly shuffled before the split. If the number of examples is big (e.g. millions), then the split might happen in proportions closer to 90/10 or 95/5 for the train/test data-sets.

The network brings the power

Normally you won't notice the usage of just one standalone neuron. The power is in the network of such neurons. A network can learn much more complex features. NanoNeuron alone looks more like a simple linear regression than a neural network.

Input normalization

Before the training, it would be better to normalize the input values.

Vectorized implementation

For networks, vectorized (matrix) calculations work much faster than for loops. Normally forward/backward propagation works much faster if it is implemented in vectorized form and calculated using, for example, the NumPy Python library.

Minimum of cost function

The cost function that we were using in this example is over-simplified. It should have logarithmic components. Changing the cost function will also change its derivatives, so the back propagation step will also use different formulas.

Activation function

Normally the output of a neuron should be passed through an activation function like Sigmoid or ReLU or others.
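The input normalization point above can be sketched as a small helper. This is a hypothetical illustration (min-max scaling to the [0, 1] range), not part of the NanoNeuron repository:

```javascript
// Hypothetical min-max normalization helper (not part of NanoNeuron itself):
// rescales inputs into [0, 1], which usually makes gradient descent
// converge faster and lets a single learning rate suit all inputs.
function normalize(xs) {
  const min = Math.min(...xs);
  const max = Math.max(...xs);
  if (max === min) return xs.map(() => 0); // avoid division by zero
  return xs.map((x) => (x - min) / (max - min));
}

console.log(normalize([0, 50, 100])); // [ 0, 0.5, 1 ]
```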
https://practicaldev-herokuapp-com.global.ssl.fastly.net/trekhleb/nano-neuron-58j0
Jun 13 2017 05:41 PM

So the documentation for SPFx command sets says we should use

import { Dialog } from '@microsoft/sp-dialog/lib/index';

to display a dialog box. I thought we were supposed to be using Fabric for the UI. What's up with that?

Jun 14 2017 05:02 AM

@Russell Gove, one being right doesn't mean that the other is wrong. For dialogs you have quite a few options; the option in this article by @Vesa Juvonen is one way:...

Recently, for one of my customers, I developed a custom form solution within a TypeScript SPFx web part. The nice thing about the SPFx is that you can use any framework you like: just add it with npm to your project and off you go. No real limitations, just a lot of options. So far I've found it a great way to say to my customers "almost anything is possible". If it doesn't exist yet, then simply start from scratch, and with simple TypeScript development you could build the forms, dialogs, etc. It all depends on your requirements. In Vesa's article you will find one of these options.

Jun 14 2017 05:13 AM
https://techcommunity.microsoft.com/t5/sharepoint-developer/import-dialog-from-microsoft-sp-dialog-lib-index/td-p/77600
uuid man page

OSSP uuid — Universally Unique Identifier Command-Line Tool

Version

OSSP uuid 1.6.2 (04-Jul-2008)

Synopsis

uuid [-v version] [-m] [-n count] [-1] [-F format] [-o filename] [namespace name]

uuid -d [-r] [-o filename] uuid

Description

OSSP uuid is an ISO-C:1999 application programming interface (API) and corresponding command line interface (CLI) for the generation of DCE 1.1, ISO/IEC 11578:1996 and IETF RFC 4122 compliant Universally Unique Identifiers (UUIDs). uuid is the command line interface (CLI) of OSSP uuid. For a detailed description of UUIDs see the documentation of the application programming interface (API) in uuid(3).

Options

- -v version
Sets the version of the generated DCE 1.1 variant UUID. Supported are version "1", "3", "4" and "5". The default is "1". For version 3 and version 5 UUIDs the additional command line arguments namespace and name have to be given. The namespace is either a UUID in string representation or an identifier for internally pre-defined namespace UUIDs (currently known are "ns:DNS", "ns:URL", "ns:OID", and "ns:X500"). The name is a string of arbitrary length.

- -m
Forces the use of a random multi-cast MAC address when generating version 1 UUIDs. By default the real physical MAC address of the system is used.

- -n count
Generate count UUIDs instead of just a single one (the default).

- -1
If option -n is used with a count greater than 1, then this option can enforce a reset of the UUID context for each generated UUID. This makes no difference for version 3, 4 and 5 UUIDs. But version 1 UUIDs…

- -r
This is equivalent to -F BIN.

See Also

uuid(3), OSSP::uuid(3).

Referenced By

ossp-uuid(3), OSSP::uuid(3), uuid-config(1).
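As a cross-check of the name-based versions described under -v: version 3 and 5 UUIDs are deterministic hashes of (namespace, name), so any conforming RFC 4122 implementation must produce the same result for the same inputs. For illustration, here is the equivalent of the CLI's "ns:DNS" namespace using Python's standard uuid module (an independent implementation, not part of OSSP uuid):

```python
import uuid

# Name-based UUIDs hash (namespace UUID, name): version 3 uses MD5,
# version 5 uses SHA-1. The CLI's "ns:DNS" identifier corresponds to
# uuid.NAMESPACE_DNS here.
v3 = uuid.uuid3(uuid.NAMESPACE_DNS, "python.org")
v5 = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")

print(v3)  # 6fa459ea-ee8a-3ca4-894e-db77e160355e
print(v5)  # 886313e1-3b8a-5372-9b90-0c9aee199e5d
```

Because the output is deterministic, running the same command (or the snippet above) twice always yields the same UUID, unlike version 1 and 4.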
https://www.mankier.com/1/uuid
The other day I read a forum post explaining that it was easy to block PNRP traffic at the edge of the network – just block all traffic headed to pnrp.net. I worry about the user's motivation for blocking PNRP (peer-to-peer networking and illegal music sharing are not synonymous!) but I feel the need to clear up this misconception and explain *.pnrp.net names. Blocking traffic sent to pnrp.net will NOT stop PNRP.

DNS / Host Name Encoding

If you weren't already aware, PNRP names can be resolved in your existing Windows applications if they are formatted to look like DNS names. We call this DNS encoding. DNS encoding is needed for a number of reasons. For one thing, PNRP names consist of Unicode characters. Historically, host names could consist of only ASCII letters, numbers and hyphens. To play nicely, we convert PNRP names to punycode during DNS encoding. Punycode makes Unicode strings host name friendly. More information about punycode can be found on the IETF website if you're interested ().

If you've read the PNRP whitepaper, you'll know that a PNRP name consists of an authority followed by a friendly string known as the classifier. In DNS, it's usually the other way around. That is, the rightmost portions of the name are considered the most authoritative. To maintain the status quo, we swap the authority and classifier when DNS encoding and add on the suffix .pnrp.net. A PNRP name would read classifier.authority.pnrp.net. We prefix the letter 'p' to a converted authority if it does not begin with a letter, because a DNS label cannot begin with a number. We drop the authority altogether if you're publishing an unsecured name.

There are a couple of functions for converting between peer name and host name encodings. Our API function PeerNameToPeerHostName converts from PNRP to host name encoding and PeerHostNameToPeerName does just the opposite. You can also use the netsh p2p pnrp peer show convertedname command.

Demonstration

A demonstration will help you believe me.
Let’s use netsh to publish a name in pnrp. We’ll then convert it to host encoding. Finally, we’ll use it in an application: ping! Register a name We can use the netsh p2p pnrp peer add registration command to register a name. Be careful though – a pnrp registration only lasts as long as the registering application. If you call netsh p2p pnrp peer add registration from the command line, netsh will start, register the name the immediately exit … unregistering the name! Instead, register the name by following these steps: 1. Start netsh C:\>netsh netsh> 2. Switch to the p2p pnrp peer context by typing p2p pnrp peer netsh>p2p pnrp peer netsh p2p pnrp peer> 3. Register the name by typing add registration 0.yourName netsh p2p pnrp peer>add registration 0.tylerTesting Peer name has been registered in all clouds Ok. 4. Leave the netsh window open! If you exit netsh, the registration will be torn down. Convert to host name format In a separate command prompt, convert to host name encoding using the netsh p2p pnrp peer show convertedname command. C:\>netsh p2p pnrp peer show convertedname 0.tylerTesting DNS Name: tyleresting-v2a.p0.pnrp.net Use your DNS encoded peer name! The easiest test is ping. Try it! C:\>ping tyleresting-v2a.p0.pnrp.net Pinging tyleresting-v2a.p0.pnrp.net [2001:0:4136:e37a:104f:3e94:bc57:5e99] from2001:0:4136:e37a:104f:3e94:bc57:5e99 with 32 bytes of data: Reply Ping statistics for 2001:0:4136:e37a:104f:3e94:bc57:5e99: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 0ms, Maximum = 0ms, Average = 0ms Under the hood Finally, we get to the misconception that I set out to clear up. Microsoft is not hosting some kind of DNS to PNRP gateway behind pnrp.net. We’re actually doing a pnrp resolution on your computer when you use a DNS encoded peer name. Getaddrinfo is the function most applications call to resolve a host name to an ip address. 
Getaddrinfo is protocol independent and calls into multiple name space providers. DNS and NetBIOS are two very old examples. PNRP has a namespace provider too. If it sees a request to resolve a name ending in .pnrp.net, it converts the name from host encoding to PNRP encoding, then uses the PNRP service to do a lookup. So you see, blocking connections to .pnrp.net really won't do much.

As always, email me if you have questions. Have fun!

Tyler
Tylbart at Microsoft.com

Kevin Hoffman has continued his series about peer-to-peer networking. You can find an index of his posts

But wouldn't blocking either pnrpv2.ipv6.microsoft.com or pnrpv21.ipv6.microsoft.com at least prevent registration/resolution within the global cloud? Not that anyone should want to do that.
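The host-name encoding rules described earlier (swap authority and classifier, prefix 'p' when the authority does not start with a letter, append .pnrp.net) can be sketched roughly as follows. This is a simplified illustration only: the real converter also punycode-encodes Unicode classifiers and normalizes them, so its output can differ from this sketch (as the munged classifier in the ping demonstration shows):

```python
def dns_encode_peer_name(authority: str, classifier: str) -> str:
    """Rough sketch of PNRP -> host-name encoding as described above:
    swap authority and classifier, prefix 'p' if the authority does not
    begin with a letter, and append the .pnrp.net suffix.
    (The real PeerNameToPeerHostName also applies punycode; this sketch
    assumes an ASCII classifier.)"""
    if not authority[0].isalpha():
        authority = "p" + authority
    return f"{classifier}.{authority}.pnrp.net"

# The unsecured name "0.tylerTesting" has authority "0", so it gains
# the 'p' prefix: classifier.p0.pnrp.net.
print(dns_encode_peer_name("0", "tylerTesting"))  # tylerTesting.p0.pnrp.net
```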
http://blogs.msdn.com/p2p/archive/2007/06/15/pnrp-and-pnrp-net.aspx
Okay guys, I am having a problem with this code. I really need help with this one because I am stuck. The problem with this question is that the name, SSN, ID, and pass have to be ONE STRING. It's annoying because I wish I could make the strings split up, but we have to create a substring. PLEASE NOTHING TOO FANCY, I'm still a beginner. This is what I have so far:

Write a program that reads in a line consisting of a student's name, Social Security number, user ID and password. The program outputs the string in which all the digits of the Social Security number, and all the characters in the password, are replaced with X. (The Social Security number is of the form 000-00-0000, and the user ID and the password do not contain any spaces.) Your program should not use operator [ ] to access a string element. Use appropriate functions that are described in the textbook.

#include <iostream>
#include <iomanip>
#include <string>

using namespace std;

void getInfo(string& info);
void splitInfo(string& a, string& b);
void unseenInfo(string& hide);

int main()
{
    string k, name;
    getInfo(k);
    splitInfo(k, name);
    system("PAUSE");
    return 0;
}

void getInfo(string& info)
{
    cout << "Enter your Name, Social Security number, User ID, and Password - separated by commas: " << endl;
    getline(cin, info);
    return;
}

/*cout << "Name: ";
for (counter = 0; counter < b.length(); counter++)
{
    if (b.length() == ',')
        cout << " ";
*/
//cout << "SSN: ";
//for (counter = 0; counter < b.length(); counter++)
//{
//    cout << "x";
//}
//cout << endl;
//cout << "User ID: ";
//for (counter = 0; counter < b.length(); counter++)
//{
//    cout << b; //if I did cout << "x"; it could replace the user ID with x's
//}
//cout << endl;
//cout << "Password: ";
//for (counter = 0; counter < b.length(); counter++)
//{
//    cout << "x";
//}
/*cin.ignore();
return;
}*/

//void split(char a)
//{
//
//}

void splitInfo(string& a, string& b)
{
    b = "a";
    int p = 1;
    for (int i = 0; a.at(i) != ','; i++)
        p++;
    if (a.at(0) != ' ')
        b = a.substr(0, p); //assigned substring to b
    else
        b = a.substr(1, p); //end of pulling out the substring from the string
    a.erase(0, p + 1); //erase from 0 to p + 1: erase the first comma and everything before it
    cout << a << endl;
    cout << b << endl;
    cout << endl;
    cout << endl;
    for (int i = 0; a.at(i) != ','; i++)
        p++;
    if (a.at(0) != ' ')
        b = a.substr(0, p);
    else
        b = a.substr(1, p);
    a.erase(0, p + 1);
    cout << a << endl;
    cout << b << endl;
    return;
}

void unseenInfo(string& hide)
{
    char y = 'x'; //to hide social security and password
    for (int i = 0; i < hide.length(); i++)
        hide[i] = y; //turns string values into x's to hide the string
    return;
}

Thank you for helping me!
https://www.daniweb.com/programming/software-development/threads/480988/write-a-program-that-reads-in-a-line-consisting-of-a-student-s-name-social
From: Martin Wille (mw8329_at_[hidden])
Date: 2006-07-25 02:57:54

Joe Gottman wrote:
> In order to avoid having an unnamed namespace inside a header file, none.hpp
> has been rewritten as
>
> namespace boost {
> none_t const none = ((none_t)0) ;
> } // namespace
>
> But if two source files contain none.hpp won't this violate the
> one-definition rule?

none would have internal linkage according to 7.1.5.1/2. So it wouldn't violate the ODR. But a pointer to none from one translation unit would differ from a pointer to none from a different TU.

> What's wrong with defining none_t and none as follows:
>
> namespace boost {
> enum none_t {none = 0};
> }
>
> This allows us to have a unique value boost::none of type boost::none_t that
> can be included in any number of source files without violating the ODR.

It would also avoid the pointer-to-none problem mentioned above, simply because you can't create a pointer to an enumerator. However, none_t instances of enumeration type would be implicitly convertible to int, while the member pointer version of none_t isn't.

Regards,
m

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/07/108353.php
Trending Projects is available as a weekly newsletter please sign up at to ensure you never miss an issue. 1. utility-types Collection of utility types, complementing TypeScript built-in mapped types and aliases (think "lodash" for static types). piotrwitek / utility-types Collection of utility types, complementing TypeScript built-in mapped types and aliases (think "lodash" for static types). utility-types Collection of utility types, complementing TypeScript built-in mapped types and aliases (think "lodash" for static types). Found it useful? Want more updates? Show your support by giving a What's new? Features - Providing a set of Common Types for TypeScript projects that are idiomatic and complementary to existing TypeScript Mapped Types so you don't need to copy them between the projects. - Providing a set of Additional Types compatible with Flow's Utility Types to allow much easier migration to TypeScript. Goals - Quality - thoroughly tested for type correctness with type-testing library dts-jest - Secure and minimal - no third-party dependencies - No runtime cost - it's type-level only Installation # NPM npm install utility-types # YARN yarn add utility-types Compatibility Notes TypeScript support v3.x.x- TypeScript v3.1+ v2.x.x- TypeScript v2.8.1+ v1.x.x- TypeScript v2.7.2+ Funding Issues Utility-Types is an open-source project created… 2. active-win Get metadata about the active window - title, id, bounds, owner, etc sindresorhus / active-win Get metadata about the active window (title, id, bounds, owner, etc) active-win Get metadata about the active window (title, id, bounds, owner, URL, etc) Works on macOS, Linux, Windows. Users on macOS 10.13 or earlier needs to download the Swift runtime support libraries. 
Install $ npm install active-win Usage const activeWindow = require('active-win'); (async () => { console.log(await activeWindow(options)); /* { title: 'Unicorns - Google Search', id: 5762, bounds: { x: 0, y: 0, height: 900, width: 1440 }, owner: { name: 'Google Chrome', processId: 310, bundleId: 'com.google.Chrome', path: '/Applications/Google Chrome.app' }, url: '', memoryUsage: 11015432 } */ })(); API activeWindow(options?) options Type: object screenRecordingPermission (macOS only) Type: boolean Default: true Enable the screen recording permission check. Setting this to false will prevent the screen recording permission prompt on macOS versions 10.15 and newer. The title property in the result will… 3. Cheetah Grid The fastest open-source data table for web. future-architect / cheetah-grid The fastest open-source data table for web. Cheetah Grid The fastest open-source data table for web. Downloading Cheetah Grid Using Cheetah Grid with a CDN <script src=""></script> Downloading Cheetah Grid using npm npm install -S cheetah-grid import * as cheetahGrid from 'cheetah-grid' // or const cheetahGrid = require("cheetah-grid") Downloading Cheetah Grid source code SourceMap cheetahGrid.es5.min.js.map Downloading Cheetah Grid using GitHub git clone git clone npm install & build npm install npm run build built file is created in the ./packages/cheetah-grid/dist directory Usage Example for basic usage <div id="sample" style="height: 300px; border: solid 1px #ddd;"> </div> <script> // initialize const grid = new cheetahGrid.ListGrid({ // Parent element on which to place the grid parentElement: document.querySelector('#sample'), // Header definition header… 4. Superplate A well-structured production-ready frontend boilerplate with Typescript, Jest, testing-library, styled-component, Sass, Css, .env, Fetch, Axios, Reverse Proxy, Bundle Analyzer and 30+ plugin. For now, only creates project for Next.js. 
pankod / superplate A well-structured production-ready frontend boilerplate with Typescript, Jest, testing-library, styled-component, Sass, Css, .env, Fetch, Axios, Reverse Proxy, Bundle Analyzer and 30+ plugin. For now, only creates project for Next.js. About superplate has been developed to create rock solid UI frameworks apps boilerplate with no build configurations in seconds. You can add usefull, highly-demands front-end development tools and libraries as a plugin by using superplate CLI during the project creation phase. To learn on how superplate and its plugins work, you can check out our documentation. For now, superplate only creates project for Next.js apps as a default Framework option. Other frameworks will be added soon. Available Integrations Documentation For more detailed information and usage, refer to the superplate documentation. Quick Start To create a new app run the command: npx superplate-cli <my-project> Make sure you have npx installed (npx is shipped… 5. React Cool Portal React hook for Portals, which renders modals, dropdowns, tooltips etc. wellyshen / react-cool-portal 😎 🍒 React hook for Portals, which renders modals, dropdowns, tooltips etc. to <body> or else. REACT COOL PORTAL This is a React hook for Portals.. 👀Looking for a form library? Give React Cool Form a try! Live Demo Features 🍒Renders an element or component to <body>or a specified DOM element. 🎣React Portals feat. Hook. 🤖Built-in state controllers, event listeners and many useful features for a comprehensive DX. 🧱Used as a scaffold to build your customized hook. 🧹Auto removes the un-used portal container for you. Doesn't produce any DOM mess. 📜Supports TypeScript type… 6. Lazy Git A simple terminal UI for git commands… 7. ts-essentials All basic TypeScript types in one place krzkaczor / ts-essentials All basic TypeScript types in one place 🤙 ts-essentials All essential TypeScript types in one place Install npm install --save-dev ts-essentials typescript>=3.7. 
If you're looking for support for older TS versions use ts-essentials@3 (for 3.6>=) or ts-essentials@2 instead. If you use any functions you should add ts-essentials to your dependencies ( npm install --save ts-essentials) to avoid runtime errors in production. What's inside? - Install - What's inside? - Basic - Dictionaries - Deep* wrapper types - DeepPartial - DeepRequired - DeepReadonly - DeepNonNullable - DeepNullable - DeepUndefinable - Writable & DeepWritable - Buildable - Omit - StrictOmit - DeepOmit - OmitProperties - PickProperties - NonNever - Merge - MarkRequired - MarkOptional - ReadonlyKeys - WritableKeys - OptionalKeys - RequiredKeys - UnionToIntersection - Opaque types - Tuple constraint - Exhaustive switch cases - ValueOf type - ElementOf type - AsyncOrSync type - Awaited type - Newable - Assertions - Exact - XOR - Functional type essentials - Head - Tail - Contributors Basic Primitivetype matching all primitive values. noopfunction that takes any arguments and returns nothing, as a placeholder for e.g. callbacks. Dictionaries keywords: map const stringDict… 8. Awesome Captcha Curated list of awesome captcha libraries and crack tools. ZYSzys / awesome-captcha 🔑 Curated list of awesome captcha libraries and crack tools. Awesome Captcha Curated list of awesome captcha libraries and captcha crack tools. CAPTCHA is a type of challenge–response test used in computing to determine whether or not the user is human. Contents Libraries - mewebstudio/captcha - Captcha for Laravel 5. - CGregwar/Captcha - PHP Captcha library. - trekjs/captcha - A Lightweight Pure JavaScript Captcha for Node.js. No C/C++, No ImageMagick, No Canvas. - pusuo/patchca - Simple yet powerful CAPTCHA library written in Java. - google/recaptcha - PHP client library for reCAPTCHA, a free service to protect your website from spam and abuse. - ambethia/recaptcha - ReCaptcha helpers for ruby apps. - anhskohbo/no-captcha - No CAPTCHA reCAPTCHA For Laravel. 
- lorien/captcha_solver - Universal python API to different captcha solving services. Generation - dchest/captcha - Go package captcha implements generation and verification of image and audio CAPTCHAs. - lepture/captcha - A captcha library that generates audio and image CAPTCHAs. - … 9. bundless Dev server and bundler for esbuild bundless Next gen dev server and bundler project under heavy development Features - 10x faster than traditional bundlers - Error panel with sourcemap support - jsx, typescript out of the box - import assets, import css What's the difference with traditional tools like Webpack? - Faster dev server times and faster build speeds (thanks to esbuild) - Bundless serves native ES modules to the browser, removing the overhead of parsing each module before serving - Bundless uses a superset of esbuild plugin system to let users enrich its capabilities What's the difference with tools like vite? Bundless is very similar to vite, both serve native es modules to the browser and build a bundled version for production. Also both are based on a plugin system that can be shared between the dev server and the bundler. Some differences are: - Bundless uses the esbuild plugin system instead of rollup - Bundless uses esbuild instead of rollup for the… 10. CSS Layout A collection of popular layouts and patterns made with CSS. Now it has 90+ patterns and continues growing! phuoc-ng / csslayout A collection of popular layouts and patterns made with CSS. 
Now it has 90+ patterns and continues growing!

Stargazing 📈

Top risers over the last 7 days

- JavaScript Questions +1,413 stars
- Headless UI +1,206 stars
- Public APIs +808 stars
- Clean Code JavaScript +761 stars
- Web Projects With Vanilla JavaScript +739 stars

Top risers over the last 30 days

- Coding Interview University +6,163 stars
- Public APIs +4,540 stars
- Clone Wars +4,444 stars
- JavaScript Algorithms +4,047 stars
- Web Dev For Beginners +3,926 stars

If you enjoyed this article you can follow me on Twitter where I regularly post bite size tips relating to HTML, CSS and JavaScript.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/iainfreestone/10-trending-projects-on-github-for-web-developers-16th-april-2021-k0k
#include <wx/socket.h>

wxSocketBase is the base class for all socket-related objects, and it defines all basic IO functionality.

The following event handler macros redirect the events to member function handlers 'func' with prototypes like: void handlerFuncName(wxSocketEvent& event). Event macros for events emitted by this class: EVT_SOCKET(id, func) processes a wxEVT_SOCKET event. See wxSocketEventFlags and wxSocketFlags for more info.

Default constructor. Don't use it directly; instead, use wxSocketClient to construct a socket client, or wxSocketServer to construct a socket server.

Shut down the socket, disabling further transmission and reception of data, and disable events for the socket and free the associated system resources. Upon socket destruction, Close() is automatically called, so in most cases you won't need to do it yourself, unless you explicitly want to shut down the socket, typically to notify the peer that you are closing the connection.

Destroys the socket safely. Use this function instead of the delete operator, since otherwise socket events could reach the application even after the socket has been destroyed. To prevent this problem, this function appends the wxSocket to a list of objects to be deleted on idle time, after all events have been processed. For the same reason, you should avoid creating socket objects on the stack. Destroy() calls Close() automatically.

Delete all bytes in the incoming queue. This function always returns immediately and its operation is not affected by IO flags. Use LastCount() to verify the number of bytes actually discarded. If you use Error(), it will always return false.

Returns a pointer to the client data for this socket, as set with SetClientData().

Returns the current IO flags, as set with SetFlags().

Return the local address of the socket.

Return the peer address field of the socket.

Returns the native socket descriptor. This is intended for use with rarely used platform-specific features that can only be accessed via the actual socket descriptor.
Do not use this for reading or writing data from or to the socket as this would almost surely interfere with wxSocket code logic and result in unexpected behaviour. The socket must be successfully initialized, e.g. connected for client sockets, before this method can be called. Return the socket timeout in seconds. The timeout can be set using SetTimeout() and is 10 minutes by default. Perform the initialization needed in order to use the sockets. This function is called from wxSocket constructor implicitly and so normally doesn't need to be called explicitly. There is however one important exception: as this function must be called from the main (UI) thread, if you use wxSocket from multiple threads you must call Initialize() from the main thread before creating wxSocket objects in the other ones. It is safe to call this function multiple times (only the first call does anything) but you must call Shutdown() exactly once for every call to Initialize(). This function should only be called from the main thread. Use this function to interrupt any wait operation currently in progress. Note that this is not intended as a regular way to interrupt a Wait call, but only as an escape mechanism for exceptional situations where it is absolutely necessary to use it, for example to abort an operation due to some exception or abnormal problem. InterruptWait is automatically called when you Close() a socket (and thus also upon socket destruction), so you don't need to use it in these cases. Returns true if the socket is connected. Check if the socket can be currently read or written. This might mean that queued data is available for reading or, for streamed sockets, that the connection has been closed, so that a read operation will complete immediately without blocking (unless the wxSOCKET_WAITALL flag is set, in which case the operation might still block). Returns true if the socket is not connected. Returns true if the socket is initialized and ready and false in other cases. 
Returns the number of bytes read or written by the last IO call. Use this function to get the number of bytes actually transferred after using one of the following IO calls: Discard(), Peek(), Read(), ReadMsg(), Unread(), Write(), WriteMsg(). Returns the last wxSocket error. See wxSocketError . Returns the number of bytes read by the last Read() or ReadMsg() call (receive direction only). This function is thread-safe, in case Read() is executed in a different thread than Write(). Use LastReadCount() instead of LastCount() for this reason. Unlike LastCount(), the functions Discard(), Peek(), and Unread() are currently not supported by LastReadCount(). Returns the number of bytes written by the last Write() or WriteMsg() call (transmit direction only). This function is thread-safe, in case Write() is executed in a different thread than Read(). Use LastWriteCount() instead of LastCount() for this reason. According to the notify value, this function enables or disables socket events. If notify is true, the events configured with SetNotify() will be sent to the application. If notify is false; no events will be sent. Peek into the socket by copying the next bytes which would be read by Read() into the provided buffer. Peeking a buffer doesn't delete it from the socket input queue, i.e. calling Read() will return the same data. Use LastCount() to verify the number of bytes actually peeked. Use Error() to determine if the operation succeeded. Read up to the given number of bytes from the socket. Use LastReadCount() to verify the number of bytes actually read. Use Error() to determine if the operation succeeded. Receive a message sent by WriteMsg(). If the buffer passed to the function isn't big enough, the remaining bytes will be discarded. This function always waits for the buffer to be entirely filled, unless an error occurs. Use LastReadCount() to verify the number of bytes actually read. Use Error() to determine if the operation succeeded. 
RestoreState(): Restore the previous state of the socket, as saved with SaveState(). Calls to SaveState() and RestoreState() can be nested.

SaveState(): Save the current state of the socket in a stack. Socket state includes flags, as set with SetFlags(); the event mask, as set with SetNotify() and Notify(); and user data, as set with SetClientData(). Calls to SaveState() and RestoreState() can be nested.

SetClientData(): Sets user-supplied client data for this socket. All socket events will contain a pointer to this data, which can be retrieved with the wxSocketEvent::GetClientData() function.

SetEventHandler(): Sets an event handler to be called when a socket event occurs. The handler will be called for those events for which notification is enabled with SetNotify() and Notify().

SetFlags(): Use SetFlags() to customize IO operations for this socket. The flags parameter may be a combination of flags ORed together. Notice that not all combinations of flags affecting the IO calls (Read() and Write()) make sense, e.g. wxSOCKET_NOWAIT can't be combined with wxSOCKET_WAITALL nor with wxSOCKET_BLOCK. The following flags can be used: For more information on socket flags see wxSocketFlags.

SetLocal(): Set the local address and port to use. This function must always be called for server sockets, but may also be called for client sockets; if it is, bind() is called before connect().

SetNotify(): Specifies which socket events are to be sent to the event handler. The flags parameter may be a combination of flags ORed together. The following flags can be used: For example: In this example, the user will be notified about incoming socket data and whenever the connection is closed. For more information on socket events see wxSocketEventFlags.

Shutdown(): Shut down the sockets. This function undoes the call to Initialize() and must be called after every successful call to Initialize(). This function should only be called from the main thread, just like Initialize().

ShutdownOutput(): Shuts down the writing end of the socket.
This function simply calls the standard shutdown() function on the underlying socket, indicating that nothing will be written to this socket any more.

Unread(): Put the specified data into the input queue. The data in the buffer will be returned by the next call to Read(). This function is not affected by wxSocket flags. If you use LastCount(), it will always return nbytes. If you use Error(), it will always return false.

Wait(): Wait for any socket event. Possible socket events are: Note that it is recommended to use the individual WaitForXXX() functions to wait for the required condition, instead of this one.

WaitForLost(): Wait until the connection is lost. This may happen if the peer gracefully closes the connection or if the connection breaks.

WaitForRead(): Wait until the socket is readable. This might mean that queued data is available for reading or, for streamed sockets, that the connection has been closed, so that a read operation will complete immediately without blocking (unless the wxSOCKET_WAITALL flag is set, in which case the operation might still block). Notice that this function should not be called if there is already data available for reading on the socket.

WaitForWrite(): Wait until the socket becomes writable. This might mean that the socket is ready to send new data, or for streamed sockets, that the connection has been closed, so that a write operation is guaranteed to complete immediately (unless the wxSOCKET_WAITALL flag is set, in which case the operation might still block). Notice that this function should not be called if the socket is already writable.

Write(): Write up to the given number of bytes to the socket. Use LastWriteCount() to verify the number of bytes actually written. Use Error() to determine if the operation succeeded. The exact behaviour of Write() depends on the combination of flags being used. For a detailed explanation, see SetFlags().

WriteMsg(): Sends a buffer which can be read using ReadMsg().
WriteMsg() sends a short header before the data so that ReadMsg() knows how much data should actually be read. This function always waits for the entire buffer to be sent, unless an error occurs. Use LastWriteCount() to verify the number of bytes actually written. Use Error() to determine if the operation succeeded. WriteMsg() will behave as if the wxSOCKET_WAITALL flag was always set, and it will always ignore the wxSOCKET_NOWAIT flag. The exact behaviour of WriteMsg() depends on the wxSOCKET_BLOCK flag. For a detailed explanation, see SetFlags(). For thread safety, in case ReadMsg() and WriteMsg() are called in different threads, it is a good idea to call SetFlags() with the desired flags before the first calls to ReadMsg() and WriteMsg() in different threads, as each of these functions calls SetFlags(), which performs a read/modify/write. By setting these flags before the multi-threading starts, you ensure that they don't get reset by thread race conditions.
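The event-selection example elided above ("For example:") and the thread-safety advice for ReadMsg()/WriteMsg() can be sketched as follows (assuming a wxSocketBase-derived object named sock; the flag combination shown is one reasonable choice, not the only one):

```cpp
// Be notified about incoming socket data and about the connection being closed:
sock->SetNotify(wxSOCKET_INPUT_FLAG | wxSOCKET_LOST_FLAG);
sock->Notify(true);

// For threaded ReadMsg()/WriteMsg() use, fix the flags once, up front,
// so the internal SetFlags() read/modify/write cannot race:
sock->SetFlags(wxSOCKET_WAITALL | wxSOCKET_BLOCK);
```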
http://docs.wxwidgets.org/trunk/classwx_socket_base.html
Intel® Galileo and Intel® Galileo Gen 2, pp 35-91

Native Development

Abstract

The build system used to create the Intel Galileo images—the Yocto project—is very powerful, but it is not as straightforward as a simple make command. Some computer preparation must be done. It is also good to know a little about how the build system works, how to compile the SPI and SD card images for Intel Galileo, and how to build the Intel Quark toolchains in order to have the cross-compilers in hand. This chapter also shows you how to build a very simple native Hello World application after the installation of the toolchains. Considering that you will be able to create your own releases, especially the SPI images, there is some instruction on how to recover your board in case you make a mistake and brick it. In the end, this chapter gives you knowledge that will be necessary in the next chapters. The intention is not to give a full understanding of all techniques involved in native development, especially debugging.

Introduction to the Yocto Build System

Suppose that you are creating a great product that uses Linux as an embedded operating system because it is open source and free, reducing the product costs while bringing a great operating system to your users. Does this sound right? The answer is that it depends. Linux is an amazing operating system, but "reducing the cost" in embedded development can turn into a huge nightmare if you do not have good control of the features required by your product, such as which Linux distribution is able to meet your requirements with minimal effort to join all the pieces together. In order to let you create a custom Linux image with exactly the features that you need, the Yocto project was created to be a flexible and customizable platform for different hardware architectures and code.
The Yocto project brings a series of tools, methods, and code, allowing you to choose the CPU that your product targets, the software and hardware components, and the footprint size, in order to build a software release based on the Linux operating system. Among the CPUs supported are Intel Architecture (IA), ARM, PowerPC, and MIPS. Besides the product releases, Yocto also allows you to build tools like software development kits (SDKs) and applications to be used with your product. For example, with the Intel Galileo boards, we'll cover how to build your own toolchain containing cross-compilers for different operating systems. Once the Yocto project is established with its configuration and components, when new components must be added or removed, or even if a new product must be created based on a legacy one that is supported by Yocto, everything will be easier because your product is reusable. The Yocto project is maintained by the Linux Foundation, meaning that your product will be independent of any vendor or company. Companies like Intel, Dell, Mindspeed, Wind River, Mentor Graphics, Panasonic, and Texas Instruments, among others, participate in the Yocto project.

Yocto and This Book

To understand all the details regarding the Yocto project, another book dedicated exclusively to Yocto would be necessary, because in the same manner that Yocto is powerful, it is extensive, with a lot of details involved. In this chapter specifically, some basic concepts regarding Yocto are explained so that you understand the build process in an Intel Galileo and Intel Quark context. Instead of executing a bunch of commands to get your builds done without any idea of what is going on, this section gives a minimal overview of how the build process works and what the messages that appear on your computer monitor during the build mean. If you are interested in understanding Yocto more deeply, it is recommended that you access Yocto's documentation at and the manual at .
Poky

Poky is the name given to the build system in a Yocto project. Poky depends on a task executor and scheduler called the bitbake tool. Bitbake executes all the steps of the build process based on a group of configuration files and metadata; basically, it parses and runs several shell scripts and Python code, driving all the compilations. If you are a regular C/C++ developer, you usually depend on makefiles that are processed, with the compilers invoked, when you run the good old make command. Now imagine that you have a complex project with different software components and you need to run make for each of them. Bitbake, in a Yocto project context, might be considered the "global make command," but you will definitely not type any make commands yourself, because bitbake will invoke all of them for you.

Configuration files: Bitbake relies on configuration files (.conf) that hold the global definition of variables, the compilation flags, where libraries and applications must be placed, the machine architecture to be used, and so forth.

Bitbake classes: The bitbake classes, also known as bbclasses, are defined in files with the .bbclass extension. Mostly, the heavy lifting during the build is done with the definitions in these files, like how the RPM packages are generated, how the root file system is created, and so forth.

Recipes: The recipes are the files with .bb extensions, and they define the individual pieces of the software to be built: the packages that must be included, where and how to obtain source code and patches, the dependencies, which features and definitions you want to enable in a source, and others.

Perhaps these definitions sound a little complicated for a build system, but they are the magic key to making the system flexible, even if it sounds overengineered.
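To make the "global make" analogy concrete, these are typical bitbake invocations (illustrative only; they assume a configured Yocto build environment, and the recipe name busybox is just an example):

```shell
bitbake busybox            # build a single recipe and everything it depends on
bitbake -c fetch busybox   # run just one task (here, do_fetch) of a recipe
bitbake -s                 # list the recipes/targets known to the current configuration
```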
However, in order to better understand how Poky and the build system in general work, let's build an Intel Galileo image and discuss step by step what's going on during the procedure.

Figure 2-1. The Yocto build system flow

Figure 2-1 shows how the Yocto build process works. It warrants a few paragraphs to explain each step. Along with input files and data, the user (User Configuration), policy (Policy Configuration), and machine configurations (Machine BSP Configuration) are loaded, and the metadata files are parsed (Metadata .bb + patches). The build process starts by downloading the components from remote repositories, fetching local packages, HTTPS, web sites, FTP sites, and so on, as shown in Source Mirror(s). Once all the necessary code has been fetched to the working area (Source Fetching), the necessary patches are applied (Patch Application) and the configurations are applied (Configuration/Compile/Autoreconf) based on the information retrieved from the input files and data. Then thousands of source files start to compile, and the output files go to a staging area (Output Analysis) until the expected packages are created (.rpm, .deb, or .ipk). You will use the IPK files in this book. Some sanity tests are run during the generation of the output files (QA Tests) until all the necessary packages are created and fed (Package Feeds) into the generation of the final output images (Image and Application Development SDK). Note that you will need an Internet connection, because lots of code will be downloaded to complete the build process.

The Build System Tree at a Glance

In the next section you will learn how to download the metafiles and Poky to build the Intel Galileo images and your toolchain. Before building and executing a series of instructions, it would be interesting to have an overview of how the files are organized in the Poky tree and the Intel Galileo metafiles.
Figure 2-2. The Poky and the layers (left) and the bitbake tool (right)

As you can see, there is a folder called poky that contains the basic structure of a Yocto build system. For example, in the poky directory there is a bitbake directory that contains the bitbake binary tool and other utilities, as shown in Figure 2-2 (right), as well as some directories starting with the meta* prefix. Each meta* directory is, in fact, a layer containing metadata—in other words, recipes, classes, and configuration files. On top of the poky directory are other layers, like meta-clanton-bsp, meta-clanton-distro, meta-intel, and meta-oe, which, of course, have their respective recipes, classes, and configuration files, as well as any other metadata. What defines which layers will in fact be part of the compilation is a file called bblayers.conf in the yocto_build/conf directory, shown in Listing 2-1.

Listing 2-1. bblayers.conf

# LAYER_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
LCONF_VERSION = "6"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/poky/meta \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/poky/meta-yocto \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/poky/meta-yocto-bsp \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/meta-intel \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/meta-oe/meta-oe \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/meta-clanton-distro \
  /home/mcramon/BSP_1.0.4_T/meta-clanton_v1.0.1/meta-clanton-bsp \
  "

It is time to explore the tree a bit more and check out recipe, configuration, and class files.

An Example of a Recipe (.bb)

Figure 2-3. Examples of recipe (a), configuration (b), and class files (c)

Open the busybox recipe and you will see a code structure similar to the one shown in Listing 2-2.

Listing 2-2.
busybox_1.20.2.bb

require busybox.inc
PR = "r7"
EXTRA_OEMAKE += "V=1 ARCH=${TARGET_ARCH} CROSS_COMPILE=${TARGET_PREFIX} SKIP_STRIP=y"

Note that recipes contain a SRC_URI variable that defines the URLs used to download busybox, with the respective md5 and sha256 checksums to make sure that the downloaded package is the one expected. The EXTRA_OEMAKE variable only adds compilation flags during the build. A recipe's build is divided into tasks such as:

do_fetch
do_unpack
do_patch
do_configure
do_compile
do_install
do_package

Each function is related to the Yocto build flow explained earlier, and when you run the build, you will see these functions displayed on your computer, so you will have an idea of the build stage of each package. The whole list of functions that might be executed during the compilation is specified in Chapter 8 of the Yocto Project Reference Manual at .

An Example of a Configuration File (.conf)

At this point, you have a good idea of what a recipe is, so let's look at an example of a configuration file. Configuration files are usually under a folder called conf in a layer. A good example is the configuration file named clanton.conf, which belongs to the meta-clanton-bsp layer under the /conf/machine folder. The content of this file is shown in Listing 2-3 and Figure 2-3(b).

Listing 2-3.
clanton.conf

#@TYPE: Machine
#@NAME: clanton
#@DESCRIPTION: Machine configuration for clanton systems

PREFERRED_PROVIDER_virtual/kernel ?= "linux-yocto-clanton"
PREFERRED_VERSION_linux-yocto-clanton ?= "3.8%"

require conf/machine/include/ia32-base.inc
include conf/machine/include/tune-i586.inc

#Avoid pulling in GRUB
MACHINE_ESSENTIAL_EXTRA_RDEPENDS = ""

MACHINE_FEATURES = "efi usb pci"

SERIAL_CONSOLE = "115200 ttyS1"
#SERIAL_CONSOLES = "115200;ttyS0 115200;ttyS1"

EXTRA_IMAGEDEPENDS = "grub"
PREFERRED_VERSION_grub = "0.97+git%"

In this configuration file, you can see the definitions for clanton machines, such as the serial port speed and the TTY device to be used as the serial console, the kernel name and version, and the drivers supported, like EFI, USB, and PCI.

An Example of a Class File (.bbclass)

The third example of a metadata component is the class file. Class files are kept under a folder called classes, with a .bbclass extension. As an example, using the meta layer, search for the class file bin_package.bbclass, shown in Listing 2-4 and Figure 2-3(c).

Listing 2-4. bin_package.bbclass

#
# ex:ts=4:sw=4:sts=4:et
# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
#
# Common variable and task for the binary package recipe.
# Basic principle:
# * The files have been unpacked to ${S} by base.bbclass
# * Skip do_configure and do_compile
# * Use do_install to install the files to ${D}
#
# Note:
# The "subdir" parameter in the SRC_URI is useful when the input package
# is rpm, ipk, deb and so on, for example:
#
# SRC_URI = ";subdir=foo-1.0 "
#
# Then the files would be unpacked to ${WORKDIR}/foo-1.0, otherwise
# they would be in ${WORKDIR}.
#

# Skip the unwanted steps
do_configure[noexec] = "1"
do_compile[noexec] = "1"

# Install the files to ${D}
bin_package_do_install () {
    # Do it carefully
    [ -d "${S}" ] || exit 1
    cd ${S} || exit 1
    tar --no-same-owner --exclude='./patches' --exclude='./.pc' -cpf - .
    \ | tar --no-same-owner -xpf - -C ${D}
}

FILES_${PN} = "/"
EXPORT_FUNCTIONS do_install

This bbclass file specifies what must be done by all recipes that package prebuilt binaries: in this case it skips the configure (do_configure) and compile (do_compile) steps by setting their [noexec] flags to 1, but takes action during the package installation (do_install).

Creating Your Own Intel Galileo Images

After this small introduction to how the Yocto build system works, it is time to create your own releases using Poky. It is essential to prepare your computer to run the build system, because a series of requirements must be met to make the build system functional.

Preparing Your Computer

The following Linux distributions are supported:

Ubuntu 12.04 (LTS)
Ubuntu 13.10
Ubuntu 14.04 (LTS)
Fedora release 19 (Schrödinger's Cat)
Fedora release 20 (Heisenbug)
CentOS release 6.4
CentOS release 6

If you have a recent version of a Linux operating system, but it is not listed in the previous distribution list, it is recommended to check whether this list is outdated. To check the most recent distributions supported by Yocto, read the "Supported Linux Distribution" section in the Yocto Project Reference Manual at . Some people are able to compile successfully on Mac OSX, but there are so many steps necessary to make this possible that having a virtual machine is the quickest solution. This book shows a complete build process using Linux Ubuntu 12.04.04. If you have a computer or a virtual machine with Ubuntu installed, and you want to check the version, you can run the following command in a terminal shell:

mcramon@ubuntu:∼/$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.4 LTS
Release:        12.04
Codename:       precise

The next step requires the installation of some packages used by bitbake during the build process.
The easiest way to install all the dependencies is to run the following command:

mcramon@ubuntu:∼/$ sudo apt-get install subversion libcurl4-openssl-dev uuid-dev autoconf texinfo libssl-dev libtool iasl bitbake diffstat gawk chrpath openjdk-7-jdk connect-proxy autopoint p7zip-full build-essential gcc-multilib vim-common gawk wget git-core

There is an important note regarding IASL, which is a compiler used to support ACPI (Advanced Configuration and Power Interface). When Intel included support for running Windows on Intel Galileo boards, a new power management configuration was created and, consequently, the IASL compiler had to be updated to comply with the ACPI 5.0 specification. Thus, when you install IASL (one of the components in the previous command), you need to make sure it supports ACPI revision 5.0 or greater. If you are using Ubuntu 14, the repositories already point to a version of IASL that supports ACPI 5.0; however, if you have Ubuntu 12, you will probably have problems, because the repositories point to an IASL version that only supports ACPI revision 4.0, and you will have problems compiling the UEFI packages. So, if you have Ubuntu 12, the easiest way to install the correct IASL, without upgrading your OS or pointing to the version 14 repositories, is to install it from source with the following commands:

mcramon@ubuntu:∼/tools$ sudo apt-get remove iasl
mcramon@ubuntu:∼/tools$ sudo apt-get install libbison-dev flex
mcramon@ubuntu:∼/tools$ mkdir iasl
mcramon@ubuntu:∼/tools$ cd iasl/
mcramon@ubuntu:∼/tools/iasl$ git clone git://github.com/acpica/acpica.git
mcramon@ubuntu:∼/tools$ cd acpica
mcramon@ubuntu:∼/tools/acpica$ make

After the make command compiles and links everything, the output files will be in the .../generate/unix/bin folder.
mcramon@ubuntu:∼/tools/acpica$ cd ./generate/unix/bin
mcramon@ubuntu:∼/tools/iasl/acpica/generate/unix/bin$ ./iasl
Intel ACPI Component Architecture
ASL+ Optimizing Compiler version 20141107-64 [Dec 12 2014]
Supports ACPI Specification Revision 5.1

The previous command gives you an installation of IASL that supports ACPI 5.1, which of course is enough to meet the requirements of ACPI 5.0, because it points to the latest release of the IASL repositories at . Next, just create a link in /usr/bin/iasl pointing to your manually compiled IASL:

mcramon@ubuntu:∼/$ sudo ln -s <YOUR IASL PATH> /usr/bin/iasl

For example:

mcramon@ubuntu:∼/$ sudo ln -s /home/mcramon/tools/iasl/acpica/generate/unix/bin/iasl /usr/bin/iasl

After you install all the packages, your machine is able to run the Yocto builds, and you need to follow some steps to create your images, as described in the next section.

The SPI vs. SD Card Images

The Intel Galileo images are based on Linux 3.8, and there are two possible images (targets): the SPI image and the SD card image. The SPI image is an image that fits in the Intel Galileo SPI flash memory. It contains only very basic software, but it allows running sketches (Arduino programs) and contains some basic utilities, like busybox. The SD card image must be stored on a micro SD card with a maximum capacity of 32GB, which allows booting Intel Galileo from it. This image contains a more powerful variety of software, such as Python, node.js, and the drivers to support Intel WiFi and Bluetooth cards, among others. Both images are built with the same procedure, changing only the target name in the bitbake command. However, with the SD images you just need to copy some of the build output files to the micro SD card; the SPI images, on the other hand, require additional steps and can be created as capsule files or binary files, which will be discussed later. The next sections explain how to build the Intel Galileo and toolchain images.

Building Intel Galileo Images

1.
Create a directory where your build will be placed.

mcramon@ubuntu:∼/$ mkdir BSP_1.0.4_build

2. Download the BSP patches. Access the download center at and read the instructions on how to compile the BSP. With 1.0.4, there are instructions to access the GitHub link at and download the file . Next, decompress the downloaded file:

mcramon@ubuntu:∼/$ wget
mcramon@ubuntu:∼/$ tar -xf Galileo-Runtime-1.0.4.tar.gz
mcramon@ubuntu:∼/$ cd Galileo-Runtime-1.0.4

3. Decompress the patches.

mcramon@ubuntu:∼/$ tar -xvf patches_v1.0.4.tar.gz
patches_v1.0.4/
patches_v1.0.4/.DS_Store
patches_v1.0.4/meta-clanton.patches/
patches_v1.0.4/._patch.meta-clanton.sh
patches_v1.0.4/patch.meta-clanton.sh
patches_v1.0.4/._patch.Quark_EDKII.sh
patches_v1.0.4/patch.Quark_EDKII.sh
patches_v1.0.4/._patch.sysimage.sh
patches_v1.0.4/patch.sysimage.sh
patches_v1.0.4/Quark_EDKII.patches/
patches_v1.0.4/sysimage.patches/
patches_v1.0.4/sysimage.patches/.DS_Store
patches_v1.0.4/sysimage.patches/sysimage_v1.0.1+1.0.4.patch
patches_v1.0.4/Quark_EDKII.patches/.DS_Store
patches_v1.0.4/Quark_EDKII.patches/Quark_EDKII_v1.0.2+ACPI_for_Windows.patch
patches_v1.0.4/meta-clanton.patches/.DS_Store
patches_v1.0.4/meta-clanton.patches/meta-clanton.post-patch.init.patch
patches_v1.0.4/meta-clanton.patches/meta-clanton_v1.0.1+quark-init.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/
patches_v1.0.4/meta-clanton.patches/post-setup.patches/.DS_Store
patches_v1.0.4/meta-clanton.patches/post-setup.patches/1.usb_improv_patch-1.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/2.GAL-193-clloader-1.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/3.GAL-199-start_spi_upgrade-1.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/4.MAKER-222-Sketch_download_unstable-5.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/GAL-118-USBDeviceResetOnSUSRES-2.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/patch.sh
patches_v1.0.4/meta-clanton.patches/post-setup.patches/uart-1.0.patch
patches_v1.0.4/meta-clanton.patches/post-setup.patches/uart-reverse-8.patch

4. Extract the meta-clanton. At this point you have some tar.gz files extracted in your directory, such as the directory for SPI flash tools, the firmware based on Intel EDKII, and the sysimage templates; but what really matters at this point is the meta-clanton directory, which must be decompressed.

mcramon@ubuntu:∼/$ tar -zxvf meta-clanton_v1.0.1.tar.gz
Galileo-Runtime-1.0.4/
Galileo-Runtime-1.0.4/Quark_EDKII_v1.0.2.tar.gz
Galileo-Runtime-1.0.4/README.txt
Galileo-Runtime-1.0.4/grub-legacy_5775f32a+v1.0.1.tar.gz
Galileo-Runtime-1.0.4/meta-clanton_v1.0.1.tar.gz
Galileo-Runtime-1.0.4/patches_v1.0.4.tar.gz
Galileo-Runtime-1.0.4/quark_linux_v3.8.7+v1.0.1.tar.gz
Galileo-Runtime-1.0.4/spi-flash-tools_v1.0.1.tar.gz

Alternatively, you can decompress all the files, if you want, by running the following command:

mcramon@ubuntu:∼/$ for file in $(ls *.tar.gz); do tar -zxvf $file;done

Enter the decompressed meta-clanton directory, and then observe the files and directories that you have:

mcramon@ubuntu:∼/$ cd meta-clanton_v1.0.1/
mcramon@ubuntu:∼/$ ls
LICENSE  meta-clanton-bsp  meta-clanton-distro  README  setup  setup.sh

5. Apply the meta-clanton patches. Return to the previous directory and run the meta-clanton patches with the following command:

mcramon@ubuntu:∼/$ cd ..
mcramon@ubuntu:∼/$ ./patches_v1.0.4/patch.meta-clanton.sh

This patch fetches new metafiles and Poky, and then applies code patches. Internally, the patch.meta-clanton.sh script calls a second script named setup.sh that downloads some new metafiles, which are included in the meta-clanton directory. The new metafiles are meta-intel and meta-oe. Also, two new directories are prepared: poky and yocto_build. This might take some time, depending on the speed of your Internet connection.
You can check the new files as follows:

mcramon@ubuntu:∼/$ cd meta-clanton_v1.0.1/
mcramon@ubuntu:∼/$ ls
LICENSE  meta-clanton-bsp  meta-clanton-distro  meta-intel  meta-oe  poky  README  setup  setup.sh  yocto_build

At times during the Intel Galileo development, new bugs arise and new fixes are introduced. The Intel Galileo BSP is based on the Intel Clanton BSP baseline, but the two lines are developed in parallel, with the Intel Galileo fixes merged back only sporadically. When new fixes arise before an official Intel Clanton baseline release, patches are provided and the Intel Galileo BSP continues independently. For example, this chapter is based on release 1.0.4, but when you downloaded the BSP sources, you saw names such as meta-clanton_v1.0.1.tar.gz, which means baseline 1.0.1. In this case, Intel provides patches that must be applied on top of 1.0.1, and once they are applied, you have a legitimate 1.0.4 source. It is great when there is no patch to be applied, because then the baseline is in sync with the previous Intel Galileo fixes. So, the second action done by patch.meta-clanton.sh is to apply not only code fixes but also fixes to recipe files that must be corrected; for example, patching new OpenSSL code or applying security fixes.

6. Set the environment variables. After applying all the patches, it is necessary to set the environment variables and point Poky to the directory where the build should start. To do this, run the following commands:

mcramon@ubuntu:∼/$ cd ./meta-clanton_v1.0.1
mcramon@ubuntu:∼/$ source poky/oe-init-build-env yocto_build

7. Enable the cache and set the number of threads. This step is just a recommendation. It is not necessary to follow it, because you could start your build right away; however, these changes enable the cache and might make your build a little bit faster. Open the file .../meta-clanton/yocto_build/conf/local.conf with the text editor of your preference.
The change is to the variable BB_NUMBER_THREADS, which represents the maximum number of threads that your bitbake command will be able to handle. My suggestion is to multiply the number of threads of your computer's processor by 2; for example, if your computer supports 8 threads, you can change this number to 16. If you are using a free version of a virtual machine, check the number of processor cores that it allows you to set. For example, the free version of VMware only allows setting a maximum of four cores, and if each core of your processor holds one single thread, then BB_NUMBER_THREADS could be 8.

BB_NUMBER_THREADS = "12"

Still in local.conf, you can make the following changes:

SSTATE_DIR ?= "/tmp/yocto_cache-sstate"
SOURCE_MIRROR_URL ?= ""
INHERIT += "own-mirrors"
BB_GENERATE_MIRROR_TARBALLS = "1"

By enabling the cache, if your build is interrupted for some reason, such as a momentary loss of Internet connectivity, and you re-execute the bitbake command, the build will not start from scratch, because the cache is reused and the code that was previously downloaded does not need to be downloaded again.

8. Compile the images. It is time to execute the build process using the bitbake tool. At this point, you have two possible images related to Intel Galileo: the SPI image and the SD card image. To check the name of each target release, type the command bitbake -s, which lists all the targets supported by the current configuration:

mcramon@ubuntu:$ bitbake -s |grep galileo
galileo-target       :0.1-r0
image-full-galileo   :1.0-r0
image-spi-galileo    :1.0-r0

The target image-full-galileo creates the SD card image; image-spi-galileo creates the SPI image; and galileo-target must be ignored because it is not used anymore. Then, using bitbake again, you can run the build process for the target you want to work with.
For example, for SPI you just need to run this:

mcramon@ubuntu:$ bitbake image-spi-galileo

All the configurations are checked; the download of the sources, packages, and patches that compose the software is started; each component is properly set up, enabling and disabling features and software definitions; and finally, everything is compiled and the images are generated. During the compilation process, you will be able to see the do_ actions in place for the different recipes, the number of tasks completed and still to be completed, and warnings if the mirrors failed to download the expected code. You do not need to worry about warnings, because they are only an indication that the code failed to be fetched and a different mirror will be used. You only need to worry if there are errors reported; in that case, you need to identify the recipe file and check whether the URL mirrors changed, and fix the file accordingly, or whether you have a generic error like an Internet connection loss or insufficient space on the device.

bitbake output for full image compilation

9. In the end, if everything is downloaded, configured, compiled, and linked, and the patches are applied, you will have the images available in the ...meta-clanton/yocto_build/tmp/deploy/images directory. If you created SD card images, you just need to copy the files to your SD card; otherwise, some additional steps are necessary with SPI images. The next section explains how to build the toolchain, but if you are excited to test your release, read the "Booting Intel Galileo with Your Own Images" section in this chapter.

Building and Using the Cross-Compiler Toolchain

It is important to understand how to create the cross-compilers and IPK packages, because some chapters of this book will make use of them, especially Chapter 7, and, of course, you will need them if you want to create native applications. The next sections explain how to build the toolchain and how you might generate a toolchain for different operating systems.
Note that if your intention is only to create images for Intel Galileo boards, then this section is not mandatory.

Compiling the Toolchain for Different Architectures

If you are a Windows or Mac OSX user, you are probably running the Yocto build in a virtual machine. At this point, you might be asking whether you can create a toolchain for your native operating system instead of using the virtual machine for everything, including the toolchain. The answer is yes, and it is very simple to create a toolchain for other architectures, even if you have a Linux machine, because this is one of the purposes of the Yocto build system. To make such a change, it is necessary to open the file .../meta-clanton/yocto_build/conf/local.conf and add the variable SDKMACHINE followed by a string that describes the machine architecture designated for the SDK build.

SDKMACHINE = "i386-darwin"
BB_NUMBER_THREADS = "12"
PARALLEL_MAKE = "-j 14"
MACHINE = "clanton"
DISTRO ?= "clanton-tiny"
EXTRA_IMAGE_FEATURES = "debug-tweaks"
USER_CLASSES ?= "buildstats image-mklibs image-prelink"
PATCHRESOLVE = "noop"
CONF_VERSION = "1"

Machine Architecture Definition

If SDKMACHINE is not explicitly declared, the toolchain assumes the architecture of the computer that runs the Yocto build. You can use the text editor of your preference, or simply change the setting from the command line. For example, if you want to specify the target as 32-bit Linux, you can run the following:

mcramon@ubuntu:∼/$ cd meta-clanton_v1.0.1/yocto_build
mcramon@ubuntu:∼/$ echo 'SDKMACHINE = "i586"' >> conf/local.conf

The next sections discuss how to build and install the toolchains for different operating systems.

Building the Toolchains

Generating the toolchains requires the same steps mentioned in the "Building Intel Galileo Images" section; however, the bitbake command is different and additional layers must be downloaded.
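Because SDKMACHINE is appended to local.conf with >>, running the echo twice leaves two SDKMACHINE lines in the file. A small guard avoids that; the sketch below is an assumption about how you might script it, demonstrated in a scratch directory rather than a real build tree.

```shell
# Sketch: append SDKMACHINE only when local.conf does not already define it,
# so re-running a setup script does not pile up duplicate lines.
workdir=$(mktemp -d)                   # scratch directory for the demo
conf="$workdir/local.conf"
touch "$conf"

set_sdkmachine() {
    grep -q '^SDKMACHINE' "$conf" || echo "SDKMACHINE = \"$1\"" >> "$conf"
}

set_sdkmachine i586
set_sdkmachine i586                    # second call is a no-op
grep -c '^SDKMACHINE' "$conf"          # prints 1
```

In a real build you would point conf at .../yocto_build/conf/local.conf instead of the scratch file.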
Note that it is always recommended to check for possible changes in how the toolchains are generated, in case this book becomes outdated. In this case, consult the Quark BSP Build Guide, which you can access at . The instructions in this section generate the toolchain based on the uclibc library, because it is the default library set in the metafiles. If you are interested in creating the toolchains based on eglibc, you need to read Chapter 7, specifically the "Preparing the BSP Software Image and Toolchain" section. The generation of toolchains is different for Linux, Windows, and OSX, as you will see in the following instructions.

Linux

The following is the command sequence to generate the toolchain for 32-bit Linux:

mcramon@ubuntu:∼/$ cd ./meta-clanton_v1.0.1
mcramon@ubuntu:∼/$ source poky/oe-init-build-env yocto_build
mcramon@ubuntu:∼/$ echo 'SDKMACHINE = "i586"' >> conf/local.conf
mcramon@ubuntu:∼/$ bitbake meta-toolchain

If you want to generate the toolchain for 64-bit Linux, you need to change SDKMACHINE to x86_64. Alternatively, you can replace the command bitbake meta-toolchain with bitbake image-full-galileo -c populate_sdk and the result will be the same.

OSX

- 1. Go to the App Store and install Xcode 5.1.0 or later.
- 2. Install the command-line development tools using Preferences ➤ Downloads and choose command-line tools.
- 3. Using the terminal shell, create the file OSX-sdk.zip with the following commands:

$ mkdir ∼/Desktop/OSX-sdk
$ cd ∼/Desktop/OSX-sdk
$ ditto `xcrun --sdk macosx10.8 --show-sdk-path` .
$ cd ..
$ zip -yr OSX-sdk OSX-sdk

- 4. Copy the OSX-sdk.zip to a directory in your Linux virtual machine. The following are the commands to create the OSX toolchain:

mcramon@ubuntu:∼/$ cd ./meta-clanton_v1.0.1
mcramon@ubuntu:∼/$ git clone git://git.yoctoproject.org/meta-darwin
mcramon@ubuntu:∼/$ cd meta-darwin ; git checkout 03b7dd85732838d78e4879332b1cc005dae25754 ; cd ..
mcramon@ubuntu:∼/$ (cd poky && patch -p1) < meta-darwin/oecore.patch
mcramon@ubuntu:∼/$ mv <YOUR DIRECTORY HERE>/OSX-sdk.zip meta-darwin/recipes-devtools/osx-runtime/files
mcramon@ubuntu:∼/$ darwinpath="$(pwd)/meta-darwin"
mcramon@ubuntu:∼/$ echo 'SDKMACHINE = "i386-darwin"' >> yocto_build/conf/local.conf
mcramon@ubuntu:∼/$ echo "BBLAYERS += \"$darwinpath\"" >> yocto_build/conf/bblayers.conf
mcramon@ubuntu:∼/$ source poky/oe-init-build-env yocto_build
mcramon@ubuntu:∼/$ bitbake meta-toolchain

Windows

For Windows, 64 or 32 bits, the commands are the same; however, a sequence of two bitbake calls is required in addition to the extra metafiles.

mcramon@ubuntu:∼/$ git clone -b dylan git://git.yoctoproject.org/meta-mingw
mcramon@ubuntu:∼/$ (cd poky && patch -p1) < meta-mingw/oecore.patch
mcramon@ubuntu:∼/$ mingwpath="$(pwd)/meta-mingw"
mcramon@ubuntu:∼/$ echo 'SDKMACHINE = "i686-mingw32"' >> yocto_build/conf/local.conf
mcramon@ubuntu:∼/$ echo "BBLAYERS += \"$mingwpath\"" >> yocto_build/conf/bblayers.conf
mcramon@ubuntu:∼/$ cd $WORKSPACE/meta-clanton_v1.0.1/poky
mcramon@ubuntu:∼/$ wget -O sstate.bbclass.patch
mcramon@ubuntu:∼/$ git am sstate.bbclass.patch
mcramon@ubuntu:∼/$ cd $WORKSPACE/meta-clanton_v1.0.1
mcramon@ubuntu:∼/$ source poky/oe-init-build-env yocto_build
mcramon@ubuntu:∼/$ bitbake gcc-crosssdk-initial -c cleansstate
mcramon@ubuntu:∼/$ bitbake meta-toolchain

The Output Files

The output files will be in the .../meta-clanton_v1.0.1/yocto_build/tmp/deploy/sdk directory, while the ipk packages will be in the .../meta-clanton_v1.0.1/yocto_build/tmp/deploy/ipk directory. The output filename depends on whether your computer is 32 or 64 bits, on the architecture that the toolchain is designated for (discussed later), and on whether the image is based on the uclibc or eglibc library. In the end, you will have just a single script file; however, it is a big file, around 260MB.
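The naming convention can be read mechanically. The sketch below pulls the host and target architectures out of the example filename that follows; the field positions are an assumption based on that one name (distro-libc-host-target-toolchain-version.sh).

```shell
# Sketch: split the installer name <distro>-<libc>-<host>-<target>-toolchain-<ver>.sh.
# Note the distro "clanton-tiny" itself spans the first two dash-separated fields.
name="clanton-tiny-uclibc-x86_64-i586-toolchain-1.4.2.sh"
host=$(echo "$name" | cut -d- -f4)     # architecture the installer runs on
target=$(echo "$name" | cut -d- -f5)   # architecture the compilers emit code for
echo "host=$host target=$target"       # prints: host=x86_64 target=i586
```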
For example, if you compile on a 64-bit Linux machine with an Intel processor, the output filename is clanton-tiny-uclibc-x86_64-i586-toolchain-1.4.2.sh. The next sections discuss how to install and test the toolchains.

Installing the Cross-Compilers

Installing the toolchain only requires you to execute the script created and choose a destination folder, as shown:

mcramon@ubuntu:∼/toolchain$ ./clanton-tiny-uclibc-x86_64-i586-toolchain-1.4.2.sh
Enter target directory for SDK (default: /opt/clanton-tiny/1.4.2):
You are about to install the SDK to "/opt/clanton-tiny/1.4.2". Proceed[Y/n]?Y
[sudo] password for mcramon:
Extracting SDK...done
Setting it up...done
SDK has been successfully set up and is ready to be used.

The shell script extracts the toolchain into the chosen directory, and all the programs that are part of the toolchain are placed there. The "Creating a Hello World!" section in this chapter shows a practical usage of the toolchain.

Creating a Hello World!

This section requires you to have built and properly installed the toolchain on your computer. If you enter the toolchain directory chosen during the installation, you will notice many binary files, including the compilers, and several directories; but initially what matters is the file that starts with environment-setup-*. For example, in my setup the file is named environment-setup-i586-poky-linux-uclibc. This file contains a number of variables, such as CC, CXX, CPP, AR, and NM, that must be exported to your computer shell. They are used to compile, link, and archive your native programs with the toolchain, so first you need to make these variables part of the development environment by sourcing the file, as follows:

mcramon@ubuntu:/opt/clanton-tiny/1.4.2$ source environment-setup-i586-poky-linux-uclibc

For example, you will then be able to compile a program with $(CC) -c $(CFLAGS) $(CPPFLAGS), since CC points to the C compiler, CFLAGS holds the C compiler flags, and CPPFLAGS the C preprocessor flags.
If you check some of these variables after sourcing the file, you will see something like this:

mcramon@ubuntu:/opt/clanton-tiny/1.4.2$ echo $CC
i586-poky-linux-uclibc-gcc -m32 -march=i586 --sysroot=/opt/clanton-tiny/1.4.2/sysroots/i586-poky-linux-uclibc
mcramon@ubuntu:/opt/clanton-tiny/1.4.2$ echo $CFLAGS
-O2 -pipe -g -feliminate-unused-debug-types
mcramon@ubuntu:/opt/clanton-tiny/1.4.2$ echo $CXXFLAGS
-O2 -pipe -g -feliminate-unused-debug-types -fpermissive

Listing 2-5 shows a simple Hello World program written in C, which is present in the code folder of this chapter.

Listing 2-5. HelloWorld.c

#include <stdio.h>

int main (int argc, char const* argv[])
{
    printf("Hello, World! This is Intel Galileo!\n");
    return 0;
}

Copy this program to your computer and compile it using the variables you exported:

mcramon@ubuntu:/$ ${CC} ${CFLAGS} HelloWorld.c -o HelloWorld

You should now have the executable HelloWorld created with the cross-compiler. Just copy this file to a micro SD card formatted as FAT or FAT32. If you do not know how to format the micro SD card, read the "Booting from SD Card Images" section of this chapter for instructions. Insert the micro SD card into your Intel Galileo and boot the board by connecting the power supply. Also connect the serial cable, as explained in Chapter 1, and open a Linux terminal shell. Then locate the micro SD card mounted at the /media/mmcblk0p1 partition and execute HelloWorld, as shown:

root@clanton:/# cd /media/mmcblk0p1/
root@clanton:/media/mmcblk0p1# ls
HelloWorld
root@clanton:/media/mmcblk0p1# ./HelloWorld
Hello, World! This is Intel Galileo!

If you see the output message, it means that your toolchain is functional and generating binaries correctly. There are multiple ways to transfer your executable to the board: WiFi, Ethernet, a pen drive, or a micro SD card. For more information, read the "Transferring Files Between Intel Galileo and Computers" section in Chapter 5.
You can also create a simple makefile for this HelloWorld, simply using the variables exported by environment-setup-i586-poky-linux-uclibc as a base. For example, Listing 2-6 shows a makefile for the HelloWorld program.

Listing 2-6. Makefile

SHELL = /bin/bash
TARGET_NAME = i586-poky-linux-uclibc
DIST = clanton-tiny
CC = $(TARGET_NAME)-gcc -m32 -march=i586 --sysroot=/opt/$(DIST)/1.4.2/sysroots/$(TARGET_NAME)
CFLAGS = -O2 -pipe -g -feliminate-unused-debug-types
OUTPUT_FILE = HelloWorld

all : target

target : $(patsubst %.c,%.o,$(wildcard *.c))
	$(CC) $(CFLAGS) $^ -o $(OUTPUT_FILE)

clean :
	rm -f *.o $(OUTPUT_FILE)

The makefile is designated for the target i586-poky-linux-uclibc, as stored in the variable TARGET_NAME, and assumes the toolchain is installed in the /opt/clanton-tiny directory, according to the CC variable. So, if you created the toolchain for a different target, or used a different installation directory, it is necessary to adapt this makefile. The makefile brings three targets: clean removes all the object files and the output file, named HelloWorld because that is the value of the OUTPUT_FILE variable; all and target do the same thing, in other words, they compile the C programs, invoking the compiler pointed to by CC with the flags in CFLAGS. To build HelloWorld, all you need to do is type make:

mcramon@ubuntu:∼/native$ make
i586-poky-linux-uclibc-gcc -m32 -march=i586 --sysroot=/opt/clanton-tiny/1.4.2/sysroots/i586-poky-linux-uclibc -O2 -pipe -g -feliminate-unused-debug-types -c -o HelloWorld.o HelloWorld.c
i586-poky-linux-uclibc-gcc -m32 -march=i586 --sysroot=/opt/clanton-tiny/1.4.2/sysroots/i586-poky-linux-uclibc -O2 -pipe -g -feliminate-unused-debug-types HelloWorld.o -o HelloWorld

The next section talks about debugging native applications.

Debugging Native Applications

It is possible to debug native applications and kernel modules using GDB, Eclipse, and JTAG tools.
This book focuses on Arduino projects, so all debugging discussion is concentrated on the Intel Arduino IDE, not on native applications or kernel contexts. In the scope of this book, it is important to know how build systems work, how to build and compile native applications, and how to generate the cross-compilers, because these features will be used in the following chapters, especially when you work with OpenCV and V4L2 in Chapter 7. However, if you are interested in learning how to debug native applications, Intel provides a very good tutorial about how to use Eclipse with Intel Galileo and Intel Edison in the developer zone. This tutorial can be accessed at . For kernel debugging, GDB, and JTAG enablement using OpenOCD, it is recommended that you read the Source Level Debugging using OpenOCD/GDB/Eclipse on Intel Quark SoC X1000 manual, present in the manuals folder of this chapter, or you can access it at .

The next section explains how to make Intel Galileo boot with the images that you created with poky.

Booting Intel Galileo with Your Own Images

As explained earlier, you have two image targets related to Intel Galileo: the SPI image and the SD card image. The procedures to make Intel Galileo boot using these images differ and must be followed as directed in the following sections.

Booting from SD Card Images

The SD card release only requires that you copy some of the output files to a micro SD card, insert it in Intel Galileo, and then power-on the board.

Preparing the Micro SD Card

Before copying the files, there are important details to know regarding the format of the micro SD card, which must be FAT or FAT32 with a single partition. To connect the card to your computer, you can use either of the following:

- An SD card adaptor to be used with a computer
- A micro SD card USB adaptor

With a physical connection between the micro SD card and your computer established, you just need to format the micro SD card according to your OS.
Windows

Figure: Formatting the micro SD card on Windows

Mac OSX

Figure: The Disk Utility on Mac OSX formatting the micro SD card

Ubuntu

- 1. Open a terminal by pressing Ctrl+Alt+T.
- 2. Type the command df to check the partitions on your computer, including the mounted micro SD card. Then identify the device name that corresponds to the micro SD card; for example, /dev/sdb1.
- 3. Unmount the SD card using the umount command followed by the device name. For example:

umount /dev/sdb1

- 4. Use the mkdosfs utility to format the card. For example:

mkdosfs -F 32 -v /dev/sdb1

With the micro SD card ready, it is time to copy your image onto it.

Copying Files to a Micro SD Card

If you successfully created your SD card image, enter the .../yocto_build/tmp/deploy/images directory and type the ls -l command:

mcramon@ubuntu $ cd ./tmp/deploy/images/
mcramon@ubuntu $ ls -l
total 150576
drwxr-xr-x 3 mcramon mcramon 4096 Nov 18 23:01 boot
-rw-r--r-- 2 mcramon mcramon 373760 Nov 19 00:04 bootia32.efi
lrwxrwxrwx 2 mcramon mcramon 42 Nov 18 23:58 bzImage -> bzImage--3.8-r0-clanton-20141119062948.bin
-rw-r--r-- 2 mcramon mcramon 1984512 Nov 18 23:58 bzImage--3.8-r0-clanton-20141119062948.bin
lrwxrwxrwx 2 mcramon mcramon 42 Nov 18 23:58 bzImage-clanton.bin -> bzImage--3.8-r0-clanton-20141119062948.bin
-rw-r--r-- 1 mcramon mcramon 1689687 Nov 19 00:08 core-image-minimal-initramfs-clanton-20141119062948.rootfs.cpio.gz
lrwxrwxrwx 1 mcramon mcramon 66 Nov 19 00:08 core-image-minimal-initramfs-clanton.cpio.gz -> core-image-minimal-initramfs-clanton-20141119062948.rootfs.cpio.gz
-rw-r--r-- 2 mcramon mcramon 279670 Nov 18 23:59 grub.efi
-rw-r--r-- 1 mcramon mcramon 314572800 Nov 19 00:26 image-full-galileo-clanton-20141119062948.rootfs.ext3
lrwxrwxrwx 1 mcramon mcramon 53 Nov 19 00:26 image-full-galileo-clanton.ext3 -> image-full-galileo-clanton-20141119062948.rootfs.ext3
-rw-rw-r-- 2 mcramon mcramon 1556960 Nov 18 23:58 modules--3.8-r0-clanton-20141119062948.tgz
-rw-rw-r-- 2 mcramon mcramon 294 Nov 19
00:25 README_-_DO_NOT_DELETE_FILES_IN_THIS_DIRECTORY.txt

There is a folder called boot, along with files and links. The only function of the links is to provide "easy reading" names for the files, which receive a timestamp in their names. For example, in bzImage--3.8-r0-clanton-20141119062948.bin, 20141119062948 is only the timestamp; thus, if you run bitbake again without any modification, you will have another bzImage file with a different timestamp and a link pointing to the newest one. The files you need to copy are:

- 1. boot (the whole directory, including subdirectories)
- 2. bzImage
- 3. core-image-minimal-initramfs-clanton.cpio.gz
- 4. grub.efi
- 5. image-full-galileo-clanton.ext3

Copy these files and directories to your micro SD card, insert it into the micro SD card slot (see Chapter 1), and power-on your Intel Galileo. This is everything you need if you are using SD card images. The next section explains the procedure when SPI images are used.

Booting from SPI Card Images

There are two kinds of SPI flash files:

Capsule file: This file contains the system images, the kernel, the file system partition, and the boot loader packages (grub), but it does not contain the platform data. The platform data informs the MAC address of your Ethernet controller and the board model, such as Intel Galileo or Intel Galileo Gen 2. This kind of file is very useful in most cases; if you have a board without boot issues and the Ethernet controller is working with a correct MAC address, the file is very handy. Usually, capsule files (or cap files) have the .cap extension, and you can flash Intel Galileo boards with them using the Intel Arduino IDE or the UEFI shell, which will be discussed later.

Binary file: This binary file contains everything; in other words, everything in a capsule file plus the platform data. Usually, these files have the .bin extension and must be flashed with an SPI programmer.

Figure: SPI files generation flowchart

Initially it is necessary to build the Intel Galileo SPI images, which will generate the SPI files as output.
In parallel, it is possible to compile the firmware and generate the firmware-related files as output. Then a template is mounted using a file named layout.conf that contains all the ingredients necessary to build a single file that will be used to flash the SPI flash memory. With layout.conf ready, the SPI flash tool is called to generate capsule and binary files without platform data. If the intention is to have files without platform data, at this point the capsule files may be used; otherwise, the platform data must be patched in using a Python script, which will be discussed later, and a final binary with all the information is created.

Creating the Capsule and Flash Files

When you downloaded the BSP (board support package) in step 2 of the "Creating Your Own Intel Galileo Images" section of this chapter, you should have noticed that files in addition to the meta-clanton data were downloaded, among them files called spi-flash-tools_v1.0.1.tar.gz and Quark_EDKII_v1.0.1.tar.gz.

Compiling the UEFI Firmware

Before your board boots and the Linux kernel is decompressed, there is firmware responsible for initializing the board components, including the Intel Quark SoC. It also assumes other activities after the boot. Intel Galileo provides firmware compliant with the UEFI (Unified Extensible Firmware Interface) standard, which consists of boot procedures, runtime service calls, and data tables used for power management methods like ACPI (Advanced Configuration and Power Interface). EDKII is the cross-platform environment for firmware development. Of course, understanding the UEFI specification and the EDKII development process would require a full book dedicated to the subject. In the context of this book, the scope is limited to how to build the EDKII, which is one of the core elements required to have a functional SPI image. The next sections discuss how to prepare your environment and how to compile the firmware.
Preparing the Environment

The firmware build has the following dependencies:

- Python 2.6 or newer
- Any GCC and G++ with versions between 4.3 and 4.7
- Subversion
- uuid-dev
- IASL

If you ran the command line proposed in the "Preparing Your Computer" section, you should be fine with these dependencies, except Python. Check to see if you have Python installed on your Linux by running the following command:

mcramon@ubuntu:∼$ dpkg --get-selections | grep -v deinstall|grep -i python

Or you can run this:

mcramon@ubuntu:∼$ python --version
Python 2.7.3

If you do not have Python installed, you can install it by following the instructions at . If you want a quick try using version 2.7.6, you can run the following commands:

mcramon@ubuntu:∼/$ wget
mcramon@ubuntu:∼/$ tar -zxvf Python-2.7.6.tgz
mcramon@ubuntu:∼/$ cd Python-2.7.6/
mcramon@ubuntu:∼/$ ./configure
mcramon@ubuntu:∼/$ make
mcramon@ubuntu:∼/$ make install

After this, you are ready to compile the firmware by following the steps presented in the next section.

Compiling the Firmware

- 1. Extract the package. The first thing to do is go back to the base directory and decompress the file:

mcramon@ubuntu $ tar -xvf Quark_EDKII_v1.0.2.tar.gz

Unfortunately, it is necessary to apply the patches manually, but fortunately this can be done with a single command line:

mcramon@ubuntu $ ./patches_v1.0.4/patch.Quark_EDKII.sh

- 2. Prepare the SVN project. The EDKII is maintained using the SVN configuration control tool.

mcramon@ubuntu:∼/$ cd Quark_EDKII_v1.0.2/
mcramon@ubuntu:∼/$ ./svn_setup.py
mcramon@ubuntu:∼/$ svn update
mcramon@ubuntu:∼/$ export WORKSPACE=$(pwd)

- 3. Identify the GCC version you have installed. A compilation flag used during the firmware build depends on the GCC compiler installed on your computer. To check which version you have, type the following command:

mcramon@ubuntu:∼/$ gcc --version
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

In the example, GCC reports version 4.6.3, which means that the flag to be used in the EDKII compilation line is the string GCC46. The GCC and G++ compilers tested at the time this book was written were between versions 4.3 and 4.7, which means the supported flags are GCC43, GCC44, GCC45, GCC46, and GCC47. The easiest way to keep the right flag during your compilation is to export a bash variable holding the minor version digit:

mcramon@ubuntu:∼/$ export GCCVERSION=$(gcc -dumpversion | cut -c 3)
mcramon@ubuntu:∼/$ echo $GCCVERSION
6

- 4. Compile the firmware. In the folder where you extracted the EDKII, you will notice a file called quarkbuild.sh. This shell script compiles the firmware for you with the following options:

quarkbuild.sh [-r32 | -d32 | -clean] [GCC43 | GCC44 | GCC45 | GCC46 | GCC47] [PlatformName] [-DSECURE_LD (optional)] [-DTPM_SUPPORT (optional)]

-clean : Delete the build files/folders.
-d32 : Create a DEBUG build.
-r32 : Create a RELEASE build.
GCC4x : GCC flag used for this build.
[PlatformName] : Name of the platform package you want to build.
[-DSECURE_LD] : Create a Secure Lockdown build (optional).
[-DTPM_SUPPORT] : Create an EDKII build with TPM support (optional). Note: this option has a one-time prerequisite described in CryptoPkg\Library\OpensslLib\Patch-HOWTO.txt in the EDKII directory that you downloaded and extracted.

So you can type:

mcramon@ubuntu:∼/$ ./quarkbuild.sh -r32 GCC46 QuarkPlatform

Or, if you exported the GCCVERSION variable, you can run:

mcramon@ubuntu:∼/$ ./quarkbuild.sh -r32 GCC4$GCCVERSION QuarkPlatform

Several files will be compiled, taking a few minutes to finish. The files that really matter will be in the output directory at Quark_EDKII_v1.0.2/Build/QuarkPlatform/RELEASE_GCC46/FV/FlashModules.
mcramon@ubuntu:∼/BSP_1.0.4_T/Quark_EDKII_v1.0.2/Build/QuarkPlatform/RELEASE_GCC46/FV/FlashModules$ ls -1
EDKII_BOOTROM_OVERRIDE.Fv
EDKII_BOOT_STAGE1_IMAGE1.Fv
EDKII_BOOT_STAGE1_IMAGE2.Fv
EDKII_BOOT_STAGE2_COMPACT.Fv
EDKII_BOOT_STAGE2.Fv
EDKII_BOOT_STAGE2_RECOVERY.Fv
EDKII_NVRAM.bin
EDKII_RECOVERY_IMAGE1.Fv
Flash-EDKII-missingPDAT.bin
RMU2.bin
RMU.bin

If you want to see some extra debug messages, especially during the boot, you can generate the debug release using the -d32 flag, as follows:

mcramon@ubuntu:∼/$ ./quarkbuild.sh -d32 GCC4$GCCVERSION QuarkPlatform

- 5. Create symbolic links. If you successfully compiled the firmware, you should have the following output:

mcramon@ubuntu:∼/Quark_EDKII_v1.0.2$ cd Build/QuarkPlatform/
mcramon@ubuntu:∼/BSP_1.0.4_T/Quark_EDKII_v1.0.2/Build/QuarkPlatform$ ls
DEBUG_GCC46 RELEASE_GCC46

The directories DEBUG_GCC46 and RELEASE_GCC46 are the result of debug and release compilations using GCC compiler version 4.6. It is necessary to simplify such directory names using soft links, naming them DEBUG_GCC and RELEASE_GCC, because these are the names that the system image tools will search for.

mcramon@ubuntu:∼/$ ln -s DEBUG_GCC46 DEBUG_GCC
mcramon@ubuntu:∼/$ ln -s RELEASE_GCC46 RELEASE_GCC
mcramon@ubuntu:∼/$ ls -l
total 8
lrwxrwxrwx 1 mcramon mcramon 11 Nov 21 20:21 DEBUG_GCC -> ../DEBUG_GCC46
lrwxrwxrwx 1 mcramon mcramon 13 Nov 21 20:21 RELEASE_GCC -> ../RELEASE_GCC46

If you reached this step, congratulations, you are ready for the next step: creating the capsule files.

Troubleshooting Compiling the Firmware

Python does not fetch the code. In this case, the first thing to do is check whether your Internet connection is working. You can test it by opening a web browser or via the command line using a wget command like this:

mcramon@ubuntu:∼/tmp$ wget --spider
Spider mode enabled. Check if remote file exists.
--2014-11-21 19:53:29--
Resolving example.com (example.com)...
93.184.216.119, 2606:2800:220:6d:26bf:1447:1097:aa7
Connecting to example.com (example.com)|93.184.216.119|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1270 (1.2K) [text/html]
Remote file exists and could contain further links, but recursion is disabled -- not retrieving.

If you are behind a proxy, then you also need to configure the Subversion proxy settings by editing the file located at ∼/.subversion/servers. Search for the section [global] and set your proxy configuration as shown in the following lines:

[global]
http-proxy-host = <YOUR HOST IP>
http-proxy-port = <YOUR PORT NUMBER>
http-proxy-username = <YOUR USER NAME>
http-proxy-password = <YOUR PASSWORD>

A GCC compiler is not supported. If you have a GCC compiler that is not supported, you can download and install one of the supported versions and change the link called gcc in the /usr/bin directory to point to it. For example:

mcramon@ubuntu:∼/ cd /usr/bin
mcramon@ubuntu:∼/ sudo ln -s /usr/bin/gcc-4.6 gcc

Creating the Capsule Files

The spi-flash-tools_v1.0.1.tar.gz file downloaded earlier contains a tool that is used to create the cap and binary files based on your SPI images. The procedure for the creation is quite simple, as explained next.

Preparing layout.conf

At this point, you need to make use of the other zipped files that you downloaded but have not used until now.
So, move to the base directory and type the following command lines to decompress all of them, if you have not done so yet:

mcramon@ubuntu:∼/BSP_1.0.4_T$ tar -zxvf spi-flash-tools_v1.0.1.tar.gz
mcramon@ubuntu:∼/BSP_1.0.4_T$ tar -zxvf sysimage_v1.0.1.tar.gz
mcramon@ubuntu:∼/BSP_1.0.4_T$ tar -zxvf grub-legacy_5775f32a+v1.0.1.tar.gz
mcramon@ubuntu:∼/BSP_1.0.4_T$ tar -zxvf quark_linux_v3.8.7+v1.0.1.tar.gz

Then run a script that will create symbolic links, making the folder names much simpler:

mcramon@ubuntu:∼/BSP_1.0.4_T$ ./sysimage_v1.0.1/create_symlinks.sh
See if we can: ln -s ./spi-flash-tools_* spi-flash-tools
Found spi-flash-tools_v1.0.1
+ ln -s spi-flash-tools_v1.0.1 spi-flash-tools
See if we can: ln -s ./Quark_EDKII_* Quark_EDKII
Found Quark_EDKII_v1.0.2
+ ln -s Quark_EDKII_v1.0.2 Quark_EDKII
See if we can: ln -s ./sysimage_* sysimage
Found sysimage_v1.0.1
+ ln -s sysimage_v1.0.1 sysimage
See if we can: ln -s ./meta-clanton_* meta-clanton
Found meta-clanton_v1.0.1
+ ln -s meta-clanton_v1.0.1 meta-clanton
See if we can: ln -s ./quark_linux_* quark_linux
Found quark_linux_v3.8.7+v1.0.1
+ ln -s quark_linux_v3.8.7+v1.0.1 quark_linux
See if we can: ln -s ./grub-legacy_* grub-legacy
Found grub-legacy_5775f32a+v1.0.1
+ ln -s grub-legacy_5775f32a+v1.0.1 grub-legacy

If this script does not work, it is because you are executing it from the wrong directory. Make sure that you are in the base folder where you downloaded all the tar.gz files. As you can see, the script tries to find each component of the BSP source package and creates symbolic links to them using common names. For example, grub-legacy_5775f32a+v1.0.1 becomes grub-legacy, and the same is done for the other directories, as you can see if you type ls -l after the script execution.
mcramon@ubuntu:∼/BSP_1.0.4_T$ ls -l
total 5292
-rw-r--r-- 1 mcramon mcramon 2657072 Nov 17 23:11 board_support_package_sources_for_intel_quark_v1.0.1.7z
-rw-r--r-- 1 mcramon mcramon 30720 Nov 17 22:54 BSP-Patches-and-Build_Instructions.1.0.4.tar
lrwxrwxrwx 1 mcramon mcramon 27 Nov 21 20:32 grub-legacy -> grub-legacy_5775f32a+v1.0.1
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 grub-legacy_5775f32a+v1.0.1
-rw-rw-r-- 1 mcramon mcramon 192465 May 22 2014 grub-legacy_5775f32a+v1.0.1.tar.gz
lrwxrwxrwx 1 mcramon mcramon 19 Nov 21 20:32 meta-clanton -> meta-clanton_v1.0.1
drwxr-xr-x 9 mcramon mcramon 4096 Nov 18 21:58 meta-clanton_v1.0.1
-rw-rw-r-- 1 mcramon mcramon 517412 May 22 2014 meta-clanton_v1.0.1.tar.gz
drwxr-xr-x 2 mcramon mcramon 4096 Oct 20 13:31 patches
lrwxrwxrwx 1 mcramon mcramon 18 Nov 21 20:32 Quark_EDKII -> Quark_EDKII_v1.0.2
drwxr-x--- 21 mcramon mcramon 4096 Nov 21 18:48 Quark_EDKII_v1.0.2
drwxrwxr-x 6 mcramon mcramon 4096 Nov 21 18:40 Quark_EDKII_v1.0.2-svn_externals.repo
-rwxr-xr-x 1 mcramon mcramon 1502762 Nov 21 15:20 quark_edkii_v1.0.2.tar.gz
lrwxrwxrwx 1 mcramon mcramon 25 Nov 21 20:32 quark_linux -> quark_linux_v3.8.7+v1.0.1
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 quark_linux_v3.8.7+v1.0.1
-rw-rw-r-- 1 mcramon mcramon 236544 May 22 2014 quark_linux_v3.8.7+v1.0.1.tar.gz
-rw-rw-r-- 1 mcramon mcramon 480 May 22 2014 sha1sum.txt
lrwxrwxrwx 1 mcramon mcramon 22 Nov 21 20:32 spi-flash-tools -> spi-flash-tools_v1.0.1
drwxr-xr-x 6 mcramon mcramon 4096 May 22 2014 spi-flash-tools_v1.0.1
-rw-rw-r-- 1 mcramon mcramon 219559 May 22 2014 spi-flash-tools_v1.0.1.tar.gz
lrwxrwxrwx 1 mcramon mcramon 15 Nov 21 20:32 sysimage -> sysimage_v1.0.1
drwxr-xr-x 9 mcramon mcramon 4096 May 22 2014 sysimage_v1.0.1
-rw-rw-r-- 1 mcramon mcramon 9876 May 22 2014 sysimage_v1.0.1.tar.gz
-rw-r--r-- 1 mcramon mcramon 2938 Nov 18 22:14 uart-reverse-8.patch

The reason for this "simplification" is that the sysimage directory contains a configuration file that lists the
"ingredients"; in other words, the files that will be used to compose the flash image, and the version of the image. For example, check the directories that you have in the sysimage folder:

mcramon@ubuntu:∼/BSP_1.0.4_T$ cd sysimage
mcramon@ubuntu:∼/BSP_1.0.4_T/sysimage$ ls -l
total 36
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 config
-rwxr-xr-x 1 mcramon mcramon 2496 May 22 2014 create_symlinks.sh
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 grub
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 inf
-rw-r--r-- 1 mcramon mcramon 1488 May 22 2014 LICENSE
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 sysimage.CP-8M-debug
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 sysimage.CP-8M-debug-secure
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 sysimage.CP-8M-release
drwxr-xr-x 2 mcramon mcramon 4096 May 22 2014 sysimage.CP-8M-release-secure

Note that there are four directories used to generate an 8MB flash image, for debug and release compilations and for non-secure and secure boots. In each of these directories, there is a file called layout.conf. This file must be changed to point to the correct "ingredients" of your build and the right version number. To make your life easier, there is a script that makes the changes in all the directories automatically for you, even if you do not need to change all of them. To run the script, execute the following command in the base directory:

mcramon@ubuntu:∼/BSP_1.0.4_T$ ./patches_v1.0.4/patch.sysimage.sh

You might ask which changes this script really makes. Let's take one of the directories, sysimage.CP-8M-debug for example, and open the layout.conf file with the text editor of your preference before running the patch.sysimage.sh script. layout.conf is shown in Listing 2-7.

Listing 2-7. layout.conf

# WARNING: this file is indirectly included in a Makefile where it
# defines Make targets and pre-requisites. As a consequence you MUST
# run "make clean" BEFORE making changes to it.
# Failure to do so may
# result in the make process being unable to clean files it no longer
# has references to.
[main]
size=8388608
type=global

[MFH]
version=0x1
flags=0x0
address=0xfff08000
type=mfh

[Flash Image Version]
type=mfh.version
meta=version
value=0x01000105

[ROM_OVERLAY]
address=0xfffe0000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_BOOTROM_OVERRIDE.Fv
type=some_type

[signed-key-module]
address=0xfffd8000
item_file=config/SvpSignedKeyModule.bin
svn_index=0
type=some_type
in_capsule=no

# On a deployed system, the SVN area holds the last known secure
# version of each signed asset.
# TODO: generate this area by collecting the SVN from the assets
# themselves.
[svn-area]
address=0xfffd0000
item_file=config/SVNArea.bin
type=some_type
# A capsule upgrade must implement some smart logic to make sure the
# highest Security Version Number always wins (rollback protection)
in_capsule=no

[fixed_recovery_image]
address=0xfff90000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_RECOVERY_IMAGE1.Fv
sign=yes
type=mfh.host_fw_stage1_signed
svn_index=2
# in_capsule=no

[NV_Storage]
address=0xfff30000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_NVRAM.bin
type=some_type

[RMU]
address=0xfff00000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/RMU.bin
type=none_registered

[boot_stage1_image1]
address=0xffec0000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_BOOT_STAGE1_IMAGE1.Fv
sign=yes
boot_index=0
type=mfh.host_fw_stage1_signed
svn_index=1

[boot_stage1_image2]
address=0xffe80000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_BOOT_STAGE1_IMAGE2.Fv
sign=yes
boot_index=1
type=mfh.host_fw_stage1_signed
svn_index=1

[boot_stage_2_compact]
address=0xffd00000
item_file=../../Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/FlashModules/EDKII_BOOT_STAGE2_COMPACT.Fv
sign=yes
type=mfh.host_fw_stage2_signed
svn_index=3

[Ramdisk]
address=0xffa60000
item_file=../../meta-clanton/yocto_build/tmp/deploy/images/image-spi-clanton.cpio.lzma
sign=yes
type=mfh.ramdisk_signed
svn_index=7

[LAYOUT.CONF_DUMP]
address=0xffcff000
type=mfh.build_information
meta=layout

[Kernel]
address=0xff852000
item_file=../../meta-clanton/yocto_build/tmp/deploy/images/bzImage
sign=yes
type=mfh.kernel_signed
svn_index=6

[grub.conf]
address=0xff851000
item_file=grub/grub-spi.conf
sign=yes
type=mfh.bootloader_conf_signed
svn_index=5

[grub]
address=0xff800000
item_file=../../meta-clanton/yocto_build/tmp/deploy/images/grub.efi
sign=yes
fvwrap=yes
guid=B43BD3E1-64D1-4744-9394-D0E1C4DE8C87
type=mfh.bootloader_signed
svn_index=4

[Flash Image Version]: It is recommended that you make a simple change in [Flash Image Version], because it carries version 0x01000105 in the value field. This means that when you boot your board, the version read will be 01.00.01.05, or simply 1.0.1, because 05 is omitted. Considering that the release in this example is based on 1.0.4, it is recommended to change the value to 0x01000400, which means 1.0.4. If you want to see the correct version number, this change is necessary.

[Ramdisk]: This section needs the string image-spi-clanton.cpio.lzma replaced with image-spi-galileo-clanton.cpio.lzma, because if you check the images generated in /meta-clanton/yocto_build/tmp/deploy/images/, the image generated is named image-spi-galileo-clanton.cpio.lzma. Thus, this section should be as follows:

[Ramdisk]
address=0xffa60000
item_file=../../meta-clanton/yocto_build/tmp/deploy/images/image-spi-galileo-clanton.cpio.lzma
sign=yes
type=mfh.ramdisk_signed
svn_index=7

In general, the script also adjusts the path filenames, removing all PLAIN directories and pointing to valid paths.
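Since version fields like value=0x01000105 pack four version bytes into one 32-bit word, a short sketch shows how the hex value maps to the version string reported at boot. The helper names here are illustrative, not part of the BSP tools:

```python
# Decode the packed 32-bit Flash Image Version used in layout.conf.
# Helper names are illustrative, not part of the BSP tools.
def decode_flash_version(value):
    """Split a packed version word into its four byte fields (most significant first)."""
    return [(value >> shift) & 0xFF for shift in (24, 16, 8, 0)]

def format_flash_version(value):
    """Render the fields the way the firmware reports them, e.g. 01.00.01.05.
    (For these single-digit fields, decimal and hex render identically.)"""
    return ".".join("%02d" % field for field in decode_flash_version(value))

print(format_flash_version(0x01000105))  # stale 1.0.1 template value -> 01.00.01.05
print(format_flash_version(0x01000400))  # suggested 1.0.4 value      -> 01.00.04.00
```

With this encoding it is easy to see why 0x01000400 reads as 1.0.4, while the template's 0x01000105 reads as 1.0.1 (the trailing 05 is omitted).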
The sysimage brings the template with the old version because the tool has not required any changes since 1.0.1, and the template comes with the same version number. The other sections—like [NV_Storage], [RMU], [boot_stage1_image1], [boot_stage1_image2], and [boot_stage_2_compact]—search for the EDKII components that you created in the previous section; but pay attention to whether the paths point to the DEBUG or RELEASE build. This explains why you created the soft links in step 5 of the “Steps to Compile the Firmware” section of this chapter. The sections [Ramdisk], [LAYOUT.CONF_DUMP], [Kernel], [grub.conf], and [grub] try to find the elements that you generated after running the Yocto build; these sections rely on the simple symbolic links that the create_symlinks.sh script creates to the directories you decompressed. Thus, when you run the patch_sysimage.sh script, the changes mentioned are automatically done in the layout.conf files of each directory.

Using the SPI Tool

The SPI tool is in the spi-flash-tools directory that you decompressed. It is used to create the capsule files and binary files, with or without platform data. In the same directory as your layout.conf file, run the following command:

mcramon@ubuntu:∼/$ ../../spi-flash-tools/Makefile

Flash-missingPDAT.cap: This is the expected capsule file, absent of platform data, which you can flash to your Intel Galileo.

Flash-missingPDAT.bin: This is a binary file, absent of platform data, necessary to generate SPI images, which is discussed in the “Creating SPI Images with Platform Files” section.

FVMAIN.fv: This file is used to recover your board if it does not boot anymore. This is discussed in the “What to Do If Intel Galileo Bricks” section of this chapter.

Flashing the Capsule Files

After a long procedure and many hours creating your capsule file, it is time to test it by flashing the SPI flash memory. In fact, there are different ways to flash, as discussed in the next sections.
Flashing the Capsule File with the Intel Arduino IDE

This is the easiest way to flash the capsule file with the current software provided at the time this book was published. You just need to copy the Flash-missingPDAT.cap file into a specific folder of the IDE, as explained in the “Updating the Firmware with a Different Firmware” section of Chapter 3. This procedure only requires the regular USB data cable, which avoids having to copy the capsule files with a micro SD card or a USB pen drive.

Flashing the Capsule File with a Linux Terminal Shell

- 1. Connect the serial cable to Intel Galileo and open a Linux terminal, as explained in the “Preparing Your Cables” section of Chapter 1.
- 2. Check which release is currently being used on your board by checking the content of the file /sys/firmware/board_data/flash_version. It is possible to check this from a Linux terminal shell by typing the following command:

root@clanton:∼# cat /sys/firmware/board_data/flash_version
0x01000105

- 3. Copy the Flash-missingPDAT.cap that you created in the previous sections to a micro SD card or a pen drive properly formatted with FAT or FAT32 in a single partition, as described in the “Booting from SD Card” section of this chapter.
- 4. If your release is 0.7.5 or 0.8.0, run the following command:

# insmod /tmp/0.7.5/efi_capsule_update.ko

Or

# insmod /tmp/0.8.0/efi_capsule_update.ko

- 5. If your release is 0.9.0 or 1.0.0, run the following:

# modprobe efi_capsule_update

- 6. With newer releases, run the following:

# modprobe sdhci-pci
# modprobe mmc-block
# mkdir /lib/firmware
# cd /media/mmcblk0p1/
# cp Flash-missingPDAT.cap /lib/firmware/Flash-missingPDAT.cap
# echo -n Flash-missingPDAT.cap > /sys/firmware/efi_capsule/capsule_path
# echo 1 > /sys/firmware/efi_capsule/capsule_update
# reboot

Make sure that you really ran the reboot command; otherwise, the process to update the capsule file will not work.
Flashing the Capsule File with a UEFI Shell

The idea of this procedure is to open a UEFI shell as soon as the board boots and then flash your capsule file, but this only works if you have a board with a nonsecure boot; otherwise, the UEFI shell will be locked and this procedure will not work. You will need:

The Flash-missingPDAT.cap file, which must be present in the same directory as your layout.conf.

The CapsuleApp.efi file that was generated when you compiled the EDKII firmware. It must be present in the ./Quark_EDKII/Build/QuarkPlatform/PLAIN/DEBUG_GCC/FV/Applications/ directory or the ./Quark_EDKII/Build/QuarkPlatform/PLAIN/RELEASE_GCC/FV/Applications/ directory, depending on whether you compiled using release or debug flags, as discussed in the “Compiling the EDKII Firmware” section of this chapter.

You will need serial cables to open the terminal shell, as discussed in the “Preparing Your Cables” section in Chapter 1. This will allow you to debug the board using a serial audio jack cable for Intel Galileo or an FTDI cable for Intel Galileo Gen 2.

You need to know how to use some serial terminal software. Read the “Testing Your Cables” section in Chapter 1 to understand how to use putty for Windows or minicom for Linux or Mac OSX. However, you also need to configure the serial software to recognize special characters from your keyboard. For putty, click the left panel, Terminal ➤ Keyboard, and select the SCO box from the Functions and Keys and Keypad tab.

Finally, you need a micro SD card or a USB pen drive.

- 1. Format the micro SD card or the USB pen drive with FAT or FAT32 in a single partition, as described in the “Booting from SD Card Images” section of this chapter.
- 2. Copy the files CapsuleApp.efi and Flash-missingPDAT.cap to the micro SD card.
- 3. With the board off, keep the serial cable connected and open the serial terminal software, such as putty or minicom.
- 4. Power on the board by connecting the power supply.
- 5. When the initial screen appears just after the boot, press F7.
- 6. Select the UEFI internal shell.
- 7. Select the fs0 partition and check the files.
- 8. You should be able to see the files you copied to the micro SD card. In this case, just type the following command to start the flashing procedure:

fs0:\> CapsuleApp.efi Flash-missingPDAT.cap
CapsuleApp: SecurityAuthenticateImage 0xD504410 found.
CapsuleApp: creating capsule descriptors at 0xF0DE510
CapsuleApp: capsule data starts at 0xD504410 with size 0x6F6190
CapsuleApp: capsule block/size 0xD504410/0x6F6190

Flashing the Capsule File with the Firmware Update Tool

At the end of 2014, Intel provided a new tool called the Intel Firmware Update tool, which allows you to select the capsule files that come internally with the application, or to browse your desktop file system to select a custom one. This is a stand-alone application, very simple to use, and it does not require you to have the Arduino IDE installed.

Intel Galileo Firmware Update tool prototype

Subscribe to the Intel Maker Community forum to receive more information about this tool, as many other updates will be available.

Creating SPI Images Flash Files

Imagine this hypothetical situation: due to some mistake, you realized that you bricked your Intel Galileo and it does not boot anymore. Before ordering a new one, you can consider flashing using an SPI flash programmer, but you need to have the binary build patched with platform data files. In previous sections, I mentioned how to compile using Yocto and how to generate the capsule and binary files that do not contain platform data. To proceed, you must:

- 1. Have successfully compiled the UEFI firmware (EDKII packages).
- 2. Have identified the Ethernet MAC address of your board.
- 3. Have a flash programmer, such as DediProg.

As explained before, platform data contains information like the Ethernet MAC address of your board and which board model you have.
Thus, each board should contain a unique and exclusive platform file, because each board contains an exclusive MAC address.

The white label with the exclusive Ethernet MAC address

The tool responsible for generating the binary with platform data is actually a Python script named platform-data-patch.py in the .../spi-flash-tools/platform-data directory. The only thing that this script does is patch the binaries with that platform data configuration file. In this same directory there is a platform-data template called sample-platform-data.ini, as shown in Listing 2-8.

Listing 2-8. sample-platform-data.ini

# Every module contains:
# [unique name]
# id=decimal integer, data type identifier
# desc=string, short description of a data; max 10 characters
# data.value=[ABC | CAFEBEBA | xyz abc | /path/to/file ]
# data.type=[hex.uint[8/16/32/64] | hex.string | utf8.string | file]
# ver=decimal integer, version number; if not specified defaults to 0

# WARNING: the platform type data.value MUST match the MRC data.value below
[Platform Type]
id=1
desc=PlatformID
data.type=hex.uint16
# ClantonPeak 2, KipsBay 3, CrossHill 4, ClantonHill 5, KipsBay-fabD 6, GalileoGen2 8
data.value=2

# WARNING: the MRC data.value MUST match the platform type data.value above
[Mrc Params]
id=6
# If you are developing MRC for a new system you can alternatively
# inline the value like this:
# data.type=hex.string
# data.value=00000000000000010101000300000100010101017C9200001027000010270000409C000006

# The unique MAC address(es) owned by each device are typically found
# on a sticker. You must find it(them) and change the bogus values
# below.
[MAC address 0]
id=3
desc=1st MAC
data.type=hex.string
data.value=FFFFFFFFFF00

[MAC address 1]
id=4
desc=2nd MAC
data.type=hex.string
data.value=02FFFFFFFF01

Make a copy of this file, saving it using another name—for example, galileo-platform-data.ini—and open this file in the text editor of your preference.
As you can observe, this file is divided into sections such as [Platform Type], [Mrc Params], [MAC address 0], and [MAC address 1]; but what really matters for Intel Galileo boards are the sections [Platform Type] and [MAC address 0]. In each section there is a field called data.value= that is where you will need to modify the platform data.

In the section [Platform Type], there is the following comment:

# ClantonPeak 2, KipsBay 3, CrossHill 4, ClantonHill 5, KipsBay-fabD 6, GalileoGen2 8

If your board is an Intel Galileo Gen 2, data.value must receive the value 8; and although Intel Galileo is not mentioned explicitly, its value must be 6, as it is considered a KipsBay-fabD. For example, if your board is an Intel Galileo Gen 2, [Platform Type] must be changed in this way:

[Platform Type]
id=1
desc=PlatformID
data.type=hex.uint16
# ClantonPeak 2, KipsBay 3, CrossHill 4, ClantonHill 5, KipsBay-fabD 6, GalileoGen2 8
data.value=8

The other section that you need to modify is [MAC address 0], again changing the data.value field. For example, suppose your Ethernet MAC address white tag says MAC:984FEE014C6B; then this section must be changed to the following:

[MAC address 0]
id=3
desc=1st MAC
data.type=hex.string
data.value=984FEE014C6B

Now you need to generate the binary file patched with the platform data. First, take a quick look at the options offered by the platform-data-patch.py script:

mcramon@ubuntu:∼/$ ./platform-data-patch.py --help
Usage: platform-data-patch.py

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -i ORIGINAL_IMAGE, --original-image=ORIGINAL_IMAGE
                        input flash image [default: Flash-missingPDAT.bin]
  -p INPUT_FILE, --platform-config=INPUT_FILE
                        configuration (INI) file [default: platform-data.ini]
  -n MODIFIED_IMAGE, --name=MODIFIED_IMAGE
                        output flash image [default: Flash+PlatformData.bin]
  -u, --undefined-order
                        By default, items are put in the same order as they
                        come in the config file.
                        However ordering requires python 2.7 or above.

The script is very simple: the option -i indicates the input flash image, -p the platform-data file, and -n the name of your output file; -u must be used only if your version of Python is lower than 2.7. So, before using this script, check which Python version is installed on your computer by typing python --version in your console to determine whether the -u option must be used. If you have a recent version of Python, you just need to run the script. In the next example, the output file was named cool_binary.bin and the input files were the ones created as examples in this chapter:

./platform-data-patch.py -i ../../sysimage/sysimage.CP-8M-debug/Flash-missingPDAT.bin -p galileo-platform-data.ini -n cool_binary.bin

If the script ran smoothly, you should have the output file cool_binary.bin in the same directory. Next it is time to flash your image using the SPI flash programmer described in the next section.

If you make a mistake in [Platform Type] (for example, suppose you specify that data.value equals 6, which means Intel Galileo, but flash an Intel Galileo Gen 2 board), during the boot the firmware will recognize the incompatibility and will ask you to select the board type manually, displaying a menu that can be seen using the Linux terminal shell on your board:

Type '0' for 'ClantonPeakSVP' [PID 2]
Type '1' for 'KipsBay' [PID 3]
Type '2' for 'CrossHill' [PID 4]
Type '3' for 'ClantonHill' [PID 5]
Type '4' for 'Galileo' [PID 6]
Type '5' for 'GalileoGen2' [PID 8]

So, if you see this menu, the platform file on your board was not patched, or it was patched with wrong data.
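The two INI edits described above can also be scripted. The sketch below uses Python's configparser to set [Platform Type] and [MAC address 0] in a copy of the template; note that configparser drops comment lines on rewrite, so this illustrates the data flow rather than replacing the real patch workflow. The function name and sample values are mine, not part of the BSP:

```python
import configparser
import io

def patch_platform_data(ini_text, platform_id, mac):
    """Return ini_text with the two data.value fields that matter for
    Intel Galileo boards replaced (comment lines are not preserved)."""
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    cfg["Platform Type"]["data.value"] = str(platform_id)  # 6 = Galileo, 8 = GalileoGen2
    cfg["MAC address 0"]["data.value"] = mac               # MAC from the board's white label
    out = io.StringIO()
    cfg.write(out)
    return out.getvalue()

# Trimmed-down version of sample-platform-data.ini for demonstration.
SAMPLE = """\
[Platform Type]
id=1
desc=PlatformID
data.type=hex.uint16
data.value=2

[MAC address 0]
id=3
desc=1st MAC
data.type=hex.string
data.value=FFFFFFFFFF00
"""

patched = patch_platform_data(SAMPLE, 8, "984FEE014C6B")
print(patched)
```

Running it yields the same end state you would reach by editing galileo-platform-data.ini by hand before invoking platform-data-patch.py.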
Flashing Using an SPI Flash Programmer

It is recommended to use the flash programmer called DediProg SF100, which you can order from . Officially, the DediProg SF100 only works on Windows, but the open source community has a program called flashrom that supports the DediProg SF100 on Linux and Mac OSX as well. This section will focus on DediProg for Windows, but if you are a Linux or Mac OSX developer, you can use flashrom with DediProg by downloading it and using the following command line:

flashrom -p dediprog -r biosimage.rom

- 1. Connect the DediProg SF100 to Intel Galileo by using the SPI programmer terminal, as shown in Figure 2-15, but make sure that Intel Galileo is not connected to any power supply and that the DediProg SF100 is connected to your computer via a USB cable. There is no power supply involved in this process; the DediProg SF100 draws its energy from your USB port. Both boards, Intel Galileo and Intel Galileo Gen 2, offer the SPI flash port in basically the same position.

DediProg SF100 connected to Intel Galileo Gen 2

- 2. After running the installer that comes with the DediProg SF100, run the program DediProg Engineering. The first thing that this program will ask about is the SPI flash memory present on the board. If you take a quick look at the Intel Galileo schematics ( ), you will notice that Intel Galileo and Intel Galileo Gen 2 use the same type of memory, W25Q64FV, as shown in Figure 2-16. Just select the right memory and click the OK button.

Selecting the right SPI flash memory and Intel Galileo schematics

- 3. Configure the DediProg SF100 to Vcc 3.5V.
- 4. Select the binary file to program.
- 5. Now it is time to program. Click the Erase option to erase the SPI flash memory. Then click Prog to program the SPI flash memory. Finally, click Verify to make sure that your binary was written correctly to the memory. Figure 2-19 shows the process of each step. If the verification fails, repeat this step until you have the SPI flash memory properly programmed. This easily happens if you did not select the 3.5V mentioned in step 3.

Erasing, programming, and verifying steps in DediProg SF100

What to Do If Intel Galileo Bricks

There are a few ways to brick your board:

You lost power during the flash.
The SPI programmer flashed some part of the memory incorrectly, corrupting the SPI, and you could not detect the error because you did not verify.

You made a mistake in the layout.conf file.

You patched the binaries, declaring a wrong board model in the platform data. For example, you have an Intel Galileo and you incorrectly modified the platform data to Intel Galileo Gen 2.

There are two things you can try:

- 1. Use an SPI flash programmer to reprogram the SPI flash memory. This procedure was mentioned in the “Flashing Using an SPI Flash Programmer” section of this chapter.
- 2. Use FVMAIN.fv, created with the SPI flash tool from the “Using the SPI Tool” section of this chapter.

To recover using FVMAIN.fv:

- 1. Power off Intel Galileo, removing the power supply.
- 2. Copy FVMAIN.fv to a USB pen drive.
- 3. Keep the serial debug cable (FTDI or serial audio jack) connected and open the serial terminal software of your preference. Read the “Preparing Your Own Cables” section in Chapter 1 if you do not know how to do that.
- 4. Connect the USB pen drive to the USB OTG port on your Intel Galileo Gen 2. If you are using Intel Galileo, you will need an adaptor like the one shown in Figure 5-20 of Chapter 5.
- 5. Connect the resistor to ground to enter the recovery mode.
- 6. Connect the power supply to Intel Galileo.
- 7. In the serial shell, a list of platforms will be shown; choose the Galileo model.
- 8. Remove the resistor from ground.
- 9. In the serial shell, select the system recovery option. The system recovery will take around 6 minutes to complete.

These are the two methods that you can try. I wish you sincere good luck with them.

Summary

In this chapter, you received an overview of how the Yocto build system works and how to generate SD card and SPI releases for Intel Galileo boards, as well as how to generate the toolchain and IPK packages. You also tested the cross-compilers present in the toolchain by creating a simple native program and then running it on Intel Galileo to test it.
The chapter also explained the differences between capsule and binary files with platform data, how to build firmware on EDKII repositories, and how to recover your board if bad firmware was flashed.
https://link.springer.com/chapter/10.1007/978-1-4302-6838-3_2
switch Statement

The switch statement is used to make choices between multiple options. This is similar to situations we face in our lives, like which college to study at, which car to buy, etc. Switch statements in the C language allow you to handle these kinds of situations more effectively. A switch statement allows you to test for equality against a number of choices, and each choice is called a case.

The syntax of the switch statement is as follows:

switch (expression)
{
    case constant1:
        statements;
    case constant2:
        statements;
    ...
    default:
        statements;
}

Example:

#include <stdio.h>

int main()
{
    int a = 8;
    switch (a)
    {
        case 1:
            printf("Case1: Value is: %d", a);
        case 2:
            printf("Case2: Value is: %d", a);
        case 3:
            printf("Case3: Value is: %d", a);
        default:
            printf("Default: Value is: %d", a);
    }
    return 0;
}

Output:

Default: Value is: 8

Points To Ponder:

- The expression in a switch case must be an integer or character constant.
- There is no need to enclose the statements in braces (as in if-else statements).
- The default case is optional. If there is no default case and no case matches, execution continues after the switch.
- The expression must be of the same data type as the choices/cases.
- The statements inside a choice/case are executed until a break statement is found; without break, execution falls through to the following cases.
- Each and every statement must belong to some case or the other.
https://www.studymite.com/c-programming-language/switch-statement
(Revised March 10, 2015) Starting with Xcode version 5 (released with OS X 10.9 Mavericks), Apple has removed support for gcc, such that gcc is no longer actually the GNU Compiler Collection, but is symlinked to the clang compiler. Users still hoping to access a C compiler for their projects, such as in building C-extensions using Cython, should generally not run into any problems in using the symlinked gcc (or directly using clang), as clang uses the same LLVM backend and libraries as Apple's previous gcc compiler. However, if you are building a C++ library (via clang++) that you will later link with a Python extension, or if you are building a Python extension that uses a C++ library, you need to use the older libraries (libstdc++, and not the clang++ default of libc++) via these compiler/linker flags: -stdlib=libstdc++ -mmacosx-version-min=10.6 As it may be preferable for some users, it is still possible to install and use Xcode 4.6.3 on OSX 10.9 by searching around on developer.apple.com. Please note that these developer tools do not include the 10.9 SDK. Sturla Molden writes (Thanks, Sturla, we will update ASAP): This is very misleading. From my understanding, this is what we need to know about building C extensions on Mavericks and Xcode 5 with command line tools: Apple's former llvm-gcc used LLVM as backend, and so does clang. Only the C++ standard library has changed. Do not use MacPorts or Homebrew to reinstall GCC! It will use GCC as backend, not LLVM, and will not be binary compatible with Apple's llvm-gcc! Reinstalling GCC this way would be the worst thing to do, but Weaver recommends it. Intel C++ Composer XE is binary compatible with llvm-gcc. Just export CC=icc and CXX=icpc and you will be safe. The only thing that has changed at binary level is the standard C++ library. Unless you need to link the same C++ library as Canopy, I would say just use clang from Xcode 5. It is very unlikely to break. 
The C compiler uses the same backend and libraries as llvm-gcc and should be binary compatible. Here is some extra information I think might be useful:

To make clang++ produce binaries compatible with g++ from llvm-gcc-4.2.1, use these compiler and linker flags:

-stdlib=libstdc++ -mmacosx-version-min=10.6

These will make sure we link with the 10.6 CRT and that we do not link with libc++ (the default C++ library for clang++). For the clang C compiler, note that the 10.9 SDK has crt1.10.6.o after the Xcode 5 command line tools are installed. That means passing -mmacosx-version-min=10.6 to the C compiler and linker will make sure the same CRT as Canopy is linked. We do not need to use the 10.6 SDK as -isysroot for this.

Two side notes: Intel released a service pack for their compilers and for MKL two days ago. They are now claiming to be Mavericks compatible. Cython got an important bugfix today. It is required for building C++ extensions with the Xcode 5 toolset.

Thanks for all the information, Sturla! I've made some updates to the article based on your very helpful suggestions and my discussions with our developers.

If you have problems building C++ extensions with Cython on Mavericks, make sure you have this patch in your Cython. Currently this means you should go and get Cython master from GitHub. There is no release with this patch yet.

Explanation: Cython uses "placement new" to put C++ objects inside memory managed by Python. This means that Cython must call destructors explicitly before the memory is freed, as C++ has no "placement delete". There is a bug or C++ standard pedantry (still a matter of dispute) in the resolution of destructor calls in clang++, the most common C++ compiler on Mavericks. This means that code like std::string::~string() will not compile on Mavericks unless we use a different C++ compiler. Cython now uses a small utility function that works around this namespace resolution bug/peculiarity with a C++ template.
(BTW: Other C++ projects have been affected too, such as Mozilla Firefox.)

I get tons of errors like:

expected initializer before '__AVAILABILITY_INTERNAL__MAC_10_2_DEP__MAC_10_9'

on Mavericks with Xcode 4.6.3. Thanks for any help.
https://support.enthought.com/hc/en-us/articles/204469410-OS-X-GCC-Clang-and-Cython-in-10-9-Mavericks
Hey All,

If anyone has a fix for this it would be greatly appreciated, or just let me know if I'm missing something, but I can't seem to get a port open while running on iOS. I've read that a common reason for this is that the port is still in use from a previous session of the application that failed to close the port properly, but that is simply not the case here. To test this, I've tried looping through a variety of different port ranges on application start, and this error occurs on every port attempted...

import flash.net.ServerSocket;
import flash.net.Socket;
...
for (var port:Number = 1000; port <= 1020; port++) {
    var serverSocket:ServerSocket = new ServerSocket();
    serverSocket.addEventListener(Event.CONNECT, socketConnectHandler);
    serverSocket.bind(port);
    serverSocket.listen();
}

It breaks every time on serverSocket.bind(port) with "Error #2002: Operation attempted on invalid socket."

Any help would be greatly appreciated. Also, I'm using Flash Builder 4.6 compiling to the Flex 4.6.0 SDK. Thanks!

From the docs for ServerSocket:

Note: This feature is supported on all desktop operating systems, but is not supported on mobile devices or AIR for TV devices.

You should trace ServerSocket.isSupported and you'll see it's not supported on iOS devices. Believe me, I frikin wish they'd support that and SecureSocket.
https://forums.adobe.com/thread/997569?tstart=0
Opened 9 years ago
Closed 9 years ago

#2865 closed defect (fixed)

Reference to request.META.SERVER_PORT causes httpd crash on Apache 2.2.2/mod_python 3.2.8

Description

Starting from #3866, a reference to request.META.SERVER_PORT causes httpd to crash on Apache 2.2.2/mod_python 3.2.8/Python 2.4.3/MacOSX 10.4.7. Here is the relevant error_log:

[Tue Oct 03 12:35:05 2006] [notice] child pid 2649 exit signal Trace/BPT trap (5)
dyld: lazy symbol binding failed: Symbol not found: _apr_sockaddr_port_get
  Referenced from: /usr/local/apache2/modules/mod_python.so
  Expected in: flat namespace
dyld: Symbol not found: _apr_sockaddr_port_get
  Referenced from: /usr/local/apache2/modules/mod_python.so
  Expected in: flat namespace

This seems to be caused by referencing the connection.local_addr attribute of a request object. Since I could not find any alternative way to find the correct SERVER_PORT, I suggest just reverting #3866.

Change History (2)

comment:1 Changed 9 years ago by Yasushi Masuda <ymasuda@…>
- Component changed from Admin interface to Core framework

comment:2 Changed 9 years ago by adrian
- Resolution set to fixed
- Status changed from new to closed
(In [3927]) Fixed #2865 -- Reverted [3866] (problem with mod_python SERVER_PORT
https://code.djangoproject.com/ticket/2865
neural-storyteller

neural-storyteller is a recurrent neural network that generates little stories about images. This repository contains code for generating stories with your own images, as well as instructions for training new models. Samim has made an awesome blog post with lots of results here. Some more results from an older model trained on Adventure books can be found here.

The whole approach contains 4 components:

- skip-thought vectors
- image-sentence embeddings
- conditional neural language models
- style shifting (described in this project)

The 'style-shifting' operation is what allows our model to transfer standard image captions to the style of stories from novels. The only source of supervision in our models is from Microsoft COCO captions. That is, we did not collect any new training data to directly predict stories given images. Style shifting was inspired by A Neural Algorithm of Artistic Style but the technical details are completely different.

How does it work?

We first train a recurrent neural network (RNN) decoder on romance novels. Each passage from a novel is mapped to a skip-thought vector. The RNN then conditions on the skip-thought vector and aims to generate the passage that it has encoded. We use romance novels collected from the BookCorpus dataset. Parallel to this, we train a visual-semantic embedding between COCO images and captions. In this model, captions and images are mapped into a common vector space. After training, we can embed new images and retrieve captions.

Given these models, we need a way to bridge the gap between retrieved image captions and passages in novels. That is, if we had a function F that maps a collection of image caption vectors x to a book passage vector F(x), then we could feed F(x) to the decoder to get our story. There is no such parallel data, so we need to construct F another way.
It turns out that skip-thought vectors have some intriguing properties that allow us to construct F in a really simple way. Suppose we have 3 vectors: an image caption x, a "caption style" vector c and a "book style" vector b. Then we define F as

F(x) = x - c + b

which intuitively means: keep the "thought" of the caption, but replace the image caption style with that of a story. Then, we simply feed F(x) to the decoder.

How do we construct c and b? Here, c is the mean of the skip-thought vectors for Microsoft COCO training captions. We set b to be the mean of the skip-thought vectors for romance novel passages that are of length > 100.

What kind of biases work?

Skip-thought vectors are sensitive to:
- length (if you bias by really long passages, it will decode really long stories)
- punctuation
- vocabulary
- syntactic style (loosely speaking)

For the last point, if you bias using text all written the same way, the stories you get will also be written the same way.

What can the decoder be trained on?

We use romance novels, but that is because we have over 14 million passages to train on. Anything should work, provided you have a lot of text! If you want to train your own decoder, you can use the code available here. Any models trained there can be substituted here.

Dependencies

This code is written in python. To use it you will need: For running on CPU, you will need to install Caffe and its python interface.

Getting started

You will first need to download some pre-trained models and style vectors. Most of the materials are available in a single compressed file, which you can obtain by running

wget

Included is a pre-trained decoder on romance novels, the decoder dictionary, caption and romance style vectors, MS COCO training captions and a pre-trained image-sentence embedding model. Next, you need to obtain the pre-trained skip-thoughts encoder.
Go here and follow the instructions on the main page to obtain the pre-trained model. Finally, we need the VGG-19 ConvNet parameters. You can obtain them by running

wget

Note that this model is for non-commercial use only. Once you have all the materials, open config.py and specify the locations of all of the models and style vectors that you downloaded.

For running on CPU, you will need to download the VGG-19 prototxt and model by:

wget
wget

You also need to modify the pycaffe and model paths in config.py, and modify the flag in line 8 as:

FLAG_CPU_MODE = True

Generating a story

The images directory contains some sample images that you can try the model on. In order to generate a story, open IPython and run the following:

import generate
z = generate.load_all()
generate.story(z, './images/ex1.jpg')

If everything works, it will first print out the nearest COCO captions to the image (predicted by the visual-semantic embedding model). Then it will print out a story.

Generation options

There are 2 knobs that can be tuned for generation: the number of retrieved captions to condition on as well as the beam search width. The defaults are

generate.story(z, './images/ex1.jpg', k=100, bw=50)

where k is the number of captions to condition on and bw is the beam width. These are reasonable defaults, but playing around with these can give you very different outputs! The higher the beam width, the longer it takes to generate a story.

If you bias by song lyrics, you can turn on the lyric flag, which will print the output in multiple lines by comma delimiting. neural_storyteller.zip contains an additional bias vector called swift_style.npy, which is the mean of skip-thought vectors across Taylor Swift lyrics. If you point path_to_posbias to this vector in config.py, you can generate captions in the style of Taylor Swift lyrics.
For example:

generate.story(z, './images/ex1.jpg', lyric=True)

should output

You re the only person on the beach right now you know I do n't think I will ever fall in love with you and when the sea breeze hits me I thought Hey

Reference

This project does not have any associated paper with it. If you found this code useful, please consider citing:

@article{kiros2015skip,
  title={Skip-Thought Vectors},
  author={Kiros, Ryan and Zhu, Yukun and Salakhutdinov, Ruslan and Zemel, Richard S and Torralba, Antonio and Urtasun, Raquel and Fidler, Sanja},
  journal={arXiv preprint arXiv:1506.06726},
  year={2015}
}

If you also use the BookCorpus data for training new models, please also consider citing:

Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler. "Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books." arXiv preprint arXiv:1506.06724 (2015).

@article{zhu2015aligning,
  title={Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books},
  author={Zhu, Yukun and Kiros, Ryan and Zemel, Richard and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
  journal={arXiv preprint arXiv:1506.06724},
  year={2015}
}
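As an aside, the style-shifting operation F(x) = x - c + b described earlier is simple enough to sketch in plain Python. The vectors below are tiny 3-dimensional toy stand-ins for skip-thought vectors, just to show the arithmetic; real vectors are high-dimensional and produced by the trained encoder, and the function names are illustrative, not from the repository's code:

```python
def mean_vector(vectors):
    """Elementwise mean of a list of equal-length vectors."""
    count = len(vectors)
    return [sum(dim) / count for dim in zip(*vectors)]

def shift_style(x, c, b):
    """F(x) = x - c + b: keep the 'thought' of caption vector x,
    but swap the caption style c for the book style b."""
    return [xi - ci + bi for xi, ci, bi in zip(x, c, b)]

# Toy 3-d stand-ins (illustrative only, not real skip-thought vectors)
caption_vectors = [[1.0, 2.0, 0.0], [3.0, 0.0, 2.0]]  # stand-in for COCO captions
passage_vectors = [[0.0, 4.0, 4.0], [2.0, 2.0, 2.0]]  # stand-in for novel passages

c = mean_vector(caption_vectors)  # mean caption-style vector
b = mean_vector(passage_vectors)  # mean book-style vector
x = [2.0, 1.0, 1.0]               # a new caption's vector

story_vector = shift_style(x, c, b)  # in the real system, this is fed to the decoder
```

The whole trick is that c and b are just means over existing skip-thought vectors, so no parallel caption/passage data is ever needed.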
https://reposhub.com/python/deep-learning/ryankiros-neural-storyteller.html
Receive data (image, video, text) from RS-232 or BNC cable and save to local directory by C#

I am trying to connect the device to the computer by RS-232 or BNC cable to receive video and image and save it to my computer. How can I do this in C#, like the program Bandicam? Bandicam can receive video from a BNC cable.

- Why getScaledInstance keep returning ToolkitImage

- Calling Objects and Passing Variables in Python Pygame Not Working Error Says Variable Not Defined

I'm a bit confused as to how objects and passing work in Python. I would like to make an object to call functions of a class, but I can't seem to get the mainWindow variable to pass. I keep getting an error saying mainWindow has not been defined. I made a program in Tkinter years ago and had everything within a main method, and after each function was finished it would call the next function and pass variables. I wanted to see if I could just do the same thing by calling functions through an object.

import pygame
pygame.init

class PreviewWindow:
    def __init__(self):
        mainWindow = pygame.display.set_mode((800, 600))
        pygame.display.set_caption('Sprite Viewer')

    def loadImage(self, mainWindow):
        userImage = pygame.image.load('well.png')
        imageSize = userImage.get_rect()

    def drawImage(self, userImage, imageSize):
        mainWindow.blit(userImage, imageSize)
        pygame.display.flip()

previewObj = PreviewWindow
previewObj.loadImage(mainWindow)
previewObj.drawImage(mainWindow, userImage, imageSize)

I want to understand how to call functions within classes while being able to pass variables and functions to said functions.

- Search for Char Sequence in inputStream of Bytes and extract the String until a specific char is reached

I have an inputStream of Bytes that I read from a network device.
- Example:- "HERE BNHKHAKJSKJAJS START 123 ABC XYZ 456 789 GHY END" - I will have to "doSomething() when "HERE" is read by the application and then when "START" is read , I have to prepare the String payload = "123 ABC XYZ 456 789 GHY" and then When "END" is read I have DoSomething2(payload). Below example extracts the number of bytes that can be read but I am looking for a characters once you encounter those keywords(HERE, START and END) as I mentioned above public class Main {); } } } New to JavaIO could you please Help ? Much appreciated. Thank you. - How to read response after an echo command on /dev/ttyACM0 device file? I am using a USB-RLY08-B relay device (datasheet). I am able to set the relay switches on/off using the following commands : echo -n -e '\x74' > /dev/ttyACM0 However to get the relay states, i need to pass Hex 5B code to the relay. The problem is I am not able to figure out how to read the response back after the echo command. I tried read X < /dev/ttyACM0 but it hangs. - Arduino Nano no serial communication SIM800C I am trying to get my SIM800C to talk with my Arduino. There is no communication happening, though. #include <SoftwareSerial.h> SoftwareSerial at(2, 3); void setup() { Serial.begin(9600); at.begin(9600); } void loop() { // try every 2 seconds delay(2000); Serial.println("sending AT ... "); at.println("AT"); while (at.available() > 0) { Serial.write(at.read()); } } I am not able to get an OKback. SIM800C is supposed to detect the baud rate by itself. I am sure there has to be a simple stupid mistake. I just don't know what to do at this point. I obviously already checked for cable break. Out of desperation I already tried to switch RXand TX. I also tried different baud rates (whatever is within the usual limitations of SoftwareSerial) but it should automatically detect it once a couple of AT commands got in anyway. - How to fix 'TimeoutError: connection failed'? 
I'm making an online video game, and just downloaded these files that will (maybe) help me with the server, but I'm running into a TimeoutError:[WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, ... right when I tried to test it. This is for (hopefully) getting a job, and I'm going to sell the game soon after it is done. I've tried google, but that made me fall back to a existing question that did not help. self.host = "10.11.250.207" # For this to work on your machine this must be # equal to the ipv4 address of the machine # running the server # You can find this address by typing ipconfig # in CMD and copying the ipv4 address. Again # this must be the servers ipv4 address. This # field will be the same for all your clients. self.port = 5555 self.addr = (self.host, self.port) self.id = self.connect() You see the comment? I tried doing what it says, it says it works, but the error is nothing about that. The full error is this: Traceback (most recent call last): File "C:\Users\roche\Documents\games\base-defense\network.py", line 30, in <module> n = Network() File "C:\Users\roche\Documents\games\base-defense\network.py", line 13, in __init__ self.id = self.connect() File "C:\Users\roche\Documents\games\base-defense\network.py", line 16, in connect self.client.connect(self.addr) TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond - JHipster postgresql connection fail What causes this connection error in JHipster? 
2019-07-15 10:11:55.643 DEBUG 30748 --- [nfoReplicator-0] org.postgresql.Driver : Connecting with URL: jdbc:postgresql://localhost:5432/{application_name} 2019-07-15 10:11:55.643 DEBUG 30748 --- [nfoReplicator-0] o.p.core.v3.ConnectionFactoryImpl : Trying to establish a protocol version 3 connection to localhost:5432 2019-07-15 10:11:55.644 DEBUG 30748 --- [nfoReplicator-0] org.postgresql.Driver : Connection error::280) - How to connect my Visual Studio program using Visual Basic to MySQL? Good day. I'm new at connecting my Visual Basic using Visual Studio 2015 to MySQL. I would love to get a walkthrough from you guys that can help. Thanks.
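Several of the questions above (the game-server TimeoutError and the JHipster/PostgreSQL failure) come down to the same symptom: a TCP connect that never gets an answer from the target host/port. A minimal, hedged Python sketch for probing an address with an explicit timeout — the host and port below are placeholders, not values from any of the questions:

```python
import socket

def try_connect(host, port, timeout=3.0):
    """Attempt a TCP connection; return (ok, error_message)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, ""
    except OSError as exc:  # TimeoutError and ConnectionRefusedError are subclasses
        return False, str(exc)

# Port 1 on localhost is normally closed, so this should fail fast.
ok, err = try_connect("127.0.0.1", 1, timeout=0.5)
```

Roughly: a timeout usually means the address/port is wrong or a firewall is silently dropping packets, while "connection refused" means the host answered but nothing is listening on that port.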
http://quabr.com/56744010/receive-data-image-video-text-from-rs232-or-bnc-cable-and-save-to-local-dir
Contents tagged with LINQ

Create Tag-Cloud from RSS Feed in ASP.NET MVC:

public class HomeController : Controller
{
    public ActionResult Index()
    {
        //get feed
        var feed = SyndicationFeed.Load(XmlReader.Create(""));
        //get flat list of all categories/tags in the feed
        var categoriesList = feed.Items
            .SelectMany(item => item.Categories)
            .Select(category => category.Name);
        var categoryGroups = categoriesList.GroupBy(category => category);
        decimal maxNrOfACategory = categoryGroups.Max(w => w.Count());
        //build a dictionary with category/percentage of all categories
        var tagCloudDictionary = new Dictionary<string, int>();
        foreach (var tag in categoryGroups)
        {
            var percent = (tag.Count() / maxNrOfACategory) * 100;
            tagCloudDictionary.Add(tag.Key, (int)percent);
        }
        return View(tagCloudDictionary);
    }
}

@model Dictionary<string, int>
@{
    ViewBag.Title = "Index";
}
<h2>Tag cloud</h2>
<div style="width: 400px; font-size: 25px;">
@foreach (var tag in Model)
{
    <span style="font-size: @(tag.Value / 2 + 50)%; ">
    @if (tag.Value > 10)
    {
        <span style=" font-weight: bold;">@tag.Key </span>
    }
    else
    {
        <span>@tag.Key </span>
    }
    </span>
}
</div>

Obviously, to be able to click on a tag and so on you need to create a richer view-model; I just wanted to show how I grab and count the tags from the feed.

Please Feedback! Unity, nHibernate, Fluent, Linq... Trying New Architecture Combinations

We're thinking about a new architecture for a set of pretty large WCF (and perhaps also REST) services that's going to be developed during the next year, and perhaps you dear reader would like to comment!

The services themselves are pretty straightforward: we're going to serve data from a huge SQL Server database which unfortunately is old and lacks relationships/FKs/PKs between most tables. Nothing we can do about that I'm afraid, but we thought that nHibernate could be a good way to map data to domain objects.
Some of the services will need to handle more complex business rules, but no heavy computing or long-running stuff (not atm anyway). What more can I say... everything is going to be running on .NET 3.5, and we have a pretty good view of the business domain. We're currently modelling the information/domain together with the customer, and we will probably be developing the whole thing in a DDD-ish way, using unit and integration tests, IoC...

So, currently we're thinking of WCF/REST for the service layer, Unity as container, and building something around nHibernate for the Repository (looking at the IRepository implementation in Fluent). We're new to nHibernate but have been looking at nHibernate.Linq, which looks really nice, and I think we'll use the Fluent API and map the classes in code instead of using XML configuration (which I'm allergic to). Just have to figure out a good way to make it all fit together, especially to get a decent UnitOfWork to work with the nHibernate session and the repository. I'll see what I come up with and post it here. Ideas? Please comment or send me an email if you like. ;)
http://weblogs.asp.net/jdanforth/Tags/LINQ
Symbian C++ Coding Standards Quick Start

All successful software projects involving more than one person have a set of coding standards. Sometimes those standards are implicit, but the larger the project, the more likely they are to be written down explicitly. Contributions to Symbian itself should follow a very extensive set of coding standards and conventions (hosted on). However, these standards are too much for a new developer to learn all at once and may not all be appropriate for other projects intended to run on the platform.

This guide splits up some of the more important rules in the coding standards for native development, according to their purpose. All of the coding standards exist for a reason. The most important reason is to make code safe (in other words, to prevent crashes and memory leaks). The most common reason is to make code easier to maintain (which may be more robust to change or expansion, easier to understand, or simply consistent). Another valid reason for defining a coding standard is to improve the efficiency of the code. However, efficient can mean different things – we might be talking about reducing code footprint, dynamic memory consumption or CPU usage, or increasing the productivity of developers using an API. The requirements for improved efficiency often conflict with one another and result in trade-offs which change based on the hardware that the code is running on, so coding standards in this category should be evaluated more carefully and revised more often than the others.

The rules are further split by applicability. Some of the standards apply to all native code, some relate to C++ language features and others are specific to Symbian idioms.

Code Formatting

File templates

To quickly understand the formatting rules, try looking at the H and CPP file templates. A full list of templates for the variety of file types is here.
The templates should be used when adding new files, but are also great as an illustration of what the code should look like.

Follow existing formatting conventions

If you are more geared towards fixing bugs, the quickest way to start is to follow the conventions already present in the code.

Refer to the full coding standards

Finally, the Symbian Coding Standards and Conventions define a large set of rules for code formatting. If in doubt, this page can be used as a reference.

Code Safety

C/C++/Symbian C++

Compile With No Warnings

Set your compiler to use the highest warning level and compile your code cleanly so it emits no warnings. If you do get warnings, make sure you understand why and then alter your code to remove them, even if the code appears to run correctly despite generating a warning.

Memory Allocation Failure

When memory is allocated on the heap, it is essential to determine whether the allocation succeeded or failed, and act accordingly. Either the leaving overload of operator new should be used (new(ELeave)), or the pointer returned from the call to allocate the memory should be checked against NULL. In future versions of the Symbian platform, throwing operator new will also be a suitable alternative.

No Side-Effects from Assertions

Assertion statements should not perform 'side effects', for example, they should not modify the value of a variable.

C++/Symbian C++

Type Casts

C++-style casts should be used in preference to C-style casts. For example, static_cast<type>(expression) should be used in preference to (type)(expression). Help the compiler to help you!

Note: The Symbian OS macros for the C++ cast operators (for example, STATIC_CAST(type, expression)) are deprecated.

Object Destruction

When de-referencing a pointer in a destructor, the validity of the pointer should first be checked in case it has not been initialized or it has already been destroyed by another destructor in the inheritance hierarchy.
Pointers that are deleted outside of a destructor that can later be accessed (for example, member variable pointers) should be set to NULL or immediately re-assigned. Resource handles and heap-based member data should be cleaned up in a destructor if they are owned by a class. A destructor should not be able to leave or to throw any type of C++ exception.

Strings and Buffers

Proven string classes like the Symbian OS descriptor classes should be used in preference to native strings and memory buffers or alternative, hand-crafted string classes. Classes provided by the Qt application framework (QString and QByteArray) are suitable alternatives, and classes provided by the STL will be suitable in future versions of the platform when used in conjunction with a throwing operator new. Be extremely careful when dealing with C-style strings and arrays.

Symbian C++ Only

Multiple Inheritance

Multiple inheritance should be used only through interfaces defined by M classes. Inherit from one CBase-derived class and one or more M classes (the CBase-derived class should appear first in the inheritance list to ensure that destruction by the cleanup stack occurs correctly).

Object Construction

The code inside a C++ constructor should not leave. Construction and initialization code that is non-trivial and could fail should be implemented in the second phase of a two-phase constructor, typically in a ConstructL() method.

Safe Cleanup

The cleanup stack should be used to avoid orphaning of memory or resources in the event of a leave. If an object is being deleted, it should not be on the cleanup stack. An object should not be simultaneously owned by another object and on the cleanup stack. This means that class member data should never be placed on the cleanup stack if it is also destroyed in the class destructor, as is usually the case. Functions with a 'C' suffix on their name (such as NewLC()) automatically put a pointer to the object(s) they allocate on the cleanup stack.
Therefore, those objects should not also be pushed on to the cleanup stack explicitly or they will be present twice. A TRAP may be used within a destructor of a class stored on the heap, but it is not safe to use a TRAP within any destructor that may be called as the stack unwinds. When calling code that can throw a C++ exception, catch everything and convert the exceptions to leaves. TRAP and TRAPD will panic if they receive a C++ exception which was not one raised by a Symbian leave (of type XLeaveException).

Ease of Maintenance

C/C++/Symbian C++

Header Files

A header file should be 'self-sufficient'. It should compile on its own, and include any headers that its contents depend upon. Code that includes the header should not also have to include others in order to compile. A header file should not include more headers than strictly necessary. Where appropriate, prefer the use of forward references to reduce the number of inclusions. Include guards should be used to protect against multiple inclusions of headers. For example:

#ifndef GAME_ENGINE_H__
#define GAME_ENGINE_H__
... // Header file code goes here
#endif // GAME_ENGINE_H__

Code Style

Explicit comparison should be used when testing integers: use if(aLength!=0) rather than if(aLength). Hard-coded 'magic' numbers should be avoided. Constants or enumerations should be preferred instead. In switch statements, the use of fall-through in a case statement should be commented to convey the intent. A switch statement should also always contain a default statement, even if this does nothing but assert that it is never reached.

Use and Comment Assertions

Use the __ASSERT_DEBUG assertion macro liberally to detect programming errors. It may sometimes also be appropriate to use __ASSERT_ALWAYS to catch invalid runtime input. Assertion statements should be explained using a comment in the code to document the assumptions they make and render them easier to maintain.
C++/Symbian C++

Class and Function Design

Enumerations should be scoped within the class or namespace to which they are relevant, and should not be global, unless they are required to be, to avoid polluting the global namespace. In class design, overloaded functions should be used in preference to default parameters. Private inheritance should be avoided except in rare cases. Prefer to use composition instead.

The inline keyword should be specified explicitly for those functions that should be inlined. Note that functions defined within the class declaration are implicitly inline. If this is intended, it is easier to write, but it's easier to read, and hence maintain, if you declare them as inline and define them separately in the header or *.inl file. Note: the compiler may, or may not, decide to expand the function inline in either case.

In functions, reference parameters should generally be used in preference to pointers, except where ownership is transferred or if it is valid for the parameter to be NULL. In those cases, a pointer must be used. In functions, the const specifier should always be used for parameters and return values if the data is not intended to be modified.

Private Exports

Private functions should only be exported if:
- they are accessed by public inline members
- they are virtual and the class is intended to be derived from in code which is not part of the exporting module.

Symbian C++ Only

Naming Conventions

Follow the Symbian OS naming conventions. If all members of a team follow the same naming conventions in their code, it can be shared and maintained more easily, because it is immediately understandable. This also applies to code that has been created by a completely different team, such as a utility library or the SDKs for working on Symbian itself.
Symbian and its licensees use a specific set of rules when naming classes, variables and functions, so that those using the development kits can easily understand how to use the APIs and example code provided. The concrete types defined in e32def.h should be used in lieu of the language-defined basic types. For example, TInt should be used instead of int.

Interface Classes

M classes should be used to specify the interface only, not the implementation, and should generally contain only pure virtual functions.

Descriptor Parameters

When using descriptors as function parameters, the base classes (TDes and TDesC) should be used, and should be passed by reference.

Handling Date and Time Values

The Symbian OS date and time classes (such as TDateTime and TTime) should be used instead of hand-crafted alternatives.

Static Factory Functions

Where two-phase construction is used, one or more public static factory functions (typically called NewL() and NewLC()) should be provided for object construction. The construction methods themselves should be private or protected.

Check Correct Cleanup

The checking methods of CleanupStack::Pop() and CleanupStack::PopAndDestroy() should be used where possible, to check that PushL() and Pop() calls are balanced.

Efficiency

C/C++/Symbian C++

Argument Passing

In functions, pass by value should only be used for basic data types or stack-based objects (such as structures or Symbian OS T-class objects) that are small (<8 bytes). Pass by reference (or pointer in C) should be used for larger objects (≥8 bytes). Also, the number of function parameters should be as small as possible.

Testing Pointers

When testing pointer values for NULL, it is unnecessary to use explicit comparison, and you should prefer to use if(ptr) over if(ptr!=NULL), unless writing the comparison in full clarifies the intent of the code.
Minimize Stack Use

Stack use should be minimized as much as is possible, for example, use the heap to store large data structures instead of the stack and take care when declaring larger stack-based variables within loops or recursive functions. Use of the TFileName and TParse classes is a common cause of stack consumption, because objects of both types are over 512 bytes in size (remember that the default stack size is only 8KB).

Avoid Writable Global Data in DLLs

The use of writable global data should be avoided, where possible, in DLLs. Note: there are also issues with this in both the emulator and the free GCCE toolchain that make this good advice, beyond the small efficiency gain.

C++/Symbian C++

No Inline Exports

Inline functions should never be exported, regardless of whether they are public, protected or private.

Member Initialization

For efficiency, when initializing member variables in a constructor, the use of member initialization lists should be preferred over assignment statements. The member data of a C class does not require explicit initialization to zero.

Use Thin Templates

Thin templates should be used for templated code, to ensure the smallest possible code size while still gaining the benefits of C++ templates.

Symbian C++ Only

Literals

The _L macro for literal descriptors should not be used except where it is convenient to do so in test code. Prefer to use the _LIT macro for literal descriptors.

Summary

Consistent use of these coding standards can make your code safer, easier for others to understand and maintain, and also more efficient. You can decide on a subset of these to apply to any new project but also refactor existing code to incorporate them. In both cases the quality of your projects should improve.

Related Info

These are just a sub-set of standards to get you started writing better code. There is a similar subset in the C++ Coding Standards Booklet (中文(简体), 日本語).
For code submitted to Symbian itself, there is a more extensive set of Coding Standards and Conventions that must be adhered to. If you found this useful then you can get more advice for good coding, complete with explanations for standards, in: C++ Coding Standards: 101 Rules, Guidelines and Best Practices by Herb Sutter and Andrei Alexandrescu (2005). Pearson Education, Inc.

© 2010 Symbian Foundation Limited. This document is licensed under the Creative Commons Attribution-Share Alike 2.0 license. See for the full terms of the license. Note that this content was originally hosted on the Symbian Foundation developer wiki.
http://developer.nokia.com/community/wiki/index.php?title=Symbian_C%2B%2B_Coding_Standards_Quick_Start&oldid=164250
Hello!

Here is my dilemma. Suppose that I've got the sources to my web application in a directory called '/home/webapp'. Inside that directory, I've got two packages:

/home/webapp
    /foo
        bar.py
    /moo
        mar.py

And 'mar.py' refers to some function inside 'bar.py' by doing:

import foo.bar

If I were using Python on the command line, I could set my PYTHONPATH variable to '/home/webapp' and run 'mar.py' just fine.

Now, let's say that my web application is mounted at context 'MyApp' in Webware. The import statement from above in 'mar.py' won't work now. If I change mar.py to use:

import MyApp.foo.bar

instead, then I can reference bar.py just fine. The problem, though, is that now when I go back to the command line with my PYTHONPATH set as above, it won't work (the import statement fails) since Python will be looking for a file at '/home/webapp/MyApp/foo/bar.py', which doesn't exist. It's a tough break for me, too, since I need to be able to execute these classes from the command line for my unit testing.

I've tried doing some sys.path magic inside mar.py to manually add '/home/webapp' to sys.path. That works great, but the one snag is that sometimes Webware will load the same class twice. Like there will be something in sys.modules called both 'MyApp.foo.bar' and 'foo.bar'. This makes total sense, since I've introduced ambiguity into sys.path. And normally, having the classes loaded twice isn't a big deal. But I've got a number of singleton classes that will wreak havoc if loaded more than once.

I suppose I could prepend 'MyApp' to every inter-package import I do, but I'd prefer to avoid hardcoding my web application context name into every class. Have any of you guys run into this sort of thing before? I was just curious about what solutions you've come up with, or if I'm going about this whole thing all wrong. :)

Thanks for your time!

Deepak Giridharagopal
University of Texas at Austin
Applied Research Laboratory
Deepak Giridharagopal University of Texas at Austin Applied Research Laboratory On Wed, 2002-07-24 at 19:36, Deepak Giridharagopal wrote: > Hello! > > Here is my dilemma. Suppose that I've got the sources to my web > application in a directory called '/home/webapp'. <snip> From what you've described, I have the impression that the stuff contained in your "webapp" is not servlets or anything which is directly Webware-specific code. Here's what I do (and what I think might work well for you): - only put servlets/psps in the context directory - keep your other code (which the servlets may use) whereever you want (call it /home/webapp) - set PYTHONPATH to include /home/webapp in the script with which you start the application server Now you can access the modules using foo.bar from within your webapp code, and also from within your servlets. Hope this helps, Jason -- Jason D. Hildebrand jason@...
http://sourceforge.net/mailarchive/message.php?msg_id=12378141
I am working on a problem that outputs how many terms of a series are needed to reach different approximations of Pi, such as 3.0, 3.1 and 3.14. Could anyone help me with this code, especially with using a double number in a while condition? The formula is:

Pi = 4 x (1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + 1/13 ...)

Edit: Didn't check code, had to delete a segment to show where I needed assistance.

Code:
#include <iostream>
#include <cmath>
#include <iomanip>

using namespace std;

double pif(double q);

int terms = 0;

int main()
{
    double q = 3.1;
    cout << "Pi Value Terms\n--------------------------\n";
    cout << fixed << setprecision(2);
    cout << pif(q) << "          " << terms << endl;
    return 0;
}

double pif(double q)
{
    int sign = 1;
    double n = 1.0;
    double pi = 0;
    while ( /* what would I put here to compare pi to a double? */ )
    {
        pi += sign * (1.0 / (2.0 * n - 1.0));
        sign = -sign;
        n++;
        terms++;
    }
    pi *= 4.0;
    return pi;
}

Changed the sign variation from a power to sign=-sign.
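One reasonable answer to the question in the while condition is: keep adding terms while the running approximation, rounded to the target's number of decimals, still differs from the target. Here is that stopping rule as a short Python sketch for comparison; the targets and decimal counts are my own choices, not from the thread:

```python
def leibniz_terms(target, decimals):
    """Count Leibniz-series terms until 4*sum, rounded to `decimals`
    places, first equals `target` (e.g. 3.14 at 2 decimals)."""
    total = 0.0
    sign = 1.0
    terms = 0
    while round(4.0 * total, decimals) != target:
        total += sign / (2.0 * terms + 1.0)  # terms: 1, 1/3, 1/5, ...
        sign = -sign
        terms += 1
    return terms

for target, decimals in [(3.0, 1), (3.1, 1), (3.14, 2)]:
    print(target, leibniz_terms(target, decimals))
```

Rounding to the target's precision sidesteps exact floating-point equality tests; an equivalent C++ condition would compare fabs(4.0*pi - q) against a tolerance. The series converges slowly, so pinning down 3.14 takes far more terms than 3.0 does.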
http://cboard.cprogramming.com/cplusplus-programming/84815-using-double-number-while-statement-printable-thread.html
28 December 2012 11:27 [Source: ICIS news]

By Glynn Garlick

LONDON (ICIS)--Players in the European polyvinyl chloride (PVC) market are not expecting a massive improvement in 2013 compared with 2012.

The industry has been adversely affected by the weakness in the downstream construction sector, which accounts for more than 50% of PVC consumption. Recovery has also been held back by government spending cuts and the eurozone debt crisis. Opinions vary on whether 2013 will be worse than 2012. Southern Europe has been more adversely affected than northwest Europe.

One seller said: “It is very obvious for everybody that 2013 will not bring the relief people are waiting for.

“Building markets represent two-thirds of our sales, and the expectation is a further deterioration in 2013.”

However, a buyer said it believed the market would be more or less the same as in 2012. “We really do not see any growth in our applications. However, the good news is we believe the… market will be in the same balance.”

Another producer said it believed 2013 was not going to be much better than 2012, but added that it thought the first half of the year would be slower, followed by a pick-up in the second half. The producer added that weaker players in the market could struggle.

Sellers’ margins have been very tight in 2012, with producers caught between high upstream costs, and buyers pushing for lower prices because of weak downstream demand. Price discussions have tended to centre on producers’ attempts to preserve or improve tight margins, and buyers’ efforts to achieve decreases because of weak demand.

Producers have cut their operating rates to match demand. Upstream chlorine utilisation rates were at 77.7% in November, compared with 80.6% in April, according to industry body Euro Chlor. The lower utilisation rates have led to tightness in the European market for chlorine’s co-product, caustic soda. This has been putting upward pressure on caustic soda prices.
Chlor-alkali producers are looking to increase caustic soda contract prices in the first quarter of 2013. This is a time when PVC prices are weak because of the economy and the usual downturn during the winter.

Some end-users have said they expect to cut their PVC usage in the first quarter of 2013 if demand in their downstream applications does not pick up, putting further pressure on producers. However, lower PVC operating rates have led some players to say that the market could quickly become tight if demand did pick up in 2013.

At the same time, it has been said that some consumers have not been destocking as much as expected in December as they expect PVC prices might climb in the first quarter, if feedstock ethylene costs rise on the back of higher crude oil
http://www.icis.com/Articles/2012/12/28/9624746/outlook-13-massive-improvement-not-expected-for-europe-pvc.html
must declare the default namespace, and it may declare additional namespaces used in your binding. bindings contains zero or more binding elements as children. Each binding child element defines a unique binding that can be attached to elements in other documents. An element can have only one binding attached explicitly. In the current implementation it is impossible to attach bindings to table sub-elements (rows, cells etc.); you can attach a binding only to the table element itself. See bug 83830 for workarounds.

- The default encoding in XML documents (including element…

field
A field is similar to a property, except that it should not have a getter or setter. It is useful as a simple holder for a value. Using this element is preferred over using an XML processing instruction.
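To make the structure concrete, here is a minimal illustrative skeleton of an XBL document (the id and field values are invented; only the element nesting follows the reference):

```xml
<?xml version="1.0"?>
<!-- Minimal illustrative XBL document; "example" and "counter" are made up. -->
<bindings xmlns="http://www.mozilla.org/xbl">
  <binding id="example">
    <implementation>
      <!-- A field is a simple holder for a value: no getter or setter. -->
      <field name="counter">0</field>
    </implementation>
  </binding>
</bindings>
```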
https://developer.mozilla.org/en-US/docs/XBL/XBL_1.0_Reference/Elements$revision/41846
Introduction

I highly recommend going through the first two parts before diving into this guide:

- A Step-by-Step Introduction to the Basic Object Detection Algorithms (Part 1)
- A Practical Implementation of the Faster R-CNN Algorithm for Object Detection (Part 2)

Table of Contents

- What is YOLO and Why is it Useful?
- How does the YOLO Framework Function?
- How to Encode Bounding Boxes?
- Intersection over Union and Non-Max Suppression
- Anchor Boxes
- Combining all the Above Ideas
- Implementing YOLO in Python

What is YOLO and Why is it Useful?

How does the YOLO Framework Function?

Now that we have a grasp on why YOLO is such a useful framework, let's jump into how it actually works. In this section, I have mentioned the steps followed by YOLO for detecting objects in a given image. For each grid cell, YOLO predicts a vector where:

- pc defines whether an object is present in the grid or not (it is the probability)
- bx, by, bh, bw specify the bounding box if there is an object
- c1, c2, c3 represent the classes. So, if the object is a car, c2 will be 1 and c1 & c3 will be 0, and so on.

How to Encode Bounding Boxes?

Intersection over Union and Non-Max Suppression

- Discard all the boxes having probabilities less than or equal to a pre-defined threshold (say, 0.5)
- For the remaining boxes:
  - Pick the box with the highest probability and take that as the output prediction
  - Discard any other box which has IoU greater than the threshold with the output box from the above step
  - Repeat step 2 until all the boxes are either taken as the output prediction or discarded

There is another method we can use to improve the performance of a YOLO algorithm – let's check it out!

Anchor Boxes
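The filtering steps above can be sketched in plain Python. This is an illustrative implementation only, not the yad2k code the article uses later:

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    # Step 1: discard boxes at or below the probability threshold.
    keep = [i for i, s in enumerate(scores) if s > score_thresh]
    # Step 2: repeatedly pick the best box, then drop boxes overlapping it.
    keep.sort(key=lambda i: scores[i], reverse=True)
    picked = []
    while keep:
        best = keep.pop(0)
        picked.append(best)
        keep = [i for i in keep if iou(boxes[best], boxes[i]) <= iou_thresh]
    return picked

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]
```

Here the second box overlaps the first with IoU 0.81, so it is suppressed, while the distant third box survives.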
Combining the Ideas

- Takes an input image of shape (608, 608, 3)
- Passes this image to a convolutional neural network (CNN), which returns a (19, 19, 5, 85) dimensional output
- The last two dimensions of the above output are flattened to get an output volume of (19, 19, 425):
  - Here, each cell of a 19 X 19 grid returns 425 numbers
  - 425 = 5 * 85, where 5 is the number of anchor boxes per grid
  - 85 = 5 + 80, where 5 is (pc, bx, by, bh, bw) and 80 is the number of classes we want to detect
- Finally, we do the IoU and Non-Max Suppression to avoid selecting overlapping boxes

Implementing YOLO in Python

You will need this zip file, which contains the pretrained weights required to run this code. Let's first define the functions that will help us choose the boxes above a certain threshold, find the IoU, and apply Non-Max Suppression on them. Before everything else, however, we'll first import the required libraries:

import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from skimage.transform import resize
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body

})
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
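The shape bookkeeping from the "Combining the Ideas" list can be sanity-checked with a small numpy sketch (dummy zeros stand in for real CNN output):

```python
import numpy as np

# Dummy stand-in for the CNN output: grid_h, grid_w, anchors, 5 + 80 classes.
cnn_out = np.zeros((19, 19, 5, 85))

# Flatten the last two dimensions: each grid cell now holds 425 numbers.
flat = cnn_out.reshape(19, 19, 425)
assert flat.shape == (19, 19, 425)

# For one cell and one anchor: pc, bx, by, bh, bw, then 80 class scores.
pc_and_box = cnn_out[0, 0, 0, :5]
class_scores = cnn_out[0, 0, 0, 5:]
assert pc_and_box.shape == (5,) and class_scores.shape == (80,)
```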
End Notes

Here's a brief summary of what we covered and implemented in this guide:

- We filter through all the boxes using Non-Max Suppression, keep only the accurate boxes, and also eliminate overlapping boxes.

25 Comments

Thanks for the article. How does YOLO compare with Faster-RCNN for detection of very small objects like scratches on a metal surface? My observation was – RCNN lacks an elegant way to compute anchor sizes based on the dataset… also I attempted changing scales, strides and box sizes; the results are bad. How do we custom train for YOLO?

Hi, YOLO is faster in comparison to Faster-RCNN. Their accuracies are comparatively similar. YOLO does not work pretty well for small objects. In order to improve its performance on smaller objects, you can try the following things: This may give better results.

Is this Yolo implementation GPU based? How do I train Yolo for a customized object? Is any yolo code or tutorial using Tensorflow on GPU available? I only want to detect vehicles. Please guide me.

Hi Hamza, Yes, I have trained this model on GPU. For customized training, you can pass your own dataset and the bounding boxes and train the model to get weights.

Hello Dear Pulkit Sharma, Hope you will be in good health. Kindly explain the steps for the training of Yolo on GPU for vehicles only. Where is the training code file? Explain the step of passing the dataset? Explain the step of passing the bounding box?

Hi Hamza, Please refer to this GitHub link.

Hi Pulkit, The link to download the back-end weights is not working. Kindly guide me.

Hi Hamza, The link is working fine at my end. Can you please share what is the error that you are getting?

Dear Pulkit, The link to download pretrained weights for the backend, which is a OneDrive link, does not have said file.

Hi Hamza, Once you open the link, there will be an option to download the zip file on the top right side.

Hi Pulkit, Very nicely explained article.
In your Training paragraph, you have said that the output vector for 5 anchor boxes and 5 classes will be 3X3X25; shouldn't it be 3X3X50 instead, 50 comprising [object (y/N) (1), coordinates of bounding box (4), 5 classes (1 for each class)] X no. of anchor boxes, i.e. 10 X 5 = 50? Pls correct me if wrong

Hi Priya, That's correct. I have updated it in the article. Thanks for pointing it out

I can't find any zip file of pretrained weights for the backend either. So, what should I do?

Hi Rak, The pretrained weights can be found in this link. I have mentioned it in the article as well.

I followed the instructions as given but there are a lot of errors in classification. It is identifying objects but not correctly. Any reason why this might be happening?

Hi Arijeet, Can you share some of the results? You can try to train your own model as well instead of using the trained weights.

Hello Pulkit Sharma, How do you rate YOLO for face detection (a face also being a type of object), especially in video frames from surveillance feeds? Kindly also comment on the speed (real-time performance) of this. What type of h/w platform would suit? How do you compare its performance with that of the Viola-Jones framework (keeping the human face as the object in mind)? regards,

Thank you. Hi Valli Kumar, I have not yet tested YOLO for detecting faces. For object detection it is faster than most of the other object detection techniques so I hope it will also work well for face detection. I am currently working on the same project. First I will try different RNN techniques for face detection and then will try YOLO as well. Only then can we compare it with the other techniques. I will share the results as soon as I am done with this project.

Hi Pulkit, The article explains the concept with ease. Thanks for the nice article. I want to implement YOLO v2 on a customized dataset. The input dimension is greater than the required size of YOLOv2 (400×400 pixels).
My doubt is: should I resize the images and annotate them, or is it fine to train the model with the original annotations and resized images?

Hi Ganesh, If you are resizing the image, the annotations should also be resized accordingly. If you use the original annotations, you will not get the desired results.

How do I train our own dataset with yolo v3?

Hi Muhammed, You can refer to this article to learn how to train yolo on a custom dataset.

I'm not able to load yolo.h5. When I do so, the kernel crashes. What do I do in this case?

Hi Shree, What is the configuration of your system? Does it have a GPU? If not, try to use Google Colab which will give you access to a free GPU.

Hi PULKIT, I load the model using my own custom pre-trained weights instead of yolo.h5. My model has only 2 classes. I ran your code and I got an error because of the "yolo_head" function in keras_yolo.py. The error looks like:

Traceback (most recent call last):
  File "C:\Python36\Food-Diary_Project\YOLO_Food-Diary_new\YOLO_food_prediction_new.py", line 87, in
    yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
  File "C:\Python36\Food-Diary_Project\YOLO_Food-Diary_new\yad2k\models\keras_yolo.py", line 109, in yolo_head
    conv_index = K.cast(conv_index, K.dtype(feats))
  File "C:\Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 649, in dtype
    return x.dtype.base_dtype.name
AttributeError: 'list' object has no attribute 'dtype'

How can I solve this issue?
https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/
Details

- Reviewers
-
- Commits
  - rG3cfeaa4d2c17: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
  - rL368119: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
  - rL368021: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests
  - rGc22d9666fc3e: [yaml2obj] Move core yaml2obj code into lib and include for use in unit tests

Diff Detail

Event Timeline

Thanks for taking this on. I look forward to being able to use this in lldb tests. I'm not an owner here, but the main question I have is about the library-readiness of the code you're moving. I see it's doing things like spewing errors to stderr and even calling exit(), neither of which is a very nice thing to do for a library (even if it's just a "test" library). Do you have any plans for addressing that?

I suggested to Alex offline that he not try to do any more than the bare minimum to get this moved over. Certainly more work needs doing to it, but I think that can be done at a later point rather than upfront when moving it, given how useful it will be to have when working on the libObject code if nothing else.

Changed return of convertYAML to Error. Changed name of files from yaml2X to XEmitter. Wrapped unexported functions in anonymous namespace. clang-format. git clang-format does not understand moved files it turns out, so it formatted things that I didn't touch. I figure if this is going to happen it might as well be now, though. I can change this back though.

I'd like to have a function which takes a (yaml) string, and gives raw bytes in return. So, that would be slightly less than what you're doing in the unit test here (as you're also re-parsing those bytes into an object file), but I am guessing you're going to need the re-parsing bits for the work you're doing anyway. Also, due to how lldb's object parsers work, I'll need to save that stream of bytes into a file, but that is something that can be easily handled on the lldb side too.
As for object types, my main interest is ELF files, as we already have unit tests using those (by shelling out to yaml2obj). However, having COFF and MachO support would be nice too, and if it's available people might be inclined to use it. Minidump is interesting too, but I already have a mechanism for using that. I don't know if that answers your question. If it doesn't, you'll have to ask me something more specific. :)

I think it's fine to clang-format the whole thing (since git clang-format isn't smart enough), but ideally it'd be a separate patch that lands first, so that this just shows the changes required for moving directories.

Sorry, this patch broke tests, so I reverted it in r368035.

I fixed those here: rG9eee4254796df1a34a0452fa91e8ce4e38b6a5bb. Could you re-land or do I need to do it?
https://reviews.llvm.org/D65255?id=211998
Debugging native library linkage errors

When using native libraries in Java, you'll sooner or later run into linkage errors. In this blog I will go over the two most common cases in which this error can occur:

- When loading a library.
- When looking up a symbol in the library.

I will discuss possible causes and give some tips for debugging.

Linkage error when trying to load a library

The first kind of linkage error we will look at typically manifests itself as an UnsatisfiedLinkError. You might commonly encounter this error when calling System.loadLibrary(String), such as:

public class Main {
    public static void main(String[] args) {
        System.loadLibrary("foo");
    }
}

The error message has the form:

Exception in thread "main" java.lang.UnsatisfiedLinkError: no foo in java.library.path: <list of paths>
        at java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2447)
        at java.base/java.lang.Runtime.loadLibrary0(Runtime.java:809)
        at java.base/java.lang.System.loadLibrary(System.java:1893)
        at Main.main(Main.java:4)

This error typically means one of two things:

- An incorrect library name is being used.
- The java.library.path system property is not set correctly.

1. The incorrect library name is being used

When calling System.loadLibrary(String) the name of the library that is passed as an argument will be mapped into a file name in a platform-specific manner, and then loaded. Some common examples are:

- On Windows, the library name will get the .dll suffix, e.g. foo -> foo.dll.
- On Linux, the library name will get the lib prefix and the .so suffix, e.g. foo -> libfoo.so.
- On Mac, the library name will get the lib prefix and the .dylib suffix, e.g. foo -> libfoo.dylib.

So, for instance, to load a library file called libfoo.so on Linux, you would have to use System.loadLibrary("foo");

To see how library names are mapped on other platforms, or to see how a particular library name is mapped to a file name, you can call System.mapLibraryName(String).

2.
The java.library.path system property is not set correctly

The library path that was used to look up the library is printed in the exception message (<list of paths> above). It is a list of directories that was used to find the native library. One of those directories should contain the library you're trying to load.

If the library you're trying to load is located in another directory, you can set the java.library.path system property to the directory that contains your library using the -Djava.library.path=/path/to/lib VM argument, where multiple directories can be added separated by ; on Windows, and : on other platforms, e.g. -Djava.library.path=/path1:/path2:/path3.

If the error keeps occurring, check the exception message. If it does not contain the library path you expected, you might be passing the command line option as a program argument instead of a VM argument by accident (VM arguments should be passed before the main class), or it could be a problem with missing quotes in the shell you're using (e.g. powershell requires passing system properties in quotes, such as '-Dmy.prop=val', to avoid being picked up as other syntax).

Linkage error when looking up a symbol

The second kind of linkage error you can commonly run into is thrown when a function in a library can not be found. For instance, if we extend the previous example program as follows:

public class Main {
    public static void main(String[] args) {
        System.loadLibrary("foo");
        bar();
    }

    static native void bar();
}

Even if the call to System.loadLibrary succeeds and the library is loaded, calling the function bar might fail:

Exception in thread "main" java.lang.UnsatisfiedLinkError: 'void Main.bar()'
        at Main.bar(Native Method)
        at Main.main(Main.java:6)

This can have one of 3 causes:

- The library that is being used does not contain the function symbol.
- The function is not exported.
- The wrong library is being loaded.

1.
The library does not contain the function The JVM will derive the name of the function symbol it looks for from the package, class, and method name. The format it will have is roughly like Java_my_package_MyClass_myMethod. Another thing to note is that underscores in any of the Java names will be translated into the symbol name as _1. For JNI a good way to make sure the name of the function in the native library is correct, is to regenerate the header file for the class with the method that fails to link (using javac -h <path>) and check to make sure that the function name in the header file matches the function name of the implementation (in the .c/.cpp file). Since the function name is derived from the package, class, and method name, moving the class to a different package, or renaming the package, class, or method, are all things that can make the name that the JVM expects go out of sync with the name of the function in the library. Another way to check which symbol the JVM is looking for, is with the -Xlog:library=info VM option (added in JDK 15). This will print out messages about which symbols the JVM tries to look up. Such as this: [0.622s][info][library] Failed to find Java_my_package_MyClass_myMethod in library with handle 0x00007ff8a24f0000 These log messages can also be used to check the name of the function that the JVM is looking for. 2. The function is not exported A second reason why a function might not be found inside the library, even if the function name is correct, is because the function is not exported from the library. For instance on Windows, using the MSVC compiler, it is important to declare functions that need to be accessible from outside of the library with __declspec(dllexport). For JNI the JNIEXPORT macro can be used for this, which will do the right thing. This macro will automatically be included in the function declaration generated by javac -h. 
Checking that the function you're trying to call is exported becomes more important if you're working with the Foreign Linker API, since the function in the library is likely not declared with JNIEXPORT.

Listing the functions in a native library

There are several tools that can be used to check which functions are contained in a native library, which can be useful to debug one of the above problems.

- On Windows, the dumpbin tool that comes with Visual Studio can be used with /EXPORTS <library file> to print out exported symbols. The Dependencies tool, which also has a GUI, can also be used.
- On Linux the nm tool can be used to print out the symbols in a library.
- On Mac, I believe otool can be used (but I have no experience using this).

3. The wrong library was loaded

Finally, if a symbol can not be found inside a library, but the symbol name is correct, and it is exported from the library, it might be because the wrong library is being loaded. This can happen for instance because there are 2 versions of the same library found on the java.library.path, and the wrong one is being picked up. Or because the library that you're trying to load has the same name as one of the libraries bundled with the JDK (such as attach). Again the -Xlog:library=info VM flag comes to our rescue, because it also prints out information about which libraries are being loaded, such as:

[0.066s][info][library] Loaded library C:\Program Files\Java\jdk-18\bin\jimage.dll, handle 0x00007ff8bfc00000

If the library you're trying to use is loaded, but the path in the loading message is not what you expected, it's likely that the wrong library file is being loaded. In the case that the same library is found multiple times on the java.library.path, one solution is to remove the directories with the incorrect library files from the java.library.path, or to remove those library files.
If this is not practical because something else needs those library paths or files (such as the JVM itself, in the case it is a library bundled with the JVM), it might be possible to rename the library you’re trying to use instead, so that the name no longer conflicts with others. If both changing the library path, or changing the name of the library are not an option, then you could finally manually construct the absolute path of the library you’re trying to use, and use System.load(String) to load the library directly.
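To recap the two lookup rules from above, here is a hypothetical helper (not part of the JDK or of this post's code) that prints the platform-specific library file name and approximates the JNI symbol mangling for the common cases; it deliberately ignores overload suffixes and unicode escapes:

```java
public class LinkageDebug {
    // Hypothetical helper: approximate the JNI symbol name the JVM derives
    // from a class and method name. Underscores become "_1" first, then
    // package separators (dots) become "_".
    static String jniSymbol(String qualifiedClass, String method) {
        String cls = qualifiedClass.replace("_", "_1").replace(".", "_");
        return "Java_" + cls + "_" + method.replace("_", "_1");
    }

    public static void main(String[] args) {
        // Platform-specific file name, e.g. libfoo.so on Linux, foo.dll on Windows.
        System.out.println(System.mapLibraryName("foo"));
        // Prints: Java_my_package_MyClass_myMethod
        System.out.println(jniSymbol("my.package.MyClass", "myMethod"));
    }
}
```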
https://jornvernee.github.io/java/panama-ffi/panama/jni/native/2021/09/13/debugging-unsatisfiedlinkerrors.html
I can get it to function using whole numbers fine but it seems that I need to parse the input for height and I'm not really sure how to do that. I appreciate any help you can give as this has been driving me crazy and I'm sure it's something obvious. This is what I have so far:

import java.util.Scanner;

public class CalculateBMIConsole {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        int height, weight;
        double bmi;
        Scanner keyboard = new Scanner(System.in);
        System.out.print("What is your height in ft/in. You can say for"
                + " example 5'9\" :");
        height = keyboard.nextInt();
        System.out.print("What is your weight in lb :");
        weight = keyboard.nextInt();
        bmi = weight * 703 / (height * height);
        System.out.println("Your bmi is : " + bmi);
    }
}

This post has been edited by ndc85430: 09 February 2018 - 10:06 AM
Reason for edit:: Added code tags. Please do this yourself in future.
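One possible approach (a sketch, not from the original post): read the height as a whole line instead of with nextInt(), and parse the feet and inches out of the 5'9" format before computing BMI:

```java
public class ParseHeight {
    // Sketch: turn input like 5'9" into total inches. The parsing strategy
    // (strip the inch mark, split on the foot mark) is my suggestion.
    static int parseHeightInches(String input) {
        String cleaned = input.replace("\"", "").trim();
        String[] parts = cleaned.split("'");
        int feet = Integer.parseInt(parts[0].trim());
        int inches = parts.length > 1 ? Integer.parseInt(parts[1].trim()) : 0;
        return feet * 12 + inches;
    }

    public static void main(String[] args) {
        System.out.println(parseHeightInches("5'9\"")); // 69
    }
}
```

In the original program you would read the height with keyboard.nextLine() rather than nextInt(), pass it through a parser like this, and then use the result (in inches) in the BMI formula.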
https://www.dreamincode.net/forums/topic/409217-user-needs-to-be-able-to-enter-height-like-this-5-9/
Description

Feedtosis fetches RSS and Atom feeds with an easy-to-use interface. It uses FeedNormalizer for parsing, and Curb for fetching. It helps by automatically using conditional HTTP GET requests as well as by reliably pointing out which entries are new in any given feed.

Feedtosis is designed to help you with book-keeping about feed fetching details so that things like using HTTP conditional GET are trivial. It has a simple interface, and remains a lightweight component that delegates to FeedNormalizer for parsing feeds and the fantastic taf2-curb library for fetching feeds.

Installation

Assuming that you've followed the directions on gems.github.com to allow your computer to install gems from GitHub, the following command will install the Feedtosis library:

sudo gem install jsl-feedtosis

Usage

Feedtosis is easy to use. Just create a client object, and invoke the "fetch" method:

require 'feedtosis'
client = Feedtosis::Client.new('')
result = client.fetch

result will be a Feedtosis::Result object which delegates methods to the FeedNormalizer::Feed object as well as the Curl::Easy object used to fetch the feed. Useful methods on this object include entries, new_entries and response_code among many others (basically all of the methods that FeedNormalizer::Feed and Curl::Easy objects respond to are implemented and can be called directly, minus the setter methods for these objects).

Note that since Feedtosis uses HTTP conditional GET, it may not actually have received a full XML response from the server suitable for being parsed into entries. In this case, methods such as entries on the Feedtosis::Result will return nil. Depending on your application logic, you may want to inspect the methods that are delegated to the Curl::Easy object, such as response_code, for more information on what happened in these cases. Remember that a response code of 304 means "Not Modified".
In this case, you should expect "entries" and "new_entries" to be nil, since the resource wasn't downloaded according to the logic of HTTP conditional GET.

On subsequent requests of a particular resource, Feedtosis will update new_entries to contain the feed entries that we haven't seen yet. In most applications, your program will probably call the same batch of URLs multiple times, and process the elements in new_entries.

You will most likely want to allow Feedtosis to remember details about the last retrieval of a feed after the client is removed from memory. Feedtosis uses Moneta, a unified interface to key-value storage systems, to remember "summaries" of feeds that it has seen in the past. See the document section on Customization for more details on how to configure this system.

Customization

Feedtosis stores summaries of feeds in a key-value storage system. If no options are included when creating a new Feedtosis::Client object, the default is to use a "memory" storage system. The memory system is just a basic Ruby Hash, so it won't keep track of feeds after a particular Client is removed from memory. To configure a different backend, pass an options hash to the Feedtosis client initialization:

url = ""
f = Feedtosis::Client.new(url, :backend => Moneta::Memcache.new(:server => 'localhost:1978'))
res = f.fetch

This example sets up a Memcache backend, which in this case points to Tokyo Tyrant on port 1978. Generally, Feedtosis supports all systems supported by Moneta, and any one of the supported systems can be given to the moneta_klass parameter. Other options following backend are passed directly to Moneta for configuration.

Implementation

Feedtosis helps to identify new feed entries and to figure out when conditional GET can be used in retrieving resources.
In order to accomplish this without having to require that the user store information such as etags and dates of the last retrieved entry, Feedtosis stores a summary structure in the configured key-value store (backed by Moneta). In order to do conditional GET requests, Feedtosis stores the Last-Modified date, as well as the ETag of the last request in the summary structure, which is put in a namespaced element consisting of the term 'Feedtosis' (bet you won't have to worry about name collisions on that one!) and the MD5 of the URL retrieved.

It can also be a bit tricky to decipher which feed entries are new since many feed sources don't include unique ids with their feeds. Feedtosis reliably keeps track of which entries in a feed are new by storing (in the summary hash mentioned above) an MD5 signature of each entry in a feed. It takes elements such as the published-at date, title and content and generates the MD5 of these elements. This allows Feedtosis to cheaply compute (both in terms of computation and storage) which feed entries should be presented to the user as "new". Below is an example of a summary structure:

{
  :etag => "4c8f-46ac09fbbe940",
  :last_modified => "Mon, 25 May 2009 18:17:33 GMT",
  :digests => [["f2993783ded928637ce5f2dc2d837f10", "da64efa6dd9ce34e5699b9efe73a37a7"]]
}

The data stored by Feedtosis in the summary structure allows it to be helpful to the user without storing lots of data that are unnecessary for efficient functioning. The summary structure keeps an Array of Arrays containing digests of feeds. The reason for this is that some feeds, such as the Google blog search feeds, contain slightly different but often-recurring results in the result set. Feedtosis keeps complete sets of entry digests for previous feed retrievals. The number of digest sets that will be kept is configurable by setting the option :retained_digest_size on Feedtosis client initialization.
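As an illustration of the digest idea described above (this is a sketch, not Feedtosis' actual code; the entry field names are invented):

```ruby
require 'digest/md5'

# Build an MD5 "signature" for a feed entry from its date, title and
# content, and use previously-seen digests to pick out the new entries.
def entry_digest(entry)
  Digest::MD5.hexdigest([entry[:published], entry[:title], entry[:content]].join("|"))
end

def new_entries(entries, seen_digests)
  entries.reject { |e| seen_digests.include?(entry_digest(e)) }
end

old_entry = { published: "2009-05-25", title: "Hello", content: "world" }
new_entry = { published: "2009-05-26", title: "Fresh", content: "post" }
seen = [entry_digest(old_entry)]
puts new_entries([old_entry, new_entry], seen).length  # 1
```

Because only fixed-size digests are stored, deciding which entries are "new" is cheap in both computation and storage, even for feeds without unique ids.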
HTML cleaning/sanitizing Feedtosis doesn't do anything about feed sanitizing, as other libraries have been built for this purpose. FeedNormalizer has methods for escaping entries, but to strip HTML I suggest that you look at the Ruby gem “sanitize”. Credits Thanks to Sander Hartlage (GitHub: Sander6) for useful feedback early in the development of Feedtosis. Feedback Please let me know if you have any problems with or questions about Feedtosis. Author Justin S. Leitgeb, [email protected]
http://www.rubydoc.info/gems/feedtosis/frames
Update for ES6 modules

Inside native ECMAScript modules (with import and export statements) and ES6 classes, strict mode is always enabled and cannot be disabled.

Original answer

This might be helpful if you have to mix old and new code 😉 So, I suppose it's a bit like the "use strict" you can use in Perl (hence the name?): it helps you make fewer errors, by detecting more things that could lead to breakages. Strict mode is now supported by all major browsers.

If people are worried about using use strict it might be worth checking out this article: ECMAScript 5 'Strict mode' support in browsers. What does this mean?

Hope you learned something from this post. Follow Programming Articles for more!
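A small self-contained illustration of the kind of breakage strict mode detects (my example, not from the original answers): assigning to an undeclared variable throws a ReferenceError instead of silently creating a global.

```javascript
"use strict";

// With "use strict" in force, assigning to an undeclared identifier
// throws a ReferenceError instead of silently creating a global.
function strictDemo() {
  try {
    oopsUndeclared = 1; // illustrative typo'd variable name
    return "no error";
  } catch (e) {
    return e.name;
  }
}

console.log(strictDemo()); // "ReferenceError"
```

Without the "use strict" directive (in old-style sloppy-mode scripts), the same assignment would quietly create a global variable, which is exactly the class of bug strict mode is meant to surface.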
https://programming-articles.com/what-does-use-strict-do-in-javascript-and-what-is-the-reasoning-behind-it-answered/
Voltage readings are obtained from an electrical substation once every hour for six hours (so there are six readings). Write a C program to perform the following checks on the substation:

a) display all voltages that differ from the average by more than 10% of the average.
b) display all pairs of consecutive hours where the change from the voltage at one hour to the next is greater than 15% of the average.

Example 1
Enter 6 voltages: 210.1 223.2 189.6 206.2 235.1 215.0
The average is 213.2 volts. 10% = 21.3 volts. 15% = 32.0 volts.
The following problems occurred:
1. Voltage at hour 3 was 189.6 volts (difference of 23.6 volts).
2. Voltage at hour 5 was 235.1 volts (difference of 21.9 volts).
3. Voltage change from hour 2 to hour 3 was 33.6 volts.

Example 2
Enter 6 voltages: 233.1 201.0 221.5 240.2 222.7 208.1
The average is 221.1 volts. 10% = 22.1 volts. 15% = 33.2 volts.
No problems were encountered.

#include <stdio.h>
#include <math.h>

int i;
float volt[6];
float avg, avg10, avg15, total, a, b;

int main()
{
    total = 0;
    avg = 0;
    printf("Enter 6 voltages of the machine\n");
    for (i = 0; i < 6; i++) {
        printf("Type voltage %d: ", i + 1);
        scanf("%f", &volt[i]);
        total = total + volt[i];
    }
    avg = total / 6;
    avg10 = (avg * 10) / 100;
    avg15 = (avg * 15) / 100;
    printf("------------------------------------------\n");
    printf("The machine's average voltage is %.2f\n", avg);
    printf("10%% of the average is %.2f\n", avg10);
    printf("15%% of the average is %.2f\n\n\n", avg15);
    for (i = 0; i < 6; i++) {
        a = fabs(volt[i] - avg);
        if (a > avg10) {
            printf("\nVoltage at hour %d was %.2f volts (difference of %.2f volts)\n\n", i + 1, volt[i], a);
        }
    }
    for (i = 0; i < 5; i++) {
        b = fabs(volt[i + 1] - volt[i]);
        if (b > avg15) {
            printf("\nVoltage change from hour %d to hour %d was %.2f\n\n", i + 1, i + 2, b);
        }
    }
    return 0;
}

// add the following variable at the beginning of the program:
int voltageProblem = 0;

// add the following in each for loop, inside the test for voltage out of limits
// loop 1 code shown here:
for (i = 0; i < 6; i++) {
    a = fabs(volt[i] - avg);
    if (a > avg10) {
        printf("\nVoltage at hour %d was %.2f volts (difference of %.2f volts)\n\n", i + 1, volt[i], a);
        voltageProblem = 1;
    }
}

// after the two loops add the following:
if (voltageProblem == 0) {
    printf("No problems were encountered.\n");
}
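The two checks can also be factored into a small helper that just counts the problems, which makes the logic easy to verify against the worked examples. The function name and array-based interface below are my own, not from the original post:

```c
#include <math.h>

/* Count how many problems the substation checks find:
 * - readings differing from the average by more than 10% of the average
 * - consecutive-hour changes greater than 15% of the average */
int count_voltage_problems(const float volt[], int n)
{
    float total = 0.0f, avg, avg10, avg15;
    int i, problems = 0;

    for (i = 0; i < n; i++)
        total += volt[i];
    avg = total / n;
    avg10 = avg * 0.10f;
    avg15 = avg * 0.15f;

    for (i = 0; i < n; i++)
        if (fabsf(volt[i] - avg) > avg10)
            problems++;            /* check (a) */
    for (i = 0; i < n - 1; i++)
        if (fabsf(volt[i + 1] - volt[i]) > avg15)
            problems++;            /* check (b) */
    return problems;
}
```

Against Example 1 this finds three problems (hours 3 and 5, plus the hour 2 to hour 3 change), and against Example 2 it finds none, matching the expected output above.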
https://www.codeproject.com/Forums/1647/C-Cplusplus-MFC?pageflow=FixedWidth&df=90&mpp=25&sort=Position&view=Normal&spc=Relaxed&prof=True&select=5212794&fr=7601
Hello,

On Mon, 16 Nov 2020 19:47:32 +1100 Chris Angelico [email protected] wrote:

> def sub():
>     # Gimme powerz back
>     from __future__ import const
>     BAR: const = 2
>
> SyntaxError: from __future__ imports must occur at the beginning of the file
>
> If it's just a normal symbol imported from a module, it would have to
> be from something other than __future__.

Right, so the story unfolds: for some time I was thinking that it would be nice to have something like "from __present__ import ...". Or perhaps __lang__. Why, __python__ would work too, if we agree that CPython != Python.

For comparison, in MicroPython, "const" lives in:

    from micropython import const

(And originally it's a (pseudo/intrinsic/macro)function, from pre-variable-annotation times:

    FOO = const(1)

)
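For code that needs to run on both MicroPython and CPython, a common compatibility trick is to fall back to an identity function when the `micropython` module is absent. This shim is a well-known pattern in the MicroPython ecosystem, not something proposed in the thread itself:

```python
# Try MicroPython's real const(), which lets its compiler fold the value
# inline; on CPython, fall back to a no-op identity function so the same
# source runs unchanged (at the cost of no constant folding).
try:
    from micropython import const
except ImportError:
    def const(x):
        return x

FOO = const(1)
```

On CPython the import fails, the fallback is defined, and `FOO` is simply `1`; on MicroPython the same lines let the compiler substitute the constant at compile time.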
https://mail.python.org/archives/list/[email protected]/message/UNET56ITQFOEMAX5JS5NHJG42H6PSGI4/
Jacob Moen writes:
>.

Hi Jacob,

currently, I cannot reproduce this problem. I installed Ogre3D 1.7.2 and use the following file:

    #include <OGRE/Ogre.h>

    typedef double Real;

    void doSomething(Real param);

    Ogre::Ent // complete here

and I get the following completions:

    class Entity {}
    struct EntityMaterialLodChangedEvent {}
    struct EntityMeshLodChangedEvent {}

Also, I don't get any errors during completion or from the idle highlight function. Which version of CEDET are you using? Please try the latest checkout from the bzr repo, which you can get via

    bzr checkout bzr://cedet.bzr.sourceforge.net/bzrroot/cedet/code/trunk/

If you still get errors with CEDET from bzr, call M-x semantic-debug-idle-function to get a backtrace from the idle function, and post it here.

-David

Hi Cedet'ers.

Is there anything I am missing? If I don't include Ogre headers, all is fine. Qt is fine too.

Cheers
Jacob
http://sourceforge.net/p/cedet/mailman/cedet-semantic/?viewmonth=201101&viewday=9
Walkthrough: Downloading Assemblies On Demand with the ClickOnce Deployment API

By default, all of the assemblies included in a ClickOnce application are downloaded when the application is first run. You may, however, mark some assemblies as optional and download them on demand, using classes in the System.Deployment.Application namespace, when the common language runtime (CLR) demands them.

To create a project using an on-demand assembly

Open the .NET Framework SDK Command Prompt.

Using Notepad or another text editor, define a class named DynamicClass with a single property named Message. Save the text as a file named ClickOnceLibrary.cs or ClickOnceLibrary.vb, depending on the language you use.

Compile the file into an assembly.

Create a new file using your text editor and enter the code for a form that loads the assembly on demand. Save the file as either Form1.cs or Form1.vb and compile it into an executable.

To mark assemblies as optional in your ClickOnce application using the Manifest Generation and Editing Tool - Graphical Client (MageUI.exe)

Create your ClickOnce manifests as described in Walkthrough: Deploying a ClickOnce Application Manually. Name your application ClickOnceOnDemand.

Before closing MageUI.exe, select the tab containing your deployment's application manifest, and within that tab select the Files tab.

On the Files tab, find OnDemandAssembly.dll in the list of application files and set its File Type column to None. For the Group column, type ClickOnceOnDemand.
http://msdn.microsoft.com/en-us/library/ms228997(v=vs.85).aspx
So let's take a look at a new plugin I just created that gets you the IMEI of your Android phone. First here is the 1.4.1 version of the code: and now the 1.5+ version: As you can see the code isn't very different. Really the only changes are that PhoneGap has been replaced with cordova. There is actually a shim in place so that you can still use the PhoneGap object, but that is going away for the 2.0 release, so we might as well get on that train now. As well, the addConstructor/addPlugin methods won't really be needed come 2.0, but let's leave them in for now.

Alright, so the JavaScript code wasn't much different, so let's dive into the Java code now. First the 1.4.1 version: and now the 1.5+ version: Again, not too many changes. Simply changing the imports from:

import com.phonegap.api.Plugin;
import com.phonegap.api.PluginResult;

to:

import org.apache.cordova.api.Plugin;
import org.apache.cordova.api.PluginResult;

This shouldn't be necessary for your plugin as I added code in the cordova.jar file so that every class in the org.apache.cordova.api package has a sub-class in com.phonegap.api. Once again this will be going away in the 2.0 release, so we should go ahead and make the change now.

Easy, right? Well, kinda, but I'm being a bit disingenuous, as my example does not address a key change between versions 1.4.1 and 1.5.0. In PhoneGap 1.4.1 the member variable this.ctx was of type PhonegapActivity. If you walked up PhonegapActivity's inheritance chain you'd see that android.content.Context is one of its superclasses. This was particularly useful in a number of Plugins. In Cordova 1.5+ the member variable this.ctx is a CordovaInterface, so plugins that are passing this.ctx into methods that are expecting a Context will complain. The fix for this in your Java code is to replace:

this.ctx

with:

this.ctx.getContext()

or:

this.ctx.getIntent()

with:

((DroidGap)this.ctx).getIntent()

wherever required.
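The shim mentioned above boils down to exposing one object under two names. Here is a toy stand-in (not the real cordova.js; the object shapes are invented for illustration) showing why plugin code written against the PhoneGap name keeps working in 1.5.x:

```javascript
// Minimal sketch of the naming shim: the same object is reachable as
// both "cordova" and "PhoneGap", so legacy plugin registrations still work.
var cordova = {
  plugins: {},
  addPlugin: function (name, obj) { this.plugins[name] = obj; }
};

// Deprecated alias, slated for removal in the 2.0 release.
var PhoneGap = cordova;

// A legacy-style registration through the old name lands in the same
// plugin table the new name sees.
PhoneGap.addPlugin("imei", { get: function () { return "registered"; } });
```

Because `PhoneGap` and `cordova` reference the same object, there is exactly one plugin registry; the alias is purely a naming bridge, which is why dropping it in 2.0 only breaks code that never switched names.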
These changes were predicated by refactoring of the code in order to enable Cordova to be an embeddable component. That is, at some point in the future you will be able to create an Android application that can embed the Cordova component. This will allow you to mix native and hybrid development more easily. Update 2012/04/23: Paul Beusterien pointed out that I forgot to mention a couple of steps to get the IMEI plugin working. First add the following line to res/xml/plugins.xml: <plugin name="imei" value="com.simonmacdonald.imei.IMEIPlugin"/>and make sure you have the proper permissions setup in your AndroidManifest.xml: <uses-permission android: 28 comments: Does it works on iPhone also? Can we get IMEI from iOS devices? what all other device information we can get (like phonenumber)? @R Ramana No, this plugin will not work on iOS. Android plugins are written in Java which does not run on iOS. You'd need to re-implement this in Objective-C. Check out this StackOverflow answer: hi how to find mobile number using phonegap..in android please help me.. @GreaterJun You would do: TelephonyManager tMgr =(TelephonyManager)mAppContext.getSystemService(Context.TELEPHONY_SERVICE); mPhoneNumber = tMgr.getLine1Number(); but there is no guarantee that the SIM will return the number. A lot of times you'll get null. Hi Simon, I am able to install condova application on my android phone to access GPS, camera etc. But i am not able to get the IMEI plugin to work. This is what i am using: 1) Android version 2.3.3 2) Cordova 18 3) IMEI Plugin version 1.5 4) I added the plugin name to the plugin xml. 5) I added the script inclusion to index.html. On calling the function window.plugins.imei.get it always goes into the failure function. Please help. Thanks Jaskaran @Jaskaran Did you add read phone state permission to your manifest.xml? That is the only thing you didn't mention. Hi Simon, Thanks for replying. 
I added the below permission to the manifest: thanks jaskaran @karan Sorry the XML did not come across in the comment. Hi, Sorry about that. I gave this permissiong(not including the angular brackets): uses-permission android:name="android.permission.READ_PHONE_STATE" thanks karan Hi Simon, I needed some more help :-) I needed to embedd native controls along with the html(index.html) for android. Is there any wau to do this? Thanks alot karan @karan By native controls do you mean the menu? Hi Simon, Yes native controls like: 1) Menu 2) Custom menu 3) Toolbar etc I need to do this in Android. I really appreaciate your help and time. Thanks karan @karan You can use the native implementation which is detailed here: and when you want to call from Java to JavaScript you do a: this.sendJavascript(myJScode); Hi Simon, This is GREAT. Thank you very much! But needed to ask something.... I have been able to make a mixed UI from the Activity Class itself. Half in native controls and half in Phonegap html. My code is : public class Jaskaran823Activity extends DroidGap { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); View header = View.inflate(getContext(), R.layout.main, null); root.addView(header); super.loadUrl(""); this.sendJavascript("javascript:yourFunction('load')"); super.appView.getSettings().setJavaScriptEnabled(true); super.appView.addJavascriptInterface(this, "MyCls"); } } I am now able to communicate both ways from java to javascript and vice versa. Now can i do the same using phonegap plugins? I mean can i put the native controls in a plugin? And embedd native-UI inside the phone-gap html from within the plugins? Something like this seems to be done for iPhone. But i need to do this in android... Is there any exmaple of puttinh native UI from within a plugin in android? Please help. 
Thanks Karan @karan I would avoid using addJavascriptInterface() as there are some problems with that code working on all versions of Android. You should switch to using a PhoneGap Plugin which works around these issues if they exist. So use a Plugin to communicate from JavaScript to Java and return a Plugin result or alternatively you can use sendJavaScript. That is what I do when I implement a native menu. You can check out Joe's new project which shows how to embed a CordovaWebView in an Android project: or look at Michael Brooks menu plugin: Thanks alot Simon. This is super. You have been extremely helpful... Hi,Simon. I am migrating the phonegap plugin to cordova 2.0. Could you tell me how to modify this line: ((DroidGap)this.ctx).getIntent() to cordova 2.0? Thanks in advance. @Wei Zhang That would be "cordova.getActivity().getIntent()" Hi guyz. pls i need to apply the "getting android IMEI" stuff on my work. But this is the issue: It's an html/css/js code, using cordova to compile it into an android app via eclipse. The Question: How do I put in the codes; where exaclty and if possible can i have verbatim code? @Yinka Adediji What about the instructions don't you understand? with cordova 2.0.0, how is "File fp = new File(this.cordova.getActivity().getFilesDir() + "/" + filename);" @supriya Sorry, I don't understand your question. Hi simon in all the android twitter OAUth applications they are using On resume method get the verifier for the Authorizing the Application . similarly for Phonegap if i use : cordova.getActivity().startActivity( new Intent(Intent.ACTION_VIEW, Uri.parse(requestToken .getAuthenticationURL()))); for starting the intent . i am not getting the intent URI as a result in OnResume method of the PluginResult. but i am getting a call to the Onresume method after the Application authentication . Thanks and regards Sangeeth Kumar V @sangeeth_LVS I believe you want to start your activity for result. You are just doing a startActivity. 
So I'd do: this.cordova.startActivityForResult((Plugin)this, new Intent(Intent.ACTION_VIEW, Uri.parse(requestToken .getAuthenticationURL()))); and implement the onActivityResult method in your Plugin. With Cordova 2.2 the class Plugin is deprecated, one should use CordovaPlugin from now on. Since I have problems getting this to run I would appreciate an update to the blog showing how it's done with Cordova 2.1/2.2. Thanks, Juergen @Jürgen Wahlmann The Plugin class will continue to work for quite some time. I do like your suggestion and I will make it a future blog post so stay tuned. Hi, Does it work with Phonegap 3.0+? @cureorcurse The same Android code will work but it would need to be updated to 3.0.0 style plugin.
http://simonmacdonald.blogspot.com/2012/04/migrating-your-phonegap-plugins-to.html
More Signal - Less Noise

Last night’s post was something of a preface, but let’s get started. [For those of you who crave the details, the code, the feel of bits between your fingers, watch for a series of videos on this subject to be released in the next couple weeks with source in VB and C#]

As I started to say last night, the key distinction in writing custom controls in Silverlight as opposed to other GUI environments is the strict division between logic and visuals embodied in the Parts and States Model. As an aside, this is where we always point out that there is nothing in Silverlight that requires or enforces that you implement your custom control using the Parts and States model, but it is the model recommended by Microsoft, and it is the model understood and supported by Expression Blend. The fact is, I can’t imagine creating a custom control that does not conform to the P&S model except to show that it can be done.

The key concept behind the P&S model is that your control will have a strict separation of logic from visuals, and the visuals will be managed by the Visual State Manager, which will need to know (a) what States the control might be in (states are defined in just a moment) and (b) what Parts of the control might be under VSM control.

States are familiar to those who’ve worked with Templates, and in truth, if you haven’t, you want to stop right here and go do that. I posted three videos on styles and templates that will get you started, as well as a few useful blog entries.

From a P&S model perspective, a control is either in a state or transitioning from one state to another. The Visual State Manager is responsible for running the storyboard associated with your control being in a given state (such as MouseOver). If you are templating an existing control, the states have been enumerated already; you can’t add new states unless you create a custom control.
More on that in a moment.

Controls are of course made up of many parts (little p), but from a P&S perspective they aren’t considered Parts unless they will be called by methods of the control itself. For example, the ScrollBar is a control available in the Silverlight toolbox. From the P&S view point it can be decomposed into four Parts. While there may be other elements in a Scrollbar, these are the Parts, because these elements are the only elements that other elements of the Scrollbar must address directly. Many controls that support the P&S model, such as Button, have no parts at all (!)

When you create a custom control in Silverlight you create a “contract” stating “this part is under the domain of the VSM” and the rest is considered logic that is on the “other side of the wall.”

Attributes are a mechanism to store metadata within a .NET program. You can see an example in this excerpt from a Ratings control, which can be “lit” or not depending on the user’s action. (For more on attributes see any good book on C# or VB.)

    = "Lit", GroupName = "RatingStates" )]
 8: [TemplateVisualState( Name = "Norm", GroupName = "RatingStates" )]
 9:
10: public class RatingControl : Control

This snippet shows six attributes being added to a new Custom Control. The first is the only “Part”, named “Core” (stolen directly from Karen Corby). The next three are the three “common states” this new control will support. Notice that they share the GroupName of “CommonStates”. Finally, on lines 7 and 8 are the two RatingStates of Lit and Norm. These few lines draw a powerful contract that the developer and designers can rely on, as can Expression Blend. They state clearly that the “Core” object (to be created in Xaml) will be under the management of internal methods as a Part, that the new control will have two state groups, and it enumerates the states within each group. Further, the class definition shows that our new control derives from the base class Control.
It is up to me now to implement the contract. The steps to getting here were:

Here’s where we are

 1: <UserControl x:Class="BookRater1.Page"
 2: xmlns=""
 3: xmlns:x=""
 4: xmlns:Controls="clr-namespace:ClassLibrary;assembly=ClassLibrary"
 5: xmlns:vsm="clr-namespace:System.Windows;assembly=System.Windows"
 6:
 7:
 8: <Grid x:
 9: <Grid.RowDefinitions>
10: <RowDefinition Height=".5*" />
11: <RowDefinition Height=".5*" />
12: </Grid.RowDefinitions>
13: <Grid.ColumnDefinitions>
14: <ColumnDefinition Width=".5*" />
15: <ColumnDefinition Width=".5*" />
16: </Grid.ColumnDefinitions>
17:
18: <Controls:RatingControl x:
19: <Controls:RatingControl x:
20: </Grid>
21: </UserControl>

A Few Things To Notice

Remember that this is a view of Page.xaml – the page that is using the custom control.

What is in Rating.cs and generic.xaml?

generic.xaml

 1: <ResourceDictionary
 2: xmlns= -- Many of these -->
 3: <Style TargetType="controls:RatingControl">
 4: <Setter Property="Template">
 5: <Setter.Value>
 6: <ControlTemplate TargetType="controls:RatingControl">
 7: <Grid x:
 8: <Grid.Resources>
 9: <Storyboard x:
10: <DoubleAnimation
11: Storyboard.TargetName="Core"
12: Storyboard.TargetProperty="(UIElement.Opacity)"
13:
14: </Storyboard>
15: <Storyboard x:
16: <!-- -->
17: </Storyboard>
18: <Storyboard x:
19: <DoubleAnimationUsingKeyFrames
20: BeginTime="00:00:00"
21: Duration="00:00:01"
22: Storyboard.TargetName="Core"
23: Storyboard.TargetProperty="(UIElement.RenderTransform).
24: (TransformGroup.Children)[3].(TranslateTransform.Y)">
25: <SplineDoubleKeyFrame KeyTime="00:00:00" Value="0"/>
26: <SplineDoubleKeyFrame KeyTime="00:00:00.25" Value="25"/>
27: <SplineDoubleKeyFrame KeyTime="00:00:00.5" Value="0"/>
28: <SplineDoubleKeyFrame KeyTime="00:00:00.75" Value="50"/>
29: <SplineDoubleKeyFrame KeyTime="00:00:01" Value="0"/>
30: </DoubleAnimationUsingKeyFrames>
31: </Storyboard>
32: <Storyboard x:
33: <!-- -->
34: </Storyboard>
35: </Grid.Resources>
36: <vsm:VisualStateManager.VisualStateGroups>
37: <vsm:VisualStateGroup x:
38: <vsm:VisualState x:
39: <vsm:VisualState x:Name="MouseOver"
40:
41: <vsm:VisualState x:Name="Pressed"
42:
43: </vsm:VisualStateGroup>
44: <vsm:VisualStateGroup x:
45: <vsm:VisualState x:Name="Norm"
46:
47: <vsm:VisualState x:Name="Lit"
48:
49: </vsm:VisualStateGroup>
50: </vsm:VisualStateManager.VisualStateGroups>
51: <Ellipse
52: x:Name="Core"
53: Width="200"
54:
55: <Ellipse.RenderTransform>
56: <TransformGroup>
57: <ScaleTransform/>
58: <SkewTransform/>
59: <RotateTransform/>
60: <TranslateTransform/>
61: </TransformGroup>
62: </Ellipse.RenderTransform>
63: <Ellipse.Fill>
64: <RadialGradientBrush>
65: <GradientStop Color="#FFFFD954" Offset="0.004"/>
66: <GradientStop Color="#FFE9F515" Offset="1"/>
67: <GradientStop Color="#FFF1F712" Offset="0.911"/>
68: </RadialGradientBrush>
69: </Ellipse.Fill>
70: </Ellipse>
71: </Grid>
72: </ControlTemplate>
73: </Setter.Value>
74: </Setter>
75: </Style>
76: </ResourceDictionary>

This file has been cut down, but you can see that it looks very much like a standard template file. We begin the substantive work on line 8, creating a Resources section. In here we create a Storyboard for each of the behaviors we might want in a given state. That is, if we have decided that the behavior when we hover over the custom control will be for it to bounce up and down hyperactively, we would create the storyboard for that here in the resources area (as we do on lines 18-31).
After the Resources (line 35) we define the Visual State Groups (lines 36-50) and, within each of the groups, the visual states. The job here is to assign the appropriate storyboard to each of the states. Finally, on line 51 we create our custom control’s default appearance, including the named Part, “Core”, which is the ellipse defined on lines 51 to 70. In this simplified example that happens to be the only object in the control, but more complex controls may have many unnamed elements as well.

Rating.cs

The code file for our class defines both the logic and the enabling (private) code for the translation of CLR events to states that the VSM will recognize. It is also here that we apply either the default look (generic.xaml) or the templated look that was requested when the control was instantiated in page.xaml. This is done, essentially, by firing the base class’s OnApplyTemplate event. We extract the named part from the Xaml and hold onto it in a member variable, as we’ll use it quite a bit, and then we tell the control to GoToState, a private helper method that checks other member variables and determines how to call the Visual State Manager’s static GoToState method.

 1: public class RatingControl : Control
 2: {
 3: private FrameworkElement corePart;
 4: private bool isMouseOver;
 5: private bool isPressed;
 6: public event RoutedEventHandler Click;
 7: public RatingControl()
 8: { DefaultStyleKey = typeof(RatingControl); }
10: public override void OnApplyTemplate()
11: {
12: base.OnApplyTemplate();
13: CorePart = (FrameworkElement)GetTemplateChild("Core");
14: GoToState(false);
15: }
16:
17: private void GoToState(bool useTransitions)
18: {
19: if (isPressed)
20: { VisualStateManager.GoToState(this, "Pressed", useTransitions); }
21: else if (isMouseOver)
22: { VisualStateManager.GoToState(this, "MouseOver", useTransitions); }
23: //...
24: }
25: //...
26: }

The two major missing pieces are converting the CLR events to the VSM events and the sneaky fact that the setter for the private member CorePart doesn’t just set the CorePart but also unregisters its old event handlers and registers its new event handlers for MouseEnter, MouseLeave, MouseLeftButtonDown and MouseLeftButtonUp. This latter step lets us accomplish the former step with event handlers like this:

 1: void corePart_MouseEnter(object sender, MouseEventArgs e)
 2: {
 3: isMouseOver = true;
 4: GoToState(true);
 5: }
 6:
 7: void corePart_MouseLeave(object sender, MouseEventArgs e)
 8: {
 9: isMouseOver = false;
10: GoToState(true);
11: }

Even walking through it fairly carefully it can get very confusing; there are some pretty complex attachments going on. Thus, rather than add insult to injury, I’ll stop here and recap, and then wait until the first video, where you can see the pieces working together, before going any further. In a nutshell, you have 5 files working together when all is done.

More here: Creating Custom Controls – A Common Starter Application
As for your question, let me put something together that will illustrate the question you are asking as it is an interesting one. Thanks. Awesome, I can't wait. Great article Jesse. If the xaml for the custom control needs to be in generic.xaml, how does one define multiple custom controls in a single class library? Thanks That's how : <ResourceDictionary xmlns= -- Many of these> <Style TargetType="controls:Control1"> </Style> <Style TargetType="controls:Control2"> <Style TargetType="controls:Control3"> </ResourceDisctionary> Pingback from Santiago Palladino » Controls Contract One error that I thought I fixed, but didn't. I wrote at the top of the article that the Parts are those elements under the control of the VSM. <wrong, but thanks for playing!>. What I should have said is that Parts are those elements that are called by methods of the control itself -- the cannonical example is when you click on the repeat button of a scroll bar, the thumb has to move; the scroll bar is responsible for all of this, and so both the repeat button and the thumb are Parts. Karen Corby was kind enough to point out (at my request) that I fouled the wording about OnApplyTemplate implying that the control calls this, but of course the platform does. I do owe a demo program on how transition timing works. That may take a week or two as things pile up here, but I'll see what I can do asap. could i have the source? i can't follow your instructions. thanks In  a previous post I began talking about Custom Controls, and I will continue that discussion over Where does the RatingStates and CommonStates groups come into the picture? I don't see you mentioning them anywhere besides declaring them. >> Where does the RatingStates and CommonStates groups come into the picture? I don't see you mentioning them anywhere besides declaring them. << Ahh, excellent... 
yes, that is coming, but we have so much to cover first :-) Chris Cavenagh has his YouCube interactive, Jesse Liberty on Custom Controls, Pete Brown with SL TechFest It will be helpful as we explore custom controls to have a common starting project. You may  remember Pingback from Amin Mahpour » Blog Archive » Silverlight post collections This is the first in a series of explorations of breaking changes in the Release Candidate for Silverlight For the past six months or so I've been experimenting with and then "preaching" the idea of     The difference between understanding a concept??? - Jesse Liberty - Silverlight Geek   Note  Starting now (Jan 1 2009) when I add an article that updates an older blog post, I Hi Jesse, Great Blog and your videos rock!!! Have the same question that Mike asked "How does one define multiple custom controls in a single class library?" Is it possible? All my searched couldn't find me any answers Can you point to any blog or video that explains the above. Thanks a lot. where does the vsm come from. No reference to any vsm object mention above. WORTHLESS POST WITH ALOT ERROR I WASTED TWO GDM HOUR TRYING TO IMPLEMENT THIS WORTHLESS CRAP ARTICLE AND THERE ARE SO MANY ERRORS WHO EVER POSTED THIS SHTFACE ARTICLE PLEASE GOT TO HELL hey dipsht face author vsm is referenced by xmlns:vsm="clr-namespace:System.Windows;assembly=System.Windows" HEY FUCK FACE AUTHOR CorePart not defined in public override void OnApplyTemplate() { base.OnApplyTemplate(); CorePart = (FrameworkElement)GetTemplateChild("Core"); GoToState(false); } WHY IN SAM HELL DID YOU NAME THE CLASS RatingControl AND NOT Rating. OH YOUR A FUCK FACE THAT WHY DMB ASS SOB The Silverlight Toolkit is innovative in many ways, not least of which is that controls are released    
http://silverlight.net/blogs/jesseliberty/archive/2008/09/12/digging-into-custom-controls.aspx
I'm cooking up some spatial examples and have decided to give Dapper a go; although EF has spatial support, I'm loving having control over my SQL again (thanks Sam & Marc). However, I need to be able to have POCOs that support the DbGeography class. For example:

public class BuriedTreasure
{
    public int PirateId { get; set; }
    public DbGeography MarksTheSpot { get; set; }
}

My Google foo has been letting me down, and the closest match I can find is this question, though it only caters for adding spatial support as a parameter (so it's 50% there). Now, as far as I can see, I'm limited to the following options, neither of which seems viable to me.

Alternatives?

For anyone interested, essentially I went for option 2 in the question I posted above. I have my spatial data mapped to decimals. The stored procedure does some additional checking, so it was easy to construct the point therein, i.e. the following snippet:

Yarrrr = geography::Point(@Latitude, @Longitude, 4326)

Essentially, the DbGeography class is immutable; goes to show... RTFM :)

No changes to dapper required at all!
https://dapper-tutorial.net/knowledge-base/18088169/dapper-spatial-geography-type
New submission from David Watson <baikie at users.sourceforge.net>:

In 3.x, the socket module assumes that AF_UNIX addresses use UTF-8 encoding - this means, for example, that accept() will raise UnicodeDecodeError if the peer socket path is not valid UTF-8, which could crash an unwary server.

Python 3.1.2 (r312:79147, Mar 23 2010, 19:02:21)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from socket import *
>>> s = socket(AF_UNIX, SOCK_STREAM)
>>> s.bind(b"\xff")
>>> s.getsockname()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: unexpected code byte

I'm attaching a patch to handle socket paths according to PEP 383. Normally this would use PyUnicode_FSConverter, but there are a couple of ways in which the address handling currently differs from normal filename handling. One is that embedded null bytes are passed through to the system instead of being rejected, which is needed for the Linux abstract namespace. These abstract addresses are returned as bytes objects, but they can currently be specified as strings with embedded null characters as well. The patch preserves this behaviour.

The current code also accepts read-only buffer objects (it uses the "s#" format), so in order to accept these as well as bytearray filenames (which the posix module accepts), the patch simply accepts any single-segment buffer, read-only or not.

This patch applies on top of the patches I submitted for issue #8372 (rather than knowingly running past the end of sun_path).
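PEP 383's surrogateescape mechanism, which the patch applies to socket paths, can be seen in CPython's os.fsdecode()/os.fsencode() helpers: arbitrary bytes round-trip through str without loss. A quick illustration (not part of the patch itself):

```python
import os

# A socket path that is not valid UTF-8, like the b"\xff" in the report.
raw = b"\xff"

# With surrogateescape, undecodable bytes become lone surrogate
# code points instead of raising UnicodeDecodeError...
decoded = os.fsdecode(raw)

# ...and encoding the string maps the surrogates back to the exact
# original bytes, so no information is lost either way.
assert os.fsencode(decoded) == raw
```

With a UTF-8 filesystem encoding, `os.fsdecode(b"\xff")` yields the lone surrogate `'\udcff'`; handling AF_UNIX paths this way is what lets accept() return a usable value instead of crashing the server.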
---------- components: Extension Modules files: af_unix-pep383.diff keywords: patch messages: 102865 nosy: baikie severity: normal status: open title: socket: AF_UNIX socket paths not handled according to PEP 383 type: behavior versions: Python 3.1, Python 3.2, Python 3.3 Added file: _______________________________________ Python tracker <report at bugs.python.org> <> _______________________________________
https://mail.python.org/pipermail/python-bugs-list/2010-April/096177.html
#include <hallo.h> Till Tippel wrote on Sat Jun 01, 2002 at 09:22:49 I think that, in the current situation, it would be best to create a small repository with modified boot-floppies, needed to support particular software/hardware. Someone may cry "just as SuSE does". Well, it's better than forcing every user to patch and replace the kernel, causing lots of other trouble. I suggest:
- SCSI optimised (i2o SCSI drivers, more SCSI drivers built-in)
- IDE optimised (patched to support >>160GB disks, HPT-372 controllers, etc.)
- XFS filesystem (this does actually work, p.d.o/~blade/XFS-Install)
- EVMS enabled (EVMS with LVM/md in the kernel, evms curses gui on the rescue disk)
https://lists.debian.org/debian-devel/2002/06/msg00008.html
In this tutorial, we will learn the concept of data-driven testing and how to design and build a TestNG DataProvider class. The technologies used in this framework are Java and TestNG.

DataProvider Class in TestNG

The DataProvider feature is one of the important features provided by TestNG. It is the second way of passing parameters to test methods. It allows users to write data-driven tests in which the same test method runs multiple times with different sets of test data. The parameters can be complex objects, objects read from a property file or a database, etc.

What is DataProvider?

A DataProvider is a method in a class that returns a two-dimensional array of objects (Object[][]) to the test method. If the array has m rows and n columns, the test method will be invoked m times, once per row, receiving n parameters each time. This method is annotated with @DataProvider. We can have data providers with different names, defined either on the test class itself or on other classes.

@DataProvider Annotation in TestNG

TestNG provides an annotation called @DataProvider to use the DataProvider feature in our tests. This annotation is declared on a method in our test class, which can then be referenced from test methods. The name of a data provider is passed to the dataProvider attribute of the @Test annotation. It has the following general syntax:

Syntax: @DataProvider(name = "myTestData")

This annotation has only one string attribute, called name. If the name is not provided, the data provider's name will automatically be set to the method's name. Here "myTestData" is the name of the method which is passing the test data; it can also be any other name. Look at the below screenshot for a complete DataProvider annotation code.
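The data-driven idea itself is language-neutral. As a plain-Python sketch of what a runner does with a provider's rows (illustrative names only, not the TestNG API):

```python
# A two-dimensional "data provider": each row is one invocation's parameters.
test_data = [
    ("John", "23"),
    ("Sanjana", "40"),
    ("Deep", "01"),
]

results = []

def set_data(name, roll_no):
    # Stand-in for the @Test method; records what it was called with.
    results.append((name, roll_no))

# The runner invokes the test method once per row, as TestNG does
# with a method annotated @Test(dataProvider = ...).
for row in test_data:
    set_data(*row)

assert results == test_data
```

Three rows means three invocations, which is exactly the m-times behavior described above.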
@Test Annotation

TestNG uses an attribute called dataProvider in the @Test annotation to connect test methods to a data provider. It has the following general form:

Syntax: @Test(dataProvider = "myTestData", dataProviderClass = DataProvidingClass.class)

where DataProvidingClass is the name of the class in which the DataProvider method has been declared. (The dataProviderClass attribute is needed only when the provider lives in a different class.)

TestNG DataProvider Example with Multiple Parameters

Let's see an example program and learn how to use the DataProvider feature in our tests. Follow all the steps below.
1. Open Eclipse and create a Java project named DataProviderProject.
2. Right-click on the java project, go to New > Package, and create a package named "testNGDataProvider".
3. Now create a new java class with the name "DataProviderTest1". Look at the program source code below.

Program source code 1:

package testNGDataProvider;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataProviderTest1 {
    // Declare the Test annotation with the attribute dataProvider and value "getData".
    @Test(dataProvider = "getData")
    public void setData(String name, String rollNo) {
        System.out.println("Name: " + name);
        System.out.println("RollNo: " + rollNo);
    }

    // Declare the DataProvider annotation with the attribute name and value "getData".
    @DataProvider(name = "getData")
    // Declare a method whose return type is a two-dimensional array of objects.
    public Object[][] dataProviderMethod() {
        // Create a 3x2 array object.
        // 3 represents the number of times your test has to be repeated.
        // 2 represents the number of parameters in the test data. Here, we are providing two parameters.
        Object[][] data = new Object[3][2];
        // 1st row.
        data[0][0] = "John";
        data[0][1] = "23";
        // 2nd row.
        data[1][0] = "Sanjana";
        data[1][1] = "40";
        // 3rd row.
        data[2][0] = "Deep";
        data[2][1] = "01";
        return data;
    }
}

Explanation of Source code:
1. The test class contains a setData() method which takes two arguments, name and rollNo, as input and prints them on the console when executed.
2.
A DataProvider method called dataProviderMethod has been declared in the same class by using the DataProvider annotation of TestNG with the attribute "name".
3. The DataProvider returns a two-dimensional (2D) array of objects with three sets of data: data one, data two, and data three.
4. "getData" is the value of the name attribute; it is matched against the dataProvider attribute, which has the same value getData.
5. The DataProvider supplies the parameter values to the setData() method, which is annotated with @Test and the attribute dataProvider = "getData".
6. Now run the test class as a TestNG Suite. You will see the following test result in the console.

Output:
Name: John
RollNo: 23
Name: Sanjana
RollNo: 40
Name: Deep
RollNo: 01
Default test
Tests run: 3, Failures: 0, Skips: 0

As you can see from the above test result, the setData() method has been executed three times. The number of executions of setData() depends on the number of sets of data passed by dataProviderMethod; since the DataProvider returned three sets of data, setData() ran three times.

Key points:
1. It is mandatory for a DataProvider method to return the data in the form of a two-dimensional array of Object (Object[][]). The first dimension represents the sets of data, whereas the second dimension contains the parameter values.
2. The "name" attribute of DataProvider is optional. If you don't declare it, the name of the method will be used. Let's see another example program related to this point.

Program source code 2:

package testNGDataProvider;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataproviderTest2 {
    @Test(dataProvider = "getData")
    public void setData(String username, String password) {
        System.out.println("Username: " + username);
        System.out.println("Password: " + password);
    }
    // Here, we are not declaring the attribute "name" for the DataProvider.
    // So, it will use the name of the method or function.
    @DataProvider
    public Object[][] getData() {
        // You can also return data this way, using an anonymous array initializer.
        return new Object[][] {
            {"DEEPAK", "1234"},
            {"AMIT", "12345"},
            {"RASHMI", "123456"}
        };
    }
}

Output:
Username: DEEPAK
Password: 1234
Username: AMIT
Password: 12345
Username: RASHMI
Password: 123456

Let's automate a scenario using the DataProvider feature in a test.

Scenario to Automate: In this scenario, we will take a very simple example of the login page of the pixabay website, where a username and password are required to clear authentication. First, create two accounts with two different email ids and passwords on the pixabay website. You can also take another website for your convenience. Now automate the following scenario:
1. Launch the Firefox web browser.
2. Open the pixabay login page.
3. Log in with two different sets of usernames and passwords using the DataProvider feature.
4. Log out of the webpage and close the browser.

Now follow all the steps in the source code below.

Program source code 3:

package parameterbyDataProv;

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class ParameterDataprovider {
    // Create a WebDriver reference.
    WebDriver driver;

    @BeforeTest
    // It will be executed once, before any of the test methods.
    public void setupDriver() {
        // Create an object of the FirefoxDriver class.
        driver = new FirefoxDriver();
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
        driver.manage().window().maximize();
        String URL = "";
        driver.get(URL);
    }

    @Test(dataProvider = "myData")
    public void login(String Username, String Password) {
        WebElement userN = driver.findElement(By.name("username"));
        userN.sendKeys(Username);
        WebElement passW = driver.findElement(By.name("password"));
        passW.sendKeys(Password);
        WebElement login = driver.findElement(By.xpath("*//input[@value='Log in']"));
        login.click();
        WebElement profileImage = driver.findElement(By.xpath("*//img[@class='profile_image']"));
        profileImage.click();
        WebElement logout = driver.findElement(By.linkText("Log out"));
        logout.click();
        driver.findElement(By.linkText("Log in")).click();
    }

    @DataProvider(name = "myData")
    public Object[][] loginData() {
        Object[][] data = new Object[2][2];
        data[0][0] = "1st username";
        data[0][1] = "1st password";
        data[1][0] = "2nd username";
        data[1][1] = "2nd password";
        return data;
    }

    @AfterTest
    public void close() {
        driver.close();
    }
}

Output:
PASSED: login
PASSED: login
===============================================
Default test
Tests run: 2, Failures: 0, Skips: 0
===============================================

TestNG Result Window: Once test execution is finished, the results will look like this in the TestNG Result window. Since we are providing two sets of test data, the above test is executed two times.

How to Call DataProvider from another Class?

Scenario to Automate:
1. Launch the Firefox browser.
2. Open the Yandex search page ("").
3. Send data into the search text box.
4. Close the browser.

Now follow the source code below to automate the above scenario.
Program source code 4:

package dataProviderThroughClass;

import java.util.concurrent.TimeUnit;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class DataProviderTest {
    WebDriver driver;

    @BeforeTest
    public void webDriversetUp() {
        driver = new FirefoxDriver();
        driver.manage().window().maximize();
        String URL = "";
        driver.get(URL);
        driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
    }

    @Test(dataProvider = "getData", dataProviderClass = DataProviderClass.class)
    public void loginMethod(String data) {
        driver.findElement(By.xpath("*//input[@id='text']")).sendKeys(data);
    }

    @AfterTest
    public void close() {
        driver.close();
    }
}

Now, create the DataProviderClass.

package dataProviderThroughClass;

import org.testng.annotations.DataProvider;

public class DataProviderClass {
    @DataProvider(name = "getData")
    public static Object[][] dataProviderMethod() {
        return new Object[][] {
            {"TestNG"},
            {" DataProvider"},
            {" multiple"},
            {" parameters"}
        };
    }
}

Now execute DataProviderTest and see the test result below on the console.

Output:
PASSED: loginMethod("TestNG")
PASSED: loginMethod(" DataProvider")
PASSED: loginMethod(" multiple")
PASSED: loginMethod(" parameters")
===============================================
Default test
Tests run: 4, Failures: 0, Skips: 0
===============================================

Since the data provider supplies four sets of data, the loginMethod() test is executed four times.

Advantages of TestNG Data Provider

The advantages of the data provider in TestNG are as follows:
1. The TestNG data provider helps to pass parameter values directly to the test method.
2. It allows users to write data-driven tests in which the same test method runs multiple times with different sets of test data.

Final words

Hope that this tutorial has covered almost all the important concepts related to the DataProvider annotation in TestNG, with example scenarios and program source code. I hope that you have understood the data provider in TestNG and enjoyed this topic. Thanks for reading!!!

Next ⇒ How to run multiple tests using the testng.xml file
https://www.scientecheasy.com/2019/05/testng-dataprovider.html/
I have to make a DOS (real DOS, not a Win32 command line app) program that parses a file and sets some environment variables. I had finished the full program when I got stuck on the apparently easy task of setting the environment variables. I know there exists an int putenv(const char*); function that should do the job, but it actually does not work. Here is a little example:

Code:
#include <iostream>
#include <cstdlib>

int main()
{
    using namespace std;
    int stat = putenv("HELLO=C:\\TEMP");
    if (stat == -1)
    {
        cout << "failed to define environment variable" << endl;
    }
}

Try to compile this code and execute it: it won't set anything, even though stat == 0 indicates success! I also tried, uselessly, system("set HELLO=TEST");. Does anyone have an idea? Thanks.
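The root cause is not specific to DOS or C++: a process can change only its own environment (and what its children inherit), never its parent's. A Python sketch of the same boundary:

```python
import os
import subprocess
import sys

# Changing the environment affects this process and its children...
os.environ["HELLO"] = "C:\\TEMP"
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['HELLO'])"],
    capture_output=True,
    text=True,
)
assert child.stdout.strip() == "C:\\TEMP"
# ...but when this process exits, the invoking shell's environment is
# untouched: that is why putenv() inside the program cannot set a
# variable in the parent command interpreter's environment.
```

The same holds for `system("set HELLO=TEST")`: it spawns a child interpreter, sets the variable there, and the child exits, taking the change with it.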
http://cboard.cprogramming.com/cplusplus-programming/68055-setting-environment-variables.html
F# is both a parallel and a reactive language. By this we mean that running F# programs can have both multiple active evaluations (e.g. .NET threads actively computing F# results), and multiple pending reactions (e.g. callbacks and agents waiting to react to events and messages). One simple way to write parallel and reactive programs is with F# async expressions. In this and future posts, I will cover some of the basic ways in which you can use F# async programming – roughly speaking, these are design patterns enabled by F# async programming. I assume you already know the basics of using async, e.g. see this introductory guide. We’ll start with two easy design patterns: Parallel CPU Asyncs and Parallel I/O Asyncs. - Part 3 describes lightweight, reactive, isolated agents in F# . Pattern #1: Parallel CPU Asyncs Let’s take a look at an example of our first pattern: Parallel CPU Asyncs, that is, running a set of CPU-bound computations in parallel. The code below computes the Fibonacci function, and schedules the computations in parallel: let rec fib x = if x <= 2 then 1 else fib(x-1) + fib(x-2) let fibs = Async.Parallel [ for i in 0..40 -> async { return fib(i) } ] |> Async.RunSynchronously Producing: val fibs : int array = [|] The above code sample shows the elements of the Parallel CPU Asyncs pattern: (a) “async { … }” is used to specify a number of CPU tasks (b) These are composed in parallel using the fork-join combinator Async.Parallel In this case the composition is executed using Async.RunSynchronously, which starts an instance of the async and synchronously waits for the overall result. You can use this pattern for many routine CPU parallelization jobs (e.g. dividing and parallelizing a matrix multiply), and for batch processing jobs. Pattern #2: Parallel I/O Asyncs So far we have only seen parallel CPU-bound programming with F#. One key thing about F# async programming is that you can use it for both CPU and I/O computations. 
This leads to our second pattern: Parallel I/O Asyncs, i.e. doing I/O operations in parallel (also known as overlapped I/O). For example, the following requests multiple web pages in parallel and reacts to the responses for each request, and returns the collected results. open System open System.Net open Microsoft.FSharp.Control.WebExtensions let http url = async { let req = WebRequest.Create(Uri url) use! resp = req.AsyncGetResponse() use stream = resp.GetResponseStream() use reader = new StreamReader(stream) let contents = reader.ReadToEnd() return contents } let sites = [“”; “”; “”; “”] let htmlOfSites = Async.Parallel [for site in sites -> http site ] |> Async.RunSynchronously The above code sample shows the essence of the Parallel I/O Asyncs pattern: (a) “async { … }” is used to write tasks which include some asynchronous I/O. (b) These are composed in parallel using the fork-join combinator Async.Parallel In this case, the composition is executed using Async.RunSynchronously, which synchronously waits for the overall result Using let! (or its resource-disposing equivalent use!) is one basic way of composing asyncs. A line such as let! resp = req.AsyncGetResponse() causes a “reaction” to occur when a response to the HTTP GET occurs. That is, the rest of the async { … } runs when the AsyncGetResponse operation completes. However, no .NET or operating system thread is blocked while waiting for this reaction: only active CPU computations use an underlying .NET or O/S thread. In contrast, pending reactions (for example, callbacks, event handlers and agents) are relatively cheap, often as cheap as a single registered object. As a result you can have thousands or even millions of pending reactions. For example, a typical GUI application has many registered event handlers, and a typical web crawler has a registered handler for each outstanding web request. 
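The same fork-join shape exists outside F#; for example, a Python sketch with concurrent.futures, using sleeps to stand in for network latency (the site names and fetch function are hypothetical, not the F# library):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(site: str) -> str:
    # Stand-in for an HTTP request: sleep instead of real network I/O.
    time.sleep(0.05)
    return f"<html>{site}</html>"

sites = ["site-a", "site-b", "site-c", "site-d"]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:        # fork ...
    pages = list(pool.map(fetch, sites))  # ... join
elapsed = time.perf_counter() - start

# All four "requests" overlap, so total time is roughly one latency, not four.
assert pages == [f"<html>{s}</html>" for s in sites]
assert elapsed < 0.2
```

As in the F# version, the win comes from overlapping the waits, which is why I/O parallelism scales so much further than CPU parallelism.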
In the above, “use!” replaces “let!” and indicates that the resource associated with the web request should be disposed at the end of the lexical scope of the variable. One of the nice things about I/O parallelization is scaling. With multi-core CPU-bound programming you often see 2x, 4x or 8x speedups if you work hard enough on a many-core machine. With I/O parallel programming you can perform hundreds or thousands of operations in parallel (though actual parallelization depends on your operating system and network connections), giving speedups of 10x, 100x, 1000x or more, even on a single-core machine. For example, see the use of F# asyncs in this nice sample, ultimately called from an Iron Python application. Many modern applications are I/O bound so it’s important to be able to recognize and apply this design pattern in practice. Starting on the GUI Thread, finishing on the GUI thread There is an important variation on both of these design patterns. This is where Async.RunSynchronously is replaced by Async.StartWithContinuations. Here the parallel composition is started and you specify three functions to run when the async completes with success, failure or cancellation. Whenever you face the problem “I need to get the result of an async but I really don’t want to use RunSynchronously”, then you should consider either: (a) start the async as part of a larger async by using let! (or use!), or (b) start the async with Async.StartWithContinuations Async.StartWithContinuations is very useful when starting asyncs on the GUI thread, since you never want to block the GUI thread, instead you want to schedule some GUI updates to occur when the async completes. For example, this is used in the BingTranslator examples in the F# JAOO Tutorial code. 
A full version of this sample is shown at the end of this blog post, but the important thing here is to note what happens when the "Translate" button is pressed:

button.Click.Add(fun args ->
    let text = textBox.Text
    translated.Text <- "Translating…"
    let task = async {
        let! languages = httpLines languageUri
        let! fromLang = detectLanguage text
        let! results = Async.Parallel [for lang in languages -> translateText (text, fromLang, lang)]
        return (fromLang)))

In the highlighted parts, the async is specified, and this includes a use of Async.Parallel to translate the input text into multiple languages in parallel. The composite async is started with Async.StartWithContinuations. This unblocks as soon as the async hits its first I/O operation, and specifies three functions to run when the async completes with success, failure or cancellation. Here is a screen shot of the operation after the task completes (no guarantees given about the accuracy of the translation…)

Some notes on how Async.Parallel works:
- When run, asyncs composed with Async.Parallel are initially started through a queue of pending computations. Ultimately this uses QueueUserWorkItem, like most async processing libraries. It is possible to use a separate queue, something we'll discuss in later posts.
- There is nothing particularly magical about Async.Parallel: you can define your own async combinators that coordinate asyncs in different ways by using other primitives in the Microsoft.FSharp.Control.Async library such as Async.StartChild.
We'll return to this topic in a later post.

More Examples

Example uses of these patterns in the F# JAOO Tutorial code are:
- BingTranslator.fsx and BingTranslatorShort.fsx: calling a REST API using F#. This is similar to calling any other web-based HTTP service. A version of this sample is given below.
- AsyncImages.fsx: parallel disk I/O and image processing
- PeriodicTable.fsx: calling a web service, fetching atomic weights in parallel

Limitations of the Patterns

The two parallel patterns shown here have some limitations. Notably, an async generated by Async.Parallel is not, when run, "chatty": for example, it doesn't report progress or partial results. To handle that we need to build a more chatty object that raises events as partial operations complete. We'll be looking at that design pattern in later posts. Also, Async.Parallel handles a fixed number of jobs. In later posts we'll look at many examples where jobs get generated as work progresses. Another way to look at that is that an async generated by Async.Parallel does not immediately accept incoming messages, i.e. it is not an agent whose progress can be directed, apart from cancellation. Asyncs generated by Async.Parallel do support cancellation. Cancellation is not effective until all the sub-tasks have completed or been effectively cancelled. This is normally what you want.

Conclusion

The Parallel CPU Asyncs and Parallel I/O Asyncs patterns are probably the two simplest design patterns using F# async programming. As often with simple things, they are important and powerful. Note that the only difference between the patterns is that I/O Parallel uses asyncs which include (and are often dominated by) I/O requests, plus some CPU processing to create request objects and to do post-processing.
In future blog posts we'll be looking at additional design topics for parallel and reactive programming with F# async, including:
- starting asyncs from the GUI thread
- defining lightweight async agents
- defining background worker components using async
- authoring .NET tasks using async
- authoring the .NET APM patterns using async
- cancelling asyncs

BingTranslator Code Sample

Here's the sample code for the BingTranslator example. You'll need a Live API 1.1 AppID to run it. (NOTE: the samples would need to be adjusted for the Bing API 2.0, notably the language detection API is not present in 2.0; however, the code should still act as a good guide.)

open System
open System.Net
open System.IO
open System.Drawing
open System.Windows.Forms
open System.Text

/// A standard helper to read all the lines of a HTTP request. The actual read of the lines is
/// synchronous once the HTTP response has been received.
let httpLines (uri:string) =
    async {
        let request = WebRequest.Create uri
        use! response = request.AsyncGetResponse()
        use stream = response.GetResponseStream()
        use reader = new StreamReader(stream)
        let lines = [ while not reader.EndOfStream do yield reader.ReadLine() ]
        return lines }

type System.Net.WebRequest with
    /// An extension member to write content into an WebRequest.
    /// The write of the content is synchronous.
    member req.WriteContent (content:string) =
        let bytes = Encoding.UTF8.GetBytes content
        req.ContentLength <- int64 bytes.Length
        use stream = req.GetRequestStream()
        stream.Write(bytes,0,bytes.Length)

    /// An extension member to read the content from a response to a WebRequest.
    /// The read of the content is synchronous once the response has been received.
    member req.AsyncReadResponse () = async { use!
response = req.AsyncGetResponse() use responseStream = response.GetResponseStream() use reader = new StreamReader(responseStream) return reader.ReadToEnd() } #load @”C:\fsharp\staging\docs\presentations\2009-10-04-jaoo-tutorial\BingAppId.fs” //let myAppId = “please set your Bing AppId here” /// The URIs for the REST service we are using let detectUri = “” + myAppId let translateUri = “” + myAppId + “&” let languageUri = “” + myAppId let languageNameUri = “” + myAppId /// Create the user interface elements let form = new Form (Visible=true, TopMost=true, Height=500, Width=600) let textBox = new TextBox (Width=450, Text=“Enter some text”, Font=new Font(“Consolas”, 14.0F)) let button = new Button (Text=“Translate”, Left = 460) let translated = new TextBox (Width = 590, Height = 400, Top = 50, ScrollBars = ScrollBars.Both, Multiline = true, Font=new Font(“Consolas”, 14.0F)) form.Controls.Add textBox form.Controls.Add button form.Controls.Add translated /// An async method to call the language detection API let detectLanguage text = async { let request = WebRequest.Create (detectUri, Method=“Post”, ContentType=“text/plain”) do request.WriteContent text return! request.AsyncReadResponse() } /// An async method to call the text translation API let translateText (text, fromLang, toLang) = async { let uri = sprintf “%sfrom=%s&to=%s” translateUri fromLang toLang let request = WebRequest.Create (uri, Method=“Post”, ContentType=“text/plain”) request.WriteContent text let! translatedText = request.AsyncReadResponse() return (toLang, translatedText) } button.Click.Add(fun args -> let text = textBox.Text translated.Text <- “Translating…” let task = async { /// Get the supported languages let! languages = httpLines languageUri /// Detect the language of the input text. This could be done in parallel with the previous step. let! fromLang = detectLanguage text /// Translate into each language, in parallel let! 
results = Async.Parallel [for lang in languages -> translateText (text, fromLang, lang)] /// Return the results return (fromLang,results) } /// Start the task. When it completes, show the))) It’s not actually all that easy to translate this to the Bing API, since that API seems to only have the Translate function, and not the Detect or GetLanguages functions. Unless I’m missing something in the API docs… About the: Pattern #1: Parallel CPU Asyncs In this sample every Fibonacci number is calculated separately? I mean there is no reuse if we have already calculated the 6th Fibonacci number, calculating the 7th starts from the beginning. I suppose the idea here is to show that calculations can be done in parallel no matter what the function is. there is some similar post in c# ? Thanks
https://blogs.msdn.microsoft.com/dsyme/2010/01/09/async-and-parallel-design-patterns-in-f-parallelizing-cpu-and-io-computations/
OldZU5

Synopsis
[Miscellaneous]
OldZU5=n

n is either 1 (true) or 0 (false). The default value is 0.

Description
When OldZU5 is enabled (n = 1), switching to the current namespace using the ZN command clears the global vector cache. When this parameter is not enabled, switching to the current namespace has no effect.

Changing This Parameter
On the Compatibility page of the Management Portal (System Administration > Configuration > Additional Settings > Compatibility), in the OldZU5 row, click Edit. Select OldZU5 to enable this setting.

Instead of using the Management Portal, you can change OldZU5 with the OldZU5() method of the %SYSTEM.Process class. See the class reference for details.
https://docs.intersystems.com/healthconnectlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_OLDZU5
Java I/O Buffered Streams

In this section we will discuss the I/O buffered streams. In Java programming, when we perform an input or output operation, the data passes through a memory area called a buffer; in a read operation, input streams read data from that buffer.

Classes and Interfaces of the I/O Streams

The java.io package also defines the exceptions its streams may throw, for example IOException (when an I/O operation fails), InterruptedIOException (when an I/O operation is interrupted), and NotActiveException. To read bytes of data, InputStream provides int read() throws IOException and read(byte[]).

Java I/O Data Streams

Data streams are filtered streams that perform binary I/O of primitive data types.

What is Java I/O?

The Java Input/Output (I/O) API is part of the java.io package. The java.io package contains a relatively large number of classes which are used for reading from and writing to byte streams. Input and output can be done in two ways: using standard streams, and using Console. Various operating systems provide the feature of standard streams.

A forum thread asked for a java program using BufferedReader and BufferedWriter. Reply: Hi Friend, Try the following code:

import java.io.*; class BufferedReaderAndBufferedWriter{ public static void main(String[] args) throws
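The buffering idea is the same in other runtimes; for instance, Python's io module layers a buffer over a raw stream much as BufferedWriter/BufferedReader wrap a Writer/Reader in Java. A small sketch:

```python
import io
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "out.txt")
    with open(path, "wb") as f:   # binary open() yields an io.BufferedWriter
        assert isinstance(f, io.BufferedWriter)
        f.write(b"hello, buffered world\n")   # lands in the buffer first
    with open(path, "rb") as f:   # and an io.BufferedReader on the way back
        assert isinstance(f, io.BufferedReader)
        contents = f.read()

assert contents == b"hello, buffered world\n"
```

In both languages, small writes accumulate in the buffer and hit the underlying stream in larger blocks, which is the whole point of buffered I/O.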
http://roseindia.net/discussion/22891-Overview-of-I/O-Data-Streams.html
Euler's formula,

\[e^{i \theta} = \cos \theta + i \sin \theta,\]

is both a beautiful and eminently useful result about the complex numbers. It leads directly to the more widely known Euler's identity,

\[e^{i \pi} + 1 = 0,\]

which shows a somewhat surprising connection between five of the most significant numbers in mathematics. There are many proofs of Euler's formula, but, as someone who has taught Calculus II for many years, the proof using Taylor series is rather close to my heart. In this post, we'll explore this proof using sympy, a Python library for symbolic mathematics, in order to avoid using calculus ourselves.

The basic idea of Taylor series is that many (though not all) functions on the real line may be represented by polynomials of infinite degree. The most well-known example of a Taylor series is the sum of the geometric series,

\[\frac{1}{1 - x} = 1 + x + x^2 + x^3 + x^4 + x^5 + \cdots\]

for \(|x| < 1\). We can illustrate this sum in sympy as follows.

from sympy.interactive import printing
printing.init_printing()

import sympy as sym

x = sym.Symbol('x')
sym.series(1 / (1 - x))

\[1 + x + x^{2} + x^{3} + x^{4} + x^{5} + \mathcal{O}\left(x^{6}\right)\]

The term

sym.Order(x**6)

\[\mathcal{O}\left(x^{6}\right)\]

here indicates that all of the omitted terms have degree greater than or equal to six. Naturally, sympy can only ever calculate finitely many terms of a function's Taylor series. (We'll discuss the impact of this limitation on our "proof" later.) We may control the number of terms of the Taylor series calculated by sympy by passing the optional argument n to sympy.series. For example, we may calculate the first ten terms of the geometric series as follows.

sym.series(1 / (1 - x), n=10)

\[1 + x + x^{2} + x^{3} + x^{4} + x^{5} + x^{6} + x^{7} + x^{8} + x^{9} + \mathcal{O}\left(x^{10}\right)\]

We now turn our attention to Euler's formula by first defining the variable theta.
theta = sym.Symbol('\\theta', real=True)
theta

\[\theta\]

We also define functions to calculate the Taylor series of \(\sin \theta\) and \(\cos \theta\) for any degree \(n\).

def sin_series(n):
    return sym.series(sym.sin(theta), n=n)

def cos_series(n):
    return sym.series(sym.cos(theta), n=n)

The first few terms of the series for \(\sin \theta\) are

n = 10
sin_series(n)

\[\theta - \frac{\theta^{3}}{6} + \frac{\theta^{5}}{120} - \frac{\theta^{7}}{5040} + \frac{\theta^{9}}{362880} + \mathcal{O}\left(\theta^{10}\right)\]

and the first few for \(\cos \theta\) are

cos_series(n)

\[1 - \frac{\theta^{2}}{2} + \frac{\theta^{4}}{24} - \frac{\theta^{6}}{720} + \frac{\theta^{8}}{40320} + \mathcal{O}\left(\theta^{10}\right)\]

Let's compare these two series to that for \(e^{i \theta}\).

def exp_series(n):
    return sym.series(sym.exp(sym.I * theta), n=n)

exp_series(n)

\[1 + i \theta - \frac{\theta^{2}}{2} - \frac{i \theta^{3}}{6} + \frac{\theta^{4}}{24} + \frac{i \theta^{5}}{120} - \frac{\theta^{6}}{720} - \frac{i \theta^{7}}{5040} + \frac{\theta^{8}}{40320} + \frac{i \theta^{9}}{362880} + \mathcal{O}\left(\theta^{10}\right)\]

The key idea behind this proof of Euler's formula is that the terms of this sum containing \(i\) are identical (apart from the \(i\)) to the terms of the Taylor series for \(\sin \theta\). Similarly, the terms not containing \(i\) are identical to the terms from the Taylor series of \(\cos \theta\). We can see this more clearly by asking sympy to collect the terms containing \(i\).

sym.collect(exp_series(n), sym.I)

\[1 + i \left(\theta - \frac{\theta^{3}}{6} + \frac{\theta^{5}}{120} - \frac{\theta^{7}}{5040} + \frac{\theta^{9}}{362880} + \mathcal{O}\left(\theta^{10}\right)\right) - \frac{\theta^{2}}{2} + \frac{\theta^{4}}{24} - \frac{\theta^{6}}{720} + \frac{\theta^{8}}{40320} + \mathcal{O}\left(\theta^{10}\right)\]

Odd placement of the leading one aside, we immediately recognize this expression as \(\cos \theta + i \sin \theta\).
In addition to this visual inspection, sympy makes it fairly easy to verify that the Taylor series agree to a fairly large number of terms.

N = 100
sym.simplify(exp_series(N) - (cos_series(N) + sym.I * sin_series(N)))

\[\mathcal{O}\left(\theta^{100}\right)\]

So we see a bit more formally that Euler’s formula is quite likely to be true. The weasel word “likely” is necessary here because we really only checked that finitely many terms of the infinite Taylor series for \(e^{i \theta}\) and \(\cos \theta + i \sin \theta\) agree. Fortunately, armed with a bit of calculus knowledge, a pen, and some paper, we could verify by hand that all of the terms coincide, if we were so inclined.

Discussion on Hacker News
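As a closing numerical spot check (my addition, not part of the original post), Python's cmath module lets us compare the two sides of Euler's formula directly at a specific angle:

```python
import cmath
import math

# Evaluate both sides of Euler's formula at theta = pi / 3.
theta = math.pi / 3
lhs = cmath.exp(1j * theta)                      # e^{i*theta}
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)
print(abs(lhs - rhs) < 1e-12)  # True: the two sides agree to machine precision
```

This checks only a single angle, of course, so it complements rather than replaces the term-by-term symbolic comparison above.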
http://austinrochford.com/posts/2014-02-05-eulers-formula-sympy.html
GNU sed 3.95 has been released. This is an alpha release for the upcoming GNU sed 4.0 release. GNU sed 3.95 merges most of the changes in the free-software super-sed project. In particular:

* Can customize line wrap width on single `l' commands
* `L' command formats and reflows paragraphs like `fmt' does.
* The test suite makefiles are better organized (this change is transparent, however).
* Compiles and bootstraps out-of-the-box under MinGW32 and Cygwin.
* Optimizes cases when pattern space is truncated at its start or at its end by `D' or by a substitution command with an empty RHS. For example, scripts like this,

      seq 1 10000 | tr \\n \  | ./sed ':a; s/^[0-9][0-9]* //; ta'

  whose behavior was quadratic with previous versions of sed, now have linear behavior.
* Bug fix: Made the behavior of s/A*/x/g (i.e. `s' command with a possibly empty LHS) more consistent:

      pattern   GNU sed 3.x   GNU sed 4.x
      B         xBx           xBx
      BC        xBxCx         xBxCx
      BAC       xBxxCx        xBxCx
      BAAC      xBxxCx        xBxCx

* Check for invalid backreferences in the RHS of the `s' command (e.g. s/1234/\1/)
* Support for \[lLuUE] in the RHS of the `s' command like in Perl.
* New regular expression matcher
* Bug fix: if a file was redirected to be stdin, sed did not consume it. So

      (sed d; sed G) < TESTFILE

  double-spaced TESTFILE, while the equivalent `useless use of cat'

      cat TESTFILE | (sed d; sed G)

  printed nothing (which is the correct behavior). A test for this bug was added to the test suite.
* The documentation is now much better, with a few examples provided, and a thorough description of regular expressions.
* New option -i, to support in-place editing a la Perl. Usually one had to use ed or, for more complex tasks, resort to Perl; this is not necessary anymore.
* Added new command-line options:
      -u, --unbuffered        Do not attempt to read-ahead more than required; do not buffer stdout.
      -l N, --line-length=N   Specify the desired line-wrap length for the `l' command. A length of "0" means "never wrap".
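As a quick aside (not part of the announcement), the BAC row of the consistency table above is easy to reproduce at a shell prompt, assuming a GNU sed with the 4.x behavior is on PATH:

```shell
# s/A*/x/g with a possibly empty LHS: the empty match before B, the
# "A" between B and C, and the empty match after C each become an x.
printf 'BAC\n' | sed 's/A*/x/g'
# prints: xBxCx   (the old 3.x behavior gave xBxxCx)
```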
* Documented command-line option:
      -r, --regexp-extended   Use extended regexps -- e.g. (abc+) instead of \(abc\+\)
* Added feature to the `w' command and to the `w' option of the `s' command: if the file name is /dev/stderr, it means the standard error (inspired by awk); and similarly for /dev/stdout. This is disabled if POSIXLY_CORRECT is set.
* Added `m' and `M' modifiers to the `s' command for multi-line matching (Perl-style); in addresses, only `M' works.
* New option `e' to pass the output of the `s' command through the Bourne shell and get the result into pattern space.
* Added `e' command to pipe the output of a command into the output of sed.
* Added `Q' command for `silent quit'; added ability to pass an exit code from a sed script to the caller.
* Added `R' command to read a single line from a file.
* Added `W' command to write the first line of pattern space to a file.
* Added `T' command for `branch if failed'.
* Added `v' command, which is a do-nothing intended to fail on seds that do not support super-sed's extensions.
* New internationalization translations added: fr ru de it el sk pt_BR sv (some of them need updating)
* The s/// command now understands the following escapes (in both halves):
      \a      an "alert" (BEL)
      \f      a form-feed
      \n      a newline
      \r      a carriage-return
      \t      a horizontal tab
      \v      a vertical tab
      \oNNN   a character with the octal value NNN
      \dNNN   a character with the decimal value NNN
      \xNN    a character with the hexadecimal value NN
  This behavior is disabled if POSIXLY_CORRECT is set, at least for the time being (until I can be convinced that this behavior does not violate the POSIX standard). (Incidentally, \b (backspace) was omitted because of the conflict with the existing "word boundary" meaning. \ooo octal format was omitted because of the conflict with backreference syntax.)
* If POSIXLY_CORRECT is set, the empty RE // now is the null match instead of "repeat the last RE match".
As far as I can tell this behavior is mandated by POSIX, but it would break too many legacy sed scripts to blithely change GNU sed's default behavior.

This release is known to have bugs. Some can remain in GNU sed 4.0, but not those that for example give problems when configuring packages, because this would seriously hinder the usability of a GNU system. Please try GNU sed 3.95 to configure your favorite packages on your favorite architecture, and report whatever goes wrong to the maintainer at [email protected] (libc-alpha CCed because the bigger test suite might interest the folks there)

URLs:
-----
FTP site for GNU sed 3.95
Web site for super-sed

Paolo Bonzini

_____________________________________________

GNU Make version 3.80 is now available for download.

The `make' utility automates the process of compilation. When the source files of a large program change, Make automatically determines which pieces need to be updated and recompiles only those files. GNU make is fully compliant with the POSIX.2 standard, but also has many powerful extensions: flexible implicit pattern rules, an extensive set of text manipulation functions, conditional evaluation of makefiles, support for parallel command execution, automatic updating of makefiles, and much more. In addition to UNIX systems, it can be built for DOS, Windows (using various toolkits), VMS, and Amiga platforms. Please see the README and INSTALL files for information on building GNU make for your system.

This release contains several bug fixes plus some powerful new features, including an $(eval ...) function, "order-only" prerequisites, compatibility with the odd SysV make $$@ syntax, a new command-line option -B or --always-make, ability to create recursive functions for use with $(call ...), new variables MAKEFILE_LIST and .VARIABLES, and more. See the NEWS file and the GNU Make User's Manual, contained in the distribution, for full details on user-visible changes.
Bugs and problems should be reported to the mailing list, or entered into the online bug tracking system at Savannah. You can also find information for accessing the latest versions of GNU make via CVS at the Savannah site. Requests for help can be sent to , or one of the gnu.utils.bug or gnu.utils.help USENET newsgroups.

The complete distribution is available from the GNU ftp site and its mirrors. Please see: for a complete list of international mirror sites.

make-3.80.tar.bz2 is 921645 bytes
make-3.80.tar.gz is 1211924 bytes

MD5 checksums:
0bbd1df101bc0294d440471e50feca71 make-3.80.tar.bz2
c68540da9302a48068d5cce1f0099477 make-3.80.tar.gz

Have fun!

-------------------------------------------------------------------------------
Paul D. Smith
Find some GNU make tips at:
"Please remain calm...I may be mad, but I am a professional." --Mad Scientist

_____________________________________________

Release 2.3 of the GNU C library is now available at and (hopefully soon) and all the mirror sites around the globe. The new files are

glibc-2.3.tar.bz2 (also .gz)
glibc-linuxthreads-2.3.tar.bz2 (also .gz)
glibc-2.2.5-2.3.diff.bz2 (also .gz)

and for those following the test releases

glibc-2.2.94-2.3.diff.bz2 (also .gz)

This release introduces a number of new features but not too many. glibc 2.2 was already mostly complete. Instead this release focuses on making functionality compliant with standards and on performance optimizations. The user visible changes include:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Version 2.3

* Masahide Washizawa contributed iconv modules for IBM1163 and IBM1164

_____________________________________________

cce21237f220906c5af2a71b81db33308d37dcbe bison-1.50.tar.gz

Please report bugs by email to . Here are the NEWS file entries for this release:

* GLR parsing
  The declaration %glr-parser causes Bison to produce a Generalized LR (GLR) parser, capable of handling almost any context-free grammar, ambiguous or not.
The new declarations %dprec and %merge on grammar rules allow parse-time resolution of ambiguities. Contributed by Paul Hilfinger.

  Unfortunately GLR parsing does not yet work properly on 64-bit hosts like the Alpha, so please stick to 32-bit hosts for now.

* Output Directory
  When not in Yacc compatibility mode, when the output file was not specified, running `bison foo/bar.y' created `foo/bar.c'. It now creates `bar.c'.

* Undefined token
  The undefined token was systematically mapped to 2, which prevented the use of 2 by the user. This is no longer the case.

* Unknown token numbers
  If yylex returned an out of range value, yyparse could die. This is no longer the case.

* Error token
  According to POSIX, the error token must be 256. Bison extends this requirement by making it a preference: if the user specified that one of her tokens is numbered 256, then error will be mapped onto another number.

* Verbose error messages
  They no longer report `..., expecting error or...' for states where error recovery is possible.

* End token
  Defaults to `$end' instead of `$'.

* Error recovery now conforms to documentation and to POSIX
  When a Bison-generated parser encounters a syntax error, it now pops the stack until it finds a state that allows shifting the error token. Formerly, it popped the stack until it found a state that allowed some non-error action other than a default reduction on the error token. The new behavior has long been the documented behavior, and has long been required by POSIX. For more details, please see .

* Traces
  Popped tokens and nonterminals are now reported.

* Larger grammars
  Larger grammars are now supported (larger token numbers, larger grammar size (= sum of the LHS and RHS lengths), larger LALR tables). Formerly, many of these numbers ran afoul of 16-bit limits; now these limits are 32 bits on most hosts.

* Explicit initial rule
  Bison used to play hacks with the initial rule, which the user does not write.
It is now explicit, and visible in the reports and graphs as rule 0.

* Useless rules
  Before, Bison reported the useless rules, but, although not used, included them in the parsers. They are now actually removed.

* Useless rules, useless nonterminals
  They are now reported, as a warning, with their locations.

* Rules never reduced
  Rules that can never be reduced because of conflicts are now reported.

* Incorrect `Token not used'
  On a grammar such as

      %token useless useful
      %%
      exp: '0' %prec useful;

  where a token was used to set the precedence of the last rule, bison reported both `useful' and `useless' as useless tokens.

* Revert the C++ namespace changes introduced in 1.31, as they caused too many portability hassles.

* Default locations
  By an accident of design, the default computation of @$ was performed after another default computation was performed: @$ = @1. The latter is now removed: YYLLOC_DEFAULT is fully responsible for the computation of @$.

* Token end-of-file
  The token end of file may be specified by the user, in which case the user symbol is used in the reports, the graphs, and the verbose error messages instead of `$end', which remains the default. For instance

      %token YYEOF 0

  or

      %token YYEOF 0 "end of file"

* Semantic parser
  This old option, which has been broken for ages, is removed.

* New translations
  Brazilian Portuguese, thanks to Alexandre Folle de Menezes.
  Croatian, thanks to Denis Lackovic.

* Incorrect token definitions
  When given `%token 'a' "A"', Bison used to output `#define 'a' 65'.

* Token definitions as enums
  Tokens are output both as the traditional #define's, and, provided the compiler supports ANSI C or is a C++ compiler, as enums. This lets debuggers display names instead of integers.
* Reports
  In addition to --verbose, bison supports --report=THINGS, which produces additional information:
  - itemset: complete the core item sets with their closure
  - lookahead: explicitly associate lookaheads to items
  - solved: describe shift/reduce conflict solving
  Bison used to systematically output this information on top of the report. Solved conflicts are now attached to their states.

* Type clashes
  Previous versions don't complain when there is a type clash on the default action if the rule has a mid-rule action, such as in:

      %type bar
      %%
      bar: '0' {} '0';

  This is fixed.

* GNU M4 is now required when using Bison.
http://www.linuxtoday.com/developer/2002100900726NWSWRL
Hello Daniel. Thanks for your interest in discussing this. Emailing this list is indeed the right way to do so. Having answered that meta question, I am changing the email Subject to reflect the substantive topic.

Daniel J. Lacks, PhD wrote:
> I am interested in the design topic for SavePoints. I was wondering
> if anyone considered server-side stashes instead of client-side?

I am currently working on Shelving and Checkpointing, features which are more or less what the SavePoints page envisions (although not necessarily the implementation it describes). References: "Commit shelving" "Commit checkpointing"

These are deliberately client-side features for the main reasons of speed and offline usability. Speed is critical: one of the main use cases is to be able to switch *quickly* between small tasks, and this has to be useful for people using a relatively slow WAN connection.

Have we thought of the server-side possibility? Of course. (See for example the 2nd/3rd comments in those issues.) But Subversion already supports saving work to a server-side branch, so we need to ask, what do you need for your use case, beyond the present ability to use branches? So let's explore further.

> The stash would work similar to a commit
> except it would check-in code perhaps in a hidden or protected branch
> within the svn:stash workspace.

Making namespaces of branches that are 'hidden' or 'protected' is something that can potentially be done with server authz rules, but is this important for you? Why? If you just designate '^[/project]/branches/user/USERNAME/*' as the place for USERNAME's branches, with no special protection, does that work?

> This would allow developers to not only
> swap workspaces, but to swap them across multiple physical machines or
> VMs.
> It is also possible to share those changes with others as well, for
> example the basic commands to show SavePoints may only show your save
> points, but perhaps there can be an optional argument to show anyone’s
> SavePoints either on your branch or any branch. I imagine that swapping
> to a SavePoint would first work like a switch command to get you to the
> same point you were (optionally), then a pseudo-merge command to grab
> the changes and copy them into your local directory. It seems like such
> a capability may be built reusing some existing functionality.

It certainly can be done by (re)using existing functionality :-)

    # save my local changes
    svn copy -m "save" . ^/branches/save/$USER/foo

    # revert my local changes, now they're saved as 'foo'
    svn revert -R .

    # show my save points
    svn list ^/branches/save/$USER

    # show everyone's save points
    svn list ^/branches/save -R

    # apply save point 'foo' to my WC
    svn merge ^/branches/save/$USER/foo

There's not even a need for separate 'switch' and 'pseudo-merge' steps, if we assume my WC is already on the same branch (e.g. trunk) from which the save-point branch 'foo' was created.

Seriously, though, I can say some things. I am sure you wish for a user-friendly command-line interface to access this scheme, such as "svn shelve --list" (which would translate to "svn list ^[/project]/branches/save/$USER" if that is the underlying storage scheme).

Subversion is intended to be a system which has a core part with libraries and a simple command-line interface, that is then extended upwards with third-party interfaces such as TortoiseSVN, Cornerstone, Visual Studio / IntelliJ / NetBeans IDE integrations, and others. Maybe this kind of use of branches for 'shelving' is more the job of a higher layer of software built on top of Subversion core, or an alternative command-line client. Not all features like this should be built in to the core. One of Subversion's strengths is the simplicity of its command set.
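To illustrate, those one-liners can be wrapped in a tiny shell helper. This is a sketch only: the save_url name is made up here, and the branches/save layout comes from the examples above; neither is an actual svn feature.

```shell
# Sketch only: compose the repository URL for a named save point,
# using the ^/branches/save/$USER/ layout from the examples above.
save_url() {
    printf '^/branches/save/%s/%s\n' "$USER" "$1"
}

# A "shelve" would then be roughly:
#   svn copy -m "save" . "$(save_url foo)" && svn revert -R .
USER=alice save_url foo
# prints: ^/branches/save/alice/foo
```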
Of course this cuts both ways: if this functionality is commonly wanted then there should be an easier way to access it. One difficulty here is we (this group of developers subscribed to dev@) don't really get involved much in designing Subversion features outside the part that we produce ourselves. Maybe we could change this. If we draw a diagram showing the core and third-party Subversion software, showing what is in the core and what sort of features we expect the third-party software to provide, that just might incite those third-party developers to go and build those features.

Another angle, touched on by Paul Hammant in his reply, is portability of change-sets. Let's say we implement client-side shelving. The next logical request is certainly going to be a way to transfer those shelved changes easily to a branch from where they can be moved to another client machine, shared, backed up, etc. And, in his interesting case, transferred to a code review system which is separate from the Subversion server. So "standardized server-side handling of such things", as he puts it. What can we do in this direction?

One thing we can do is make a Subversion 'patch' format that is a complete serialized representation of any potential Subversion commit. The 'svnadmin dump' format is a serialized representation of an actual commit, based on a specific previous revision number. For a 'patch' representing a potential commit, we don't know the eventual base revision yet, and so we need the sort of flexibility in applying it that a 'context diff' gives. We need to meld the 'context diff' idea (which originally is only defined for plain text) with the ability to specify all possible Subversion operations including copies and moves, directories, and properties.

Another thing we can do is look at what sort of commands and infrastructure will be needed to refer to these change-sets and send and receive them.
For example, 'shelving' should have 'import' and 'export' commands to bring change-sets into and out of the 'shelved' space, and also there should be ways to import a change-set directly to a new branch and export one from a branch.

I say this because if you build a system where there are several concepts like 'shelved changes' and 'changelists' and 'commits' but for each concept there is only exactly one thing you can do with it (for a shelved change: you can unshelve into the wc, only), that system becomes limited and clumsy. Some things you can do require two or three steps, when logically they should only require one step.

So, yes, lots of interest. What do you (all) think?

- Julian
https://mail-archives.apache.org/mod_mbox/subversion-dev/201711.mbox/%[email protected]%3E
IntelliJ IDEA 2021.2 introduces project-wide analysis for Java, new actions that can be triggered when you save changes, a new UI for managing Maven and Gradle dependencies, and other useful updates. For more details on the new feature highlights, you can watch our video overview below or read on! In this release, we’re introducing a new feature for tracking errors throughout the whole project before compilation – project-wide analysis. When you click the dedicated icon in the Problems tool window, you enable a check that is performed on every code change. When the process is over, the IDE displays all the found errors, including those that are detected only by scanning the whole project. The feature works with small and medium-size projects. We’ve added a number of actions that will be initiated by saving the project, including reformatting code and optimizing imports. All of them are conveniently gathered together in Preferences / Settings | Tools | Actions on Save. It is easy to configure them from there by ticking the necessary checkboxes. If you’d like to adjust the settings for any action more precisely, simply hover over an action and click the configuration link. Both IntelliJ IDEA Community Edition and IntelliJ IDEA Ultimate now include Package Search, a powerful new interface to manage your project dependencies. With Package Search, you can find new dependencies, easily add them, and manage existing ones. The new plugin will show you information about each dependency, including whether any updates are available. We have also added an inspection that lets you apply available updates directly in the editor. Currently, Package Search works with Maven and Gradle projects. Experimental support for sbt projects is also available with Scala plugin EAP versions. You can read more about it here. Our inspections and quick fixes are not just helpful for coding but also described in detail. 
The updated descriptions explain what changes the inspections suggest and the reasoning behind them. Some inspections come with usage examples. Check them out in Preferences/Settings | Editor | Inspections.

We’ve made diagrams more informative – they now come with the Structure view containing a map of your diagram with a small preview of the selected block and its neighbors. The new Structure view supports scaling, canvas moving, magnifier mode, layout change, and exporting to an image.

Your project’s copyright notice can now include both the project creation year and the current version year. The updated template that contains both dates is available in Preferences/Settings | Editor | Copyright | Copyright profile.

IntelliJ IDEA 2021.2 brings some helpful updates to its Markdown support. It is possible to convert .md files from/to different formats (.html, .docx, …).

IntelliJ IDEA can detect Eclipse projects stored locally on your machine, and it allows you to open them from the Welcome screen. If it is your first IDE launch, select the Open existing Eclipse projects option. If not, automatically detected Eclipse projects will appear in the dedicated node among the recent projects.

If you need to configure some use-case-specific options in IntelliJ IDEA, you can do it in the new Advanced Settings node in Preferences/Settings. For example, you can add a left margin in Distraction-free mode or set the caret to move down after you use the Comment with Line Comment action.

It is now easier to drag and drop a tool window to the desired place within the main IDE window or in a separate window. You can drag it by clicking and holding the tool window name bar and drop it in any highlighted place.

IntelliJ IDEA automatically cleans up any cache and log directories that were last updated more than 180 days ago. This process doesn’t affect system settings and plugin directories. You can initiate the process manually via Help | Delete Leftover IDE Directories.
If your project uses a framework that works in IntelliJ IDEA via a plugin, the IDE will notify you and offer to enable it directly from this notification. We’ve simplified navigation in Preferences/Settings by adding arrows to the top right-hand corner of the window. They allow you to quickly jump back and forth between the sections you’ve opened. When any product updates appear in the Toolbox App, your IDE will inform you. If there is a new version available for download, you’ll be able to upgrade to it right from IntelliJ IDEA. Toolbox App 1.20.8804 or later is required to use this feature. IntelliJ IDEA has a Power Save mode to help you extend the battery life on your laptop. To make this mode easier to access, we’ve made it possible to manage it from the status bar. Right-click on the status bar and select Power Save Mode – you’ll see a new icon appear in the bottom right corner of the IDE. Click on this icon whenever you want to turn the mode on or off. IntelliJ IDEA 2021.2 includes a number of helpful updates for coding with the enabled screen reader mode on macOS. We’ve voiced available suggestions for code completion, the content of the selected combo box and combo box lists, and the results of your queries in Search Everywhere. We continue to look into better UI responsiveness and reducing unexpected freezes. In this release, we’ve managed to avoid UI blocks when using context menus, popups, and toolbars. We’ve also moved certain operations that require indices off the UI thread, which should help prevent freezes in other situations. It is now easier to distinguish between public, protected, and private Java members (methods, fields, and classes) as you can configure the color settings for them in Preferences/Settings | Editor | Color Scheme by unfolding the Visibility node. Configuring a new JavaFX project just got easier. 
In just two steps, you can add a project SDK, language, desired build system, test framework, and one or several frequently used libraries, which come with short descriptions. We’ve added a range of new inspections to address particular use cases in Data Flow Analysis. For example, there are inspections to track a floating-point range or a collection size on update methods. The new Write-only object inspection warns you when you modify an object but never query it for some custom classes defined in your project and the standard library. To learn more about other new and improved inspections, read our blog post. Starting from v2021.2, Kotlin code completion works based on the machine learning mechanism by default. Code suggestions are prioritized more carefully as the IDE relies on the choices of thousands of real users in similar situations. You can configure ML-assisted completion in Preferences/Settings | Editor | Code Completion. Previously you had to wait for code analysis to finish before you could start running your tests. In the current version, you can launch tests immediately after opening the file by clicking the Run test icon in the gutter. We’ve introduced some useful improvements and updates to our coroutine agent in the debugger. The coroutines agent is now available via the Coroutines tab in the Debug tool window. It works for Java run configurations with a dependency on kotlinx.coroutines, and Spring and Maven run configurations. We’ve also fixed an issue when local variables were not used after passing a suspension point and disappeared in the Variables view of the Debugger tool window. Don’t waste another minute! While the IDE is indexing a project you can run and debug your application. The buttons associated with Run/Debug Configuration are active during indexing. In IntelliJ IDEA 2021.1, we’ve introduced WSL 2 support and the Run Targets feature. In v.2021.2, you can use these features for Kotlin. 
In this release, we’ve added a useful inspection that helps you simplify the syntax and combine several calls into one when calling methods in a chain inside a collection. In previous versions, you manually typed buildString to customize your code. Our new intention action allows you to apply it automatically in just two clicks. The main focus of this release has been Scala 3 support, which has been significantly improved. Indexing is now fast, precise, and version-agnostic. You can now create both sbt and .idea-based Scala 3 projects, as well as Scala 3 SDKs, normally. The editor can handle significant indentation better. We've supported Scala 3 constructs in Scala 2 projects ( -Xsource:3). There are improvements in the debugger, formatter, REPL, auto-import, enums, extension methods, and many others! (That said, please keep in mind Scala 3 support is still a work in progress and not yet perfect.) As is customary in the IntelliJ Platform, the Scala plugin has built-in error highlighting. It's fast, lightweight, and supports all the standard IntelliJ IDEA features. However, because the Scala type system is so complex, the algorithm can sometimes report false errors. Although we're constantly working on improving the implementation, the ability to use the Scala compiler for error highlighting may come in useful in some code bases. Please note that, even though the compiler-based approach is more precise, it is slower, requires more resources, and doesn't support features such as type diffs, quick-fixes, and inspections. So unless there are lots of false errors in the code, the built-in error highlighting is recommended. IntelliJ IDEA lets you preview HTML files in a browser using the built-in web server. Now, it will automatically update the pages in a browser as you edit and save your HTML, CSS, and JavaScript files. To get started, open an HTML file in the editor, hover over it, and click on the icon for the browser you want to use – all browsers are supported. 
You will no longer need to waste time on refactoring useState values and functions one by one – IntelliJ IDEA can now rename both for you! Place the caret on a state value and press Shift+F6 or go to Refactor | Rename from the right-click context menu.

Did you know that your IDE can add missing import statements as you complete ES6 symbols? Now it can do the same for CommonJS modules – require imports will be inserted on code completion.

Async Profiler is the profiling tool of choice for many developers because of its accuracy and reliability. IntelliJ IDEA now fully supports the profiler on Windows and Apple M1, in addition to Linux and non-M1 macOS, which means you can now use it in most environments.

IntelliJ IDEA has support for Async Profiler 2.0. It works via the new Async Profiler configuration, combining the power of the CPU and Allocation profilers. In the Flame Graph, Call Tree, and Method List tabs, the new Show dropdown list lets you choose whether you want to be shown CPU samples or memory allocations. The Timeline displays both of them. You can filter what to show by using the controller in the top right-hand corner.

In IntelliJ IDEA 2021.2, when you double-click an item on the Classes tab, the Retained Objects tab shows data for the selected item in the form of a sunburst diagram. If you are more used to analyzing data displayed in a tree, you can now find it in the Dominator Tree tab.

If you want to create a custom JDK that contains only the modules and dependencies you need when working on a Jigsaw project, you can add new JLink artifacts to your project in the Project structure window.

We’ve reworked the UI of the Gradle Run/Debug Configurations. The basic parameters are now conveniently grouped in one screen. You can add more options based on your needs.

For project files stored in WSL 2, we use a daemon that transfers the content of the files between Linux and Windows via a socket.
This allows us to increase the indexing speed, as it depends on how fast the IDE reads the file content. Depending on the language you use, the speed increase may vary. For example, for JS-like languages, indexing now takes a third of the time. It is possible to execute Ant tasks in WSL 2. In v2021.2, we’ve expanded the list of possible pre-commit actions with the ability to execute tests. When you tick the Run Tests checkbox in the Before Commit section, your IDE will test the applied changes and notify you if anything goes wrong. We’ve also added the ability to customize the Analyze code and Cleanup options by clicking Choose profile next to them. The progress and results of all the pre-commit checks appear in the Commit area, without disturbing you with additional modal windows. IntelliJ IDEA 2021.2 offers a way to secure your commits by enabling Git commit signing with GPG. To do this, go to Preferences/Settings | Version Control | Git, click Configure GPG Key, and then select it from the drop-down list. If you’re using a GPG key for the first time, you’ll need to configure it. We will no longer use Default changelists as the name for the node that stores uncommitted changes in new projects. Starting from version 2021.2, it is called Changes. Additionally, Git operations will no longer trigger automatic creation of changelists. IntelliJ IDEA displays the difference between the initial and changed files in the editor by default, no matter where you’ve invoked the Show Diff action. If tracking changes in a separate window is more convenient to you, just drag the desired file from the editor. You can quickly find the necessary text in the Local History revisions by typing the query in the search field in the Local History dialog. The in-built terminal now allows you to select the cursor shape. It also offers support for Use Option as Meta key, which lets the Option (⌥) key on the keyboard act as a meta modifier that can be used in combination with other keys. 
Previously when you stopped at a breakpoint, stepped through the code, navigated between frames, or used the "prev/next frame" actions, the IDE opened the files in multiple tabs. In v2021.2, you can enable the preview tab feature for Debugger in Settings/Preferences | General | Editor Tabs. If it is on, these files will open successively in one tab. IntelliJ IDEA lets you display microservice interactions in a diagram, which you can build by clicking the respective icon in the Endpoints tools window. This new diagram offers the option to track which client calls a particular service and navigate to this call in your code. To do so, just click on an arrow that connects the blocks in the diagram. The diagram is available in Java and Kotlin projects if you use Spring, Micronaut, Quarkus, or Helidon. The new Migrate... refactoring helps quickly and painlessly migrate a project or module from Java EE to Jakarta EE. After you initiate it, the Refactoring Preview shows all the found usages of Java EE imports. You can then check through and finalize the process. When you create a new Spring Initializr project, the IDE will download shared indexes automatically, reducing indexing time and speeding up IDE startup. The checkbox that turns this feature on is located on the second screen of the New Project wizard. Please note it won’t work if you have disabled Shared Indexes in Settings/Preferences | Shared Indexes. In this version, we’ve introduced support for an Entity Graph which you can define with the @NamedEntityGraph annotation. Your IDE allows you to specify a unique name and the attributes (@NamedAttributeNode) for this annotation using code completion, error detection, and navigation to the related entity by clicking on an attribute. URL navigation in JavaScript and TypeScript has been significantly improved.
For client-side code (for Angular or Axios), URL references have been added for the $http service and HttpClient request method calls, and URL completion works based on available server-side frameworks and OpenAPI specifications. For server-side, Express users can see Route handlers in the Endpoints tool window and search for Express route declarations via Navigate | URL Mapping. In this version, we’ve added support for yet another framework – gRPC. We are planning to introduce more features for working with it. For now, it is possible to see the gRPC endpoints in the Endpoints tool window. Stay tuned for more updates! Ktor, a web application framework for creating connected systems, is bundled with IntelliJ IDEA Ultimate. Right from the welcome screen, you can create a new Ktor project for developing server-side or client-side applications and configure the basic project settings and various features supported by Ktor. The Protocol Buffers plugin is bundled with IntelliJ IDEA Ultimate, and the JetBrains team fully maintains it. If you are using IntelliJ IDEA Community Edition, you can still download and install Protocol Buffers via Preferences/Settings | Plugins | Marketplace. It is possible to connect to Docker via SSH. To configure an SSH connection, go to Preferences / Settings | Build, Execution, Deployment | Docker, click the On SSH machine radio button, click …, and then enter the SSH connection parameters in the window that appears. It is possible to display Docker Compose applications in the Services tool window even if they are not running. To do this, just click the cycle arrows icon in the editor window. We’ve implemented new icons for the different states of your Docker Compose services. To get accustomed to them, you can read the tooltips that appear when you hover over each icon. We’ve implemented some changes to the Docker Compose logs. 
Every service node features a log, and the container logs include the option to show timestamps and previous sessions. You can disable these options in Preferences/Settings | Build, Execution, Deployment | Docker | Console by unticking the Fold previous sessions in the Log console checkbox. Additional options for Docker Compose are now available in Run/Debug Configurations. You can Enable BuildKit, Enable compatibility mode, and Specify project name when you click Modify options. When you name your project, you can call it whatever you want, and by default it will no longer inherit its name from the folder that the Docker Compose application is located in. Managing your Docker containers is now even easier thanks to the new buttons that allow you to start, pause, unpause, and restart your containers. In addition to this, you can apply the actions to several containers at once! When you delete Docker images with dependencies, you can specify which dependencies you want to get rid of and which should stay. We’ve added two new nodes to the Services tool window: Networks and Volumes. The first node contains all the networks that are not related to the Docker Compose application. The second includes all the Docker Volumes. It is easy to delete volumes in the same way as you would images, as described in the section above. In IntelliJ IDEA 2021.2, you can use the alias field that belongs to the dependencies section in Chart.yaml (api v2) or in requirements.yaml (api v1). This field specifies an alternative name for the current dependency. You may need to use alias if an existing dependency is used several times and you want to distinguish between these usages. In addition, if the chart name uses symbols that are not valid in GoTemplate identifiers, alias can also help you fix it. Sometimes when you work with a Kubernetes cluster, you will be granted access to particular namespaces, but you won’t receive the list of all of the cluster namespaces.
In this case, you can now specify the list of available namespaces in Preferences / Settings | Tools | Kubernetes. We have made it easier to manage multiple namespaces and quickly find the ones you need the most. It is now possible to mark your favorite namespaces with a star. They will then appear at the top of the list, while the remaining namespaces will be sorted alphabetically. It is now possible to generate a DDL data source based on a real one. The DDL files will be created on the disk and the new data source will be based on them. That way you’ll always be able to regenerate these files and refresh the DDL data source. Step-by-step instructions on how to apply this feature are available in our blog post. When a query returns no data, there’s no need for the Services tool window to appear if it was hidden already. Now you can define for yourself which operations make the Services tool window appear in Preferences / Settings | Tools | Database | General. Being able to insert a random email, name, or phone number is important when developing unit tests. As a part of our Test Automation Kit, the new Test Data plugin brings a lot of useful actions that can help you generate random data. Use the Generate menu (Cmd+N) to see all available options. If you need a specific format, you can always create your own custom data format based on regular expressions or Velocity templates. All custom data types are available in bulk mode and can be shared within your team. It is easy to track a job’s progress by just looking at the commits list, as we’ve introduced icons for Space job statuses in the Log tab of the Git tool window. If you click on an icon, the IDE will open a popup with the automation info for that job. If you don’t need the status information, click the eye icon above the log and select Show Columns | Space Automation. It is now more convenient to communicate with teammates in Space code reviews, as you can mention them with @ followed by the colleague’s name.
This minor but helpful feature works in the timeline and in code comments. Your IDE can now show related branches in the selected code review. You can see the list of branches that contain the commits made while working on the current issue in the Details tab. You can now understand the logic behind your teammate’s actions even more precisely, as you’ll see what code completion suggestions the person you are following uses. This feature works when you are in Following mode during your Code With Me session. IntelliJ IDEA 2021.2 features a re-worked undo functionality that significantly improves the collaborative programming experience. The revamped undo logic enables both guests and the host to reverse their individual changes in the code. This means that upgrading to the 2021.2 version will allow you to avoid unpleasant situations where one developer accidentally deletes changes made by their peers. This improvement is particularly useful in pair and mob programming scenarios. One of the most eagerly-awaited features, screen sharing, is finally here. In v2021.2, participants can share an application window from their computer screen, not just their JetBrains IDE, to help participants collaborate better. The ability to share specific open ports with participants via an integrated proxy is now available in IntelliJ IDEA 2021.2! So, if a host runs applications on a specific port, the guests can access it via localhost on their machine. Starting with this version, you can enjoy the fully localized IntelliJ IDEA UI in Chinese, Korean, and Japanese. Localization is available as a non-bundled language pack plugin, which can be easily installed in your IDE. More than 1.5 million users have started using the partially localized EAP version of our language packs. Now you can enjoy the full localization experience! The Android plugin was upgraded to v4.2.0.
After analyzing how often you use several plugins, we decided to unbundle some of them, including Resource Bundle Editor, Drools, JSP Debugger Support, CoffeeScript, Spring Web Flow, Spring OSGI, Arquillian, AspectJ, Guice, Helidon, Emma, and EJB. If you still need any of these plugins, please install them manually from JetBrains Marketplace.
https://www.jetbrains.com/idea/whatsnew/?rss
Hi Everybody. I wrote this program to experiment with dynamic binding. One class inherits from another, which inherits from another. The user creates a new object of their choice at run time and 2 functions within that object should be called, but it doesn't seem to be working out that way. Maybe the problem is with the pointer? Can anyone help? Thanks for your time!

#include <iostream>
using namespace std;

class shape
{
public:
    shape() { cout << "Shape Constructor\n"; }
    ~shape() { cout << "Shape Destructor\n"; }
    int calcArea();
    void speak();
    void setArea(int);
    void nameOfShape();
protected:
    int itsArea;
};

int shape::calcArea() { return itsArea; }
void shape::speak() { cout << "How can a shape speak?\n"; }
void shape::setArea(int Area) { itsArea = Area; }
void shape::nameOfShape() { cout << "shape\n"; }

class rectangle : public shape
{
public:
    rectangle() { cout << "Rectangle Constructor\n"; }
    ~rectangle() { cout << "Rectangle Destructor\n"; }
    int calcArea();
    void nameOfShape();
};

void rectangle::nameOfShape() { cout << "rectangle\n"; }

int rectangle::calcArea()
{
    int Area, width, length;
    cout << "What's the width?\n";
    cin >> width;
    cout << "What's the length?\n";
    cin >> length;
    Area = width * length;
    return Area;
}

class square : public rectangle
{
public:
    square() { cout << "Square Constructor\n"; }
    ~square() { cout << "Square Destructor\n"; }
    int calcArea();
    void nameOfShape();
};

void square::nameOfShape() { cout << "square\n"; }

int square::calcArea()
{
    int c, side;
    cout << "How long is the side of this square?\n";
    cin >> side;
    c = side * side;
    return c;
}

int callCalcArea(shape*);
void callNameShape(shape*);

int main()
{
    shape* ptr = 0;
    int choice;
    while (1)
    {
        bool fQuit = false;
        cout << "(0)Quit (1)Shape (2)Rectangle (3)Square:\n";
        cin >> choice;
        switch (choice)
        {
        case 0: fQuit = true; break;
        case 1: ptr = new shape; break;
        case 2: ptr = new rectangle; break;
        case 3: ptr = new square;
        default: break;
        }
        if (fQuit == true) break;
        cout << "The area of the new " << callNameShape(ptr) << "is " << callCalcArea(ptr) << ".\n";
    }
    return 0;
}

int callCalcArea(shape *pShape1) { pShape1->calcArea(); }
void callNameShape(shape *pShape2) { pShape2->nameOfShape(); }
https://www.daniweb.com/programming/software-development/threads/289285/problem-with-pointing-to-an-object-created-in-run-time
Using JDBC in Creator By jfbrown on Nov 16, 2005 There's nothing special about using JDBC from within a Creator Web Application - just get your connection and away you go. You can google the net for all sorts of advice on using JDBC. And there's always those book thingies. Some notes: Getting the connection from the app server's connection pool:

import javax.sql.DataSource;
import java.sql.Connection;
.....
Connection conn = null;
// the following should be in a try-catch...
javax.naming.Context ctx = new javax.naming.InitialContext();
DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/Travel");
conn = ds.getConnection();

Setup your connection Because you are likely using your app/web server's connection pool, set your connection properties at the start. For example, to assure the connection behaves the way you desire, call setAutoCommit(). Cleanup your connection and JDBC objects when finished. Commit or rollback your transaction. Close all your JDBC objects. Close each Statement and ResultSet, then close the connection. Put this code in a finally block to make sure it's executed. Don't assume a close() on the connection will do all of the above. If the connection is from a pool, the jdbc driver never sees the close(). Can I use the same Data Source for my CachedRowSet(s) and for my own JDBC stuff? Yes. Having Creator set up your Data Source in the bundled App Server. Synchronizing a CachedRowSet with my manual JDBC changes. When should I use CachedRowSets and when should I use JDBC?
Sample

Connection conn = null;
Statement sqlStatement = null;
ResultSet rs = null;
try {
    javax.naming.Context ctx = new javax.naming.InitialContext();
    DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/Travel");
    conn = ds.getConnection();
    // setup the connection
    conn.setAutoCommit(false);
    // execute the query
    sqlStatement = conn.createStatement();
    rs = sqlStatement.executeQuery("select count(*) from TRIP");
    rs.next();
    int rows = rs.getInt(1);
    conn.commit();
    info("Rows in table TRIP: " + Integer.toString(rows));
} catch (Exception ex) {
    error("Error counting rows: " + ex.getMessage());
    try {
        if (conn != null) {
            conn.rollback();
        }
    } catch (SQLException sqle) {
        log("Error on rollback " + sqle.getMessage());
    }
} finally {
    // close the ResultSet
    if (rs != null) {
        try {
            rs.close();
        } catch (Exception ex) {
            log("Error Description", ex);
        }
    }
    // close the statement
    if (sqlStatement != null) {
        try {
            sqlStatement.close();
        } catch (Exception ex) {
            log("Error Description", ex);
        }
    }
    if (conn != null) {
        // cleanup and close the connection.
        try {
            conn.close();
        } catch (Exception ex) {
            log("Error Closing connection ", ex);
        }
    }
}
https://blogs.oracle.com/jfbrown/entry/using_jdbc_in_creator
Contents
- 1 Introduction
- 2 Seaborn Heatmap Tutorial
- 2.1 Syntax for Seaborn Heatmap Function : heatmap()
- 2.2 1st Example – Simple Seaborn Heatmap
- 2.3 2nd Example – Applying Color Bar Range
- 2.4 3rd Example – Plotting heatmap with Diverging Colormap
- 2.5 4th Example – Labelling the rows and columns of heatmap
- 2.6 5th Example – Annotating the Heatmap
- 2.7 6th Example – Heatmap without labels
- 2.8 7th Example – Diagonal Heatmap with Masking in Seaborn
- 3 Conclusion

Introduction In this article, we'll go through a tutorial of the Seaborn heatmap function sns.heatmap() that will be useful for your machine learning or data science projects. We will learn about its syntax and see various examples of creating heatmaps using the Seaborn library for easy understanding for beginners. Seaborn Heatmap Tutorial A heatmap is a visualization that displays data in a color-encoded matrix. The intensity of color varies based on the value of the attribute represented in the visualization. In Seaborn, the heatmap is generated by using the heatmap() function; its syntax is explained below. Syntax for Seaborn Heatmap Function : heatmap() Parameters Information
- data : rectangular dataset. Here we provide the data for plotting the visualization.
- vmin, vmax : floats, optional. This parameter is used for anchoring the colormap.
- cmap : matplotlib colormap name or object, or list of colors, optional. The cmap parameter is used for mapping data values.
- center : float, optional. This parameter helps in changing the center position of the heatmap.
- annot : bool or rectangular dataset, optional. This helps in annotating the heatmap with values if set to True; otherwise values are not provided.
- linewidths : float, optional. We can set the width of the lines that divide the cells.
- linecolor : color, optional. This helps in setting the color of each line that divides heatmap cells.
- cbar : bool, optional. With this parameter, we can choose whether to display the color bar or not.
- square : bool, optional. Through this parameter, we can ensure that each cell is an equally sized, square-shaped cell.
- xticklabels, yticklabels : “auto”, bool, list-like, or int, optional. Here we can plot the column names in the visualization, if passed as True.
- ax : matplotlib Axes, optional. Here we specify the axes on which the plot is drawn.

Following this, we’ll look at different examples of creating heatmaps using the seaborn library. 1st Example – Simple Seaborn Heatmap In this 1st example, we will generate the data randomly using a NumPy array and then pass this data to the heatmap() function. First of all, we have to import the NumPy library and the seaborn library, and also set the theme using the seaborn library.

import numpy as np
np.random.seed(0)
import seaborn as sns
sns.set_theme()
uniform_data = np.random.rand(15, 20)
ax = sns.heatmap(uniform_data, cmap="YlGnBu")

2nd Example – Applying Color Bar Range In this example, we are using the same data as in the 1st example, but this time we pass the vmin and vmax parameters to set the color bar range. We have restricted the color bar range from 0 to 1.

ax = sns.heatmap(uniform_data, vmin=0, vmax=1, cmap="Greens")

3rd Example – Plotting heatmap with Diverging Colormap The 3rd example showcases the implementation of a heatmap that has a diverging colormap. This means the center of the data is at ‘0’. As we can see in the visualization, the values above and below ‘0’ have different shades of color.

normal_data = np.random.randn(16, 18)
ax = sns.heatmap(normal_data, center=0, cmap="PiYG")

4th Example – Labelling the rows and columns of heatmap The current example will use one of the in-built datasets of seaborn known as the flights dataset. We load this dataset and then we create a pivot table using three columns of the dataset. After this, we use the sns.heatmap() function to plot the heatmap.
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
ax = sns.heatmap(flights, cmap="BuPu")

5th Example – Annotating the Heatmap In this example, we look at the way through which annotation of cells can be done, with the value of each cell displayed in it. Along with this, rows and columns are also labeled. For annotation, we are using the fmt parameter. With the help of this parameter, we can add not just numeric values but also textual strings. We also have the option of using the annot parameter, but it does not allow adding strings. We can also alter the width of the lines dividing each cell in the heatmap.

ax = sns.heatmap(flights, annot=True, fmt="d", cmap="YlGnBu", linewidths=.6)

6th Example – Heatmap without labels This example shows how we can build a heatmap without labels. Here we have generated random data using NumPy’s random function. For creating a heatmap without labels, we have to mark the xticklabels and yticklabels parameters as False. In this example, we pass False in the yticklabels parameter for plotting a heatmap without labels on the y-axis.

data = np.random.randn(40, 25)
ax = sns.heatmap(data, xticklabels=2, yticklabels=False)

7th Example – Diagonal Heatmap with Masking in Seaborn This last example will show how we can mask the heatmap to suppress the duplicate part of the heatmap. First of all, we build a correlation coefficient matrix with the help of the NumPy random function. After this, we use the zeros_like function of NumPy for creating a mask. Then, we create a triangular mask with the help of triu_indices_from and pass True for building the same. At last, we use the subplots function for specifying the size of the plot. We pass our custom created mask to the mask parameter. Also, the square parameter is used for creating square cells.
import matplotlib.pyplot as plt
corr = np.corrcoef(np.random.randn(12, 150))
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
    f, ax = plt.subplots(figsize=(8, 6))
    ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True, cmap="coolwarm")

Conclusion This is the end of this seaborn tutorial. In it, we looked at the syntax of the seaborn heatmap function and different examples. We also learned about the parameters of the sns.heatmap() function that are used for various purposes while plotting heatmaps.
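The labelled-heatmap examples above depend on first reshaping long-form data into a matrix, as done with flights.pivot("month", "year", "passengers"). Here is a small pandas-only sketch of that reshaping step; the values in this tiny table are made up for illustration, only the column names come from the flights dataset used above.

```python
import pandas as pd

# A tiny long-form table shaped like the flights dataset
# (month, year, passengers); the numbers are illustrative only.
df = pd.DataFrame({
    "month": ["Jan", "Jan", "Feb", "Feb"],
    "year": [1949, 1950, 1949, 1950],
    "passengers": [112, 115, 118, 126],
})

# Same reshaping step as flights.pivot("month", "year", "passengers"):
# rows become months, columns become years, cells hold passenger counts.
matrix = df.pivot(index="month", columns="year", values="passengers")
print(matrix)
```

The resulting matrix is what sns.heatmap() expects as its data argument.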
https://machinelearningknowledge.ai/seaborn-heatmap-using-sns-heatmap-with-examples-for-beginners/
In this article I will explain how to send HTTP requests with the ESP8266 module. We will see how to send an HTTP GET and a POST request. I will also introduce a new library for processing JSON. Let's start. Setup wifi Include the following header. This library has all the required APIs for connecting to a WiFi network.

#include <ESP8266WiFi.h>

Define your network SSID and password as constants.

const char *SERVER_WIFI_SSID = "SSID";
const char *SERVER_WIFI_PASS = "mypassword";

Connect to WiFi using WiFi.begin. We need to keep checking the status to make sure that the device is connected to the network. Here is the method that I have created.

void setupWiFi() {
    Serial.print("Connecting to WiFi ");
    WiFi.begin(SERVER_WIFI_SSID, SERVER_WIFI_PASS);
    while (WiFi.status() != WL_CONNECTED) {
        delay(500);
        Serial.print(".");
    }
    Serial.println("Connected");
}

HTTP Server I have created a simple servlet application. It has just two end points. Details of the end points will be explained in the following sections. But you can use any other as you wish. Servlet code will also be attached with this post. HTTP Requests – GET ESP8266 for Arduino also comes with an HTTP client library, ESP8266HTTPClient. Add the following include in your code:

#include <ESP8266HTTPClient.h>

HTTP requests should be enclosed within http.begin and http.end calls. Here it goes:

HTTPClient http;
http.begin("");

The above URL will return a simple JSON message mentioned at the end of this section. An HTTP GET request does not have a payload. All the parameters will be specified in the URL itself. So, the GET request is as simple as

int httpCode = http.GET();

The return value of the GET function will be the HTTP response code. You can find details of response code at. But all we are interested in is response code 200. It is possible the httpCode will have the value -1. This can happen if the GET call itself fails, for example when the server is not reachable. So, you can handle that as well.
HTTP codes are defined as constants in this library. For response code 200, we have the constant HTTP_CODE_OK. To read the response message, we call http.getString(). This will return a string object.

{
    Serial.print("HTTP response code ");
    Serial.println(httpCode);
    String response = http.getString();
    Serial.println(response);
}

Finally the code for GET will be as follows.

http.begin("");
int httpCode = http.GET();
if (httpCode == HTTP_CODE_OK) {
    Serial.print("HTTP response code ");
    Serial.println(httpCode);
    String response = http.getString();
    Serial.println(response);
} else {
    Serial.println("Error in HTTP request");
}
http.end();

When this code is executed, we expect the following response. The response might change based on the end point you will be using.

HTTP response code 200
{"server":"Demo Servlet","version":"1.0"}

HTTP Requests – POST The only change we will make compared to the GET request is the use of a POST payload. This is where we use a library, aJSON. You can get this library from. The project provides detailed documentation on usage. For this example, we will be using an end point PingPong, hosted on the same server. This end point just returns the JSON message sent by the client in another container JSON object. Let us prepare the JSON object.

aJsonObject *root;
root = aJson.createObject();
aJson.addStringToObject(root, "message", "Hello from ESP8266");
char *payload = aJson.print(root);

Now it is time to send the message. Call the POST method with the payload and payload length.

httpCode = http.POST((uint8_t *)payload, strlen(payload));

We expect the response as

HTTP response code 200
{"server":"Demo Servlet","version":"1.0"}

Source code You can find the Arduino source files and server side code if you want to use the one used in this example. Demo files for HTTP Requests with ESP8266 Thank you for posting this! This is the first Arduino ESP8266 HTTP GET Request example I found. I'm trying to get my ESP to send a GET request to IFTTT.
So, I commented out all of the POST code and am currently left with the GET code. However, I only get "Error in HTTP request" after having tried to send a request to both IFTTT.com and google. What else do I need to do in order to simply send a GET request? I have not used this example to connect to IFTTT. But I am going to give it a try now and update the results here. I just tried connecting to IFTTT. It works fine. I used the Maker channel. All I did in the code was change the URL of the GET request and used the URL provided by the Maker channel page. I changed the event parameter to my own value, say 'temp'. I could get the proper response. Can you please provide the httpCode you're getting? You can print the value of httpCode from the GET request. Note the standard Serial baud rate is set to 1 so be sure to select that in your terminal; we want our debugging terminal as fast as possible so that we can keep up with the ESP8266 on the SoftwareSerial connection and avoid buffer overflows.
http://bitsofgyan.com/index.php/2016/05/01/http-requests-with-esp8266/
Modular Testing November 19, 2013 Testing prime-number functions is easy, if you take the time to do it, because the functions can be checked against each other. We begin by testing primes against isPrime:

def testPrime(n):
    ps = primes(n)
    for i in range(2, n):
        if i in ps:
            if not isPrime(i):
                print "prime not isPrime", i
        else:
            if isPrime(i):
                print "isPrime not prime", i

Running the test causes nothing to be reported, as expected, on the theory that “no news is good news:”

>>> testPrime(200)
>>>

For any x, the inverse of x (mod m) exists, and the product of x and its inverse is congruent to 1 (mod m), whenever x is coprime to m. That mathematical fact makes it easy to test the modular inverse function:

def testInverse(n):
    for m in range(2, n):
        for i in range(0, m):
            if gcd(i, m) == 1:
                j = inverse(i, m)
                if (i*j) % m != 1:
                    print "inverse fails", i, m

Again, no news is good news:

>>> testInverse(200)
>>>

The third test looks at the jacobi symbol and modular square root functions. If a number a is a quadratic residue to an odd prime modulus p, the jacobi symbol will be 1 and the modular square root x will satisfy the equation (x * x) % p == a:

def testQuadRes(n):
    for p in primes(n)[1:]:
        for a in range(1, p):
            if jacobi(a, p) == 1:
                x, y = modSqrt(a, p)
                if (x*x) % p != a:
                    print "failure", a, p, x
                if (y*y) % p != a:
                    print "failure", a, p, y

When we run the test with testQuadRes(100), a whole cascade of error messages appears. The errors occur when p is 17, 41, 73, 89 and 97; since all of those numbers are 1 (mod 8), it is obvious that the problem is with the default case of modSqrt. Indeed, the next-to-last line is in error — m should be divided by 2 — and fixing the error makes everything work properly:

    /2) * pow(a, (t+1)/2, p)) % p
    return x, p-x

>>> testQuadRes(100)
>>>

In my quick testing at the console, I had not tested any p ≡ 1 (mod 8), and thus missed the bug. It was hard to find but easy to fix, and would never have been an issue if I had tested properly in the first place.
By the way, such testing doesn’t prove that the function is bug-free, though it does make me feel better about my work. Challenge: can anybody spot any remaining bugs? You can run the program at.

def modSqrt(a, p):
    a = a % p
    if a == 0:
        return 0, p
    i = 1
    while i < p/2:
        x = i*i % p
        if x == a:
            return x, p-x
        i += 1

not sure if this is the error, but probably equally as fast as your code. Here are some tests for all functions. The modSqrt function fails. All others seem OK. A lot of random testing is used, and for the inverse functions (inverse and modSqrt) a simple check is done to see if the function does what it promises. The example cases for the jacobi symbol were gathered in various places on the web.
https://programmingpraxis.com/2013/11/19/modular-testing/2/
Performance Tuning Apache Spark with Z-Ordering and Data Skipping in Azure Databricks By: Ron L'Esteve | Updated: 2021-04-30 Problem When querying terabytes or petabytes of big data for analytics using Apache Spark, having optimized querying speeds is critical. There are a few available optimization commands within Databricks that can be used to speed up queries and make them more efficient. Seeing that Z-Ordering and Data Skipping are optimization features that are available within Databricks, how can we get started with testing and using them in Databricks Notebooks? Solution Z-Ordering is a method used by Apache Spark to combine related information in the same files. This is automatically used by the Delta Lake on Databricks data-skipping algorithms to dramatically reduce the amount of data that needs to be read. The OPTIMIZE command can achieve this compaction on its own without Z-Ordering, however Z-Ordering allows us to specify the column to compact and optimize on, which will impact querying speeds if the specified column is in a Where clause and has high cardinality. Additionally, data skipping is an automatic feature of the optimize command and works well when combined with Z-Ordering. In this article, we will explore a few practical examples of optimizations with Z-Ordering and Data Skipping which will help with understanding the performance improvements along with how to explore these changes in the delta_logs and Spark UI. Z-Ordering We can begin the process by loading the airlines databricks-dataset into a data frame using the following script. Note that databricks-datasets are available for use within Databricks.

flights = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("/databricks-datasets/asa/airlines/2008.csv")
)

Once the data is loaded into the flights data frame, we can run a display command to quickly visualize the structure of the data.
display(flights)

Next, the following script will create a mount point to an Azure Data Lake Storage Gen2 account where the data will be persisted. We'll need to ensure that the access key is replaced in the script below.

spark.conf.set(
    "fs.azure.account.key.rl001adls2.dfs.core.windows.net",
    "ENTER-ACCESS_KEY_HERE"
)

The next code block will write the flights data frame to the data lake folder in delta format and partition by the Origin column.

(
    flights
    .write
    .partitionBy("Origin")
    .format("delta")
    .mode("overwrite")
    .save("abfss://[email protected]/raw/delta/flights_delta")
)

Once the command completes running, we can see from the image below that the flight data has been partitioned and persisted in ADLS gen2. There are over 300 folders partitioned by 'Origin'. Upon navigating to the delta_log, we can see the initial log files, primarily represented by the *.json files. After downloading the initial delta_log json file and opening it with Visual Studio Code, we can see that over 2310 new files were added to the 300+ folders. Now that we have some data persisted in ADLS2, we can create a Hive table using the delta location with the following script.

spark.sql("CREATE TABLE flights USING DELTA LOCATION 'abfss://[email protected]/raw/delta/flights_delta/'")

After creating the Hive table, we can run the following SQL count script to 1) ensure that the Hive table has been created as desired, and 2) verify the total count of the dataset. As we can see, this is a fairly big dataset with over 7 million records.

%sql
SELECT Count(*) from flights

Next, we can run a more complex query that will apply a filter to the flights table on a non-partitioned column, DayofMonth. From the results display in the image below, we can see that the query took over 2 minutes to complete. This time allows us to set the initial benchmark for the time to compare after we run the Z-Order command.
%sql
SELECT count(*) as Flights, Dest, Month
from flights
WHERE DayofMonth = 5
GROUP BY Month, Dest

Next, we can run the following OPTIMIZE command combined with ZORDER on the column that we want to filter on, DayofMonth. Note that Z-Order optimizations work best on columns that have high cardinality. Based on the results, 2308 files were removed and 300 files were added as part of the Z-ORDER OPTIMIZE process.

%sql
OPTIMIZE flights ZORDER BY (DayofMonth)

Within the delta_log, there is now a new json file that we can download and open to review the results. As expected, we can see the actions performed in these logs from the removal and addition lines in the json file. As further confirmation, upon navigating into one of the partition folders, we can see that a new file has been created. While the Z-Order command can customize the compaction size, it typically targets around 1GB per file when possible.

Now when the same query is run again, it takes only approximately 39 seconds to complete, which is around a 70% improvement in query speed.

Data Skipping

The previous demonstration described how to improve query performance by applying the Z-Order command to a column used in the WHERE clause of a query. In this next sample, we will take a deeper look at the concept of Data Skipping. Data skipping does not need to be configured: statistics are collected and applied automatically when we write data into a Delta table. Delta Lake on Databricks takes advantage of these minimum and maximum range values at query time to speed up queries. Data skipping is most effective when combined with Z-Ordering. Let's explore a demo that is specific to Data Skipping, using the NYC Taxi databricks-dataset. First, the following code will infer the schema and load a data frame with the 2019 yellow trip data.
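The loading snippet itself appears to have been lost in extraction. The following is a hypothetical reconstruction, not the author's code: the function name and the databricks-datasets path are my assumptions (check the actual dataset layout in your workspace). It is written as a function so the Spark session is passed in explicitly.

```python
# Hypothetical reconstruction of the lost loading step. The default path is an
# assumption about the databricks-datasets layout, not taken from the article.
def load_nyctaxi(spark, path="/databricks-datasets/nyctaxi/tripdata/yellow/"):
    """Read the yellow-trip CSV files into a Spark DataFrame."""
    return (
        spark.read.format("csv")
        .option("header", "true")
        .option("inferSchema", "true")
        .load(path)
    )

# In a Databricks notebook: nyctaxiDF = load_nyctaxi(spark)
```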
This next code block will add a few partition fields to the existing data frame for Year, Year_Month, and Year_Month_Day. These additional columns are derived from the datetime stamp and will help with partitioning, data skipping and Z-Ordering, and ultimately with more performant queries.

from pyspark.sql.functions import *
nyctaxiDF = nyctaxiDF.withColumn('Year', year(col("tpep_pickup_datetime")))
nyctaxiDF = nyctaxiDF.withColumn('Year_Month', date_format(col("tpep_pickup_datetime"),"yyyyMM"))
nyctaxiDF = nyctaxiDF.withColumn('Year_Month_Day', date_format(col("tpep_pickup_datetime"),"yyyyMMdd"))

After running a quick display of the data frame, we can see that the new columns have been added.

display(nyctaxiDF)

This next block of code will persist the data frame to disk in delta format, partitioned by Year.

(
    nyctaxiDF
    .write
    .partitionBy("Year")
    .format("delta")
    .mode("overwrite")
    .save("abfss://<container>@rl001adls2.dfs.core.windows.net/raw/delta/nyctaxi_delta")
)

As we can see from the delta_log json file, there were ~400 new files added. Next, we can create a Hive table using the ADLS Gen2 delta path.

spark.sql("CREATE TABLE nyctaxi USING DELTA LOCATION 'abfss://<container>@rl001adls2.dfs.core.windows.net/raw/delta/nyctaxi_delta/'")

After running a query on the newly created Hive table, we can see that there are ~84 million records in this table.

%sql
SELECT count(*) FROM nyctaxi

Now it's time to apply Z-Ordering to the table on the Year_Month_Day column by running the following SQL code.

%sql
OPTIMIZE nyctaxi ZORDER BY (Year_Month_Day)

As we can see from the image below, the ~400 files were removed and only 10 new files were added. Also, keep in mind that this is a logical removal and addition; for physical removal of files, the VACUUM command needs to be run. To confirm that only 10 files are read by a query, let's run the following query and then check the query plan.
%sql
SELECT Year_Month_Day, count(*)
from nyctaxi
GROUP BY Year_Month_Day

Since there are no WHERE conditions applied to this query, the SQL query plan indicates that 10 files were read, as expected. Now we can add a WHERE clause on the column that was Z-Order optimized.

%sql
SELECT Year_Month_Day, count(*)
from nyctaxi
WHERE Year_Month_Day = '20191219'
GROUP BY Year_Month_Day

Based on the SQL query plan, we can now see that only 5 files were read, which confirms that Data Skipping was applied at runtime.

Summary

It is important to note that Z-Ordering can be applied to multiple columns; however, take caution with this approach since there is a cost to adding too many Z-Order columns. There is also an AUTO OPTIMIZE feature that can be applied; however, AUTO OPTIMIZE will not apply Z-Ordering, which must be done manually. Given the significant amount of time it takes to run the Z-Order command the first time, it is recommended to run Z-Ordering sparingly and as part of a maintenance strategy (e.g., weekly). It is also important to note that OPTIMIZE and Z-Ordering can be run during normal business hours and do not need to run only as an offline task. After the initial run, Z-Ordering can be applied incrementally to partitions and queries, which takes much less time and is a good practice as an ongoing maintenance effort.

Next Steps

- Read more about Processing Petabytes of Data in Seconds with Databricks Delta.
- Read more about how to Optimize performance with file management.
- Take a look at more Optimization Samples from Databricks.
- Explore the feature and use cases for Auto Optimize.
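As a closing aside: the walkthrough above verified each OPTIMIZE run by downloading a delta_log JSON file and eyeballing the add/remove entries. A short script can do the counting instead. This is a sketch that assumes only the standard Delta transaction-log layout (one JSON action per line, with top-level "add" and "remove" keys); the sample below is inlined, made-up data rather than a real log file.

```python
import json

def count_file_actions(log_text):
    """Count 'add' and 'remove' file actions in a Delta transaction-log file."""
    adds = removes = 0
    for line in log_text.splitlines():
        if not line.strip():
            continue
        action = json.loads(line)
        if "add" in action:
            adds += 1
        elif "remove" in action:
            removes += 1
    return adds, removes

# Made-up sample mimicking the log structure (paths are hypothetical):
sample = "\n".join([
    json.dumps({"commitInfo": {"operation": "OPTIMIZE"}}),
    json.dumps({"remove": {"path": "part-00001.parquet"}}),
    json.dumps({"remove": {"path": "part-00002.parquet"}}),
    json.dumps({"add": {"path": "part-00003.parquet"}}),
])
print(count_file_actions(sample))  # (1, 2)
```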
Dear Andrew,

I went through moltemplate to create an SPC/E water model data file by loading the PDB file which I had created in VMD already. I got the data file for the SPC/E water model. But here I am unable to get the files related to the SPC/E water box so that I can see that water box in VMD.

If you have already installed moltemplate, then all you need to do is:

1) Create a file "system.lt" with these contents:

---- "system.lt" file: ----
import "spce.lt"
wat = new SPCE [260]
---- end of "system.lt" file ----

(In this example, "260" is the number of water molecules in the PDB file you created with VMD. Change it for your PDB.)

2) Then run moltemplate this way:

moltemplate.sh -pdb YOURFILE.pdb system.lt

That's all there is to it. This works because several of the most popular force fields (like "oplsaa.lt" and "gaff.lt") and water models ("spce.lt" and "tip3p_2004.lt") are included in a subdirectory that moltemplate.sh looks in whenever it can't find a file you requested.

If you get the chance, you really should take a look at the contents of the "spce.lt" file (in the "force_fields" subdirectory). It includes the settings that Steve mentioned. For one thing, you may need to modify the "spce.lt" file if later you want to mix SPC/E water with molecules that use a slightly different (but equivalent) pair_style. (For example, the OPLSAA force field uses pair_style lj/cut/coul/long instead of lj/charmm/coul/long, currently used by default in spce.lt. The "force_field_OPLSAA/waterSPCE+methane/moltemplate_files/" directory has a customized version of "spce.lt" for use with OPLSAA.)

Unfortunately, you would not even know these files exist or where to find them if you installed moltemplate the easy way using:

pip install moltemplate

...all of the examples and documentation will be omitted or hidden from view. I'm not sure how to handle this issue. If I start getting a lot more questions like this, I'll remove the pip/pypi installation option.

Cheers
Andrew
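A side note on the pip issue above: you can at least locate the directory pip installed the package into, since the bundled force-field files live under it. This is a sketch (the helper name is mine); it is demonstrated with a standard-library package so it runs anywhere, but the same call with "moltemplate" prints the install directory to search.

```python
import importlib
import os

def package_dir(name):
    """Return the directory an importable package was installed into."""
    module = importlib.import_module(name)
    return os.path.dirname(module.__file__)

# With moltemplate installed, you would call: package_dir("moltemplate")
# Demonstrated here with a package that is always available:
print(package_dir("email"))
```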
Decision Making: Equality and Relational Operators

A condition is an expression that can be either true or false. This section introduces a simple version of C#'s if statement that allows an app to make a decision based on the value of a condition. Figure 3.17 shows the precedence and associativity of the equality and relational operators. The two equality operators (== and !=) each have the same level of precedence, the relational operators (>, <, >= and <=) each have the same level of precedence, and the equality operators have lower precedence than the relational operators. They all associate from left to right.

Using the if Statement

Figure 3.18 uses six if statements to compare two integers entered by the user. If the condition in any of these if statements is true, the output statement associated with that if statement executes. The app uses class Console to prompt for and read two lines of text from the user, extracts the integers from that text with the ToInt32 method of class Convert, and stores them in variables number1 and number2. Then the app compares the numbers and displays the results of the comparisons that are true.

Fig. 3.18 | Comparing integers using if statements, equality operators and relational operators.

 1   // Fig. 3.18: Comparison.cs
 2   // Comparing integers using if statements, equality operators
 3   // and relational operators.
 4   using System;
 5
 6   public class Comparison
 7   {
 8      // Main method begins execution of C# app
 9      public static void Main( string[] args )
10      {
11         int number1; // declare first number to compare
12         int number2; // declare second number to compare
13
14         // prompt user and read first number
15         Console.Write( "Enter first integer: " );
16         number1 = Convert.ToInt32( Console.ReadLine() );
17
18         // prompt user and read second number
19         Console.Write( "Enter second integer: " );
20         number2 = Convert.ToInt32( Console.ReadLine() );
21
22         if ( number1 == number2 )
23            Console.WriteLine( "{0} == {1}", number1, number2 );
24
25         if ( number1 != number2 )
26            Console.WriteLine( "{0} != {1}", number1, number2 );
27
28         if ( number1 < number2 )
29            Console.WriteLine( "{0} < {1}", number1, number2 );
30
31         if ( number1 > number2 )
32            Console.WriteLine( "{0} > {1}", number1, number2 );
33
34         if ( number1 <= number2 )
35            Console.WriteLine( "{0} <= {1}", number1, number2 );
36
37         if ( number1 >= number2 )
38            Console.WriteLine( "{0} >= {1}", number1, number2 );
39      } // end Main
40   } // end class Comparison

Class Comparison

The declaration of class Comparison begins at line 6:

public class Comparison

The class's Main method (lines 9–39) begins the execution of the app.

Variable Declarations

Lines 11–12

int number1; // declare first number to compare
int number2; // declare second number to compare

declare the int variables used to store the values entered by the user.

Reading the Inputs from the User

Lines 14–20 prompt the user for the two integers, read each line of input with Console.ReadLine and convert it to an int with Convert.ToInt32.

Comparing Numbers

Lines 22–23

if ( number1 == number2 )
   Console.WriteLine( "{0} == {1}", number1, number2 );

compare the values of the variables number1 and number2 to determine whether they're equal. An if statement always begins with keyword if, followed by a condition in parentheses. An if statement expects one statement in its body.

No Semicolon at the End of the First Line of an if Statement

A semicolon placed at the end of the first line of an if statement—called the empty statement—is the statement to execute if the condition in the if statement is true.
When the empty statement executes, no task is performed. The app then continues with the output statement, which always executes, regardless of whether the condition is true or false, because the output statement is not part of the if statement.

Whitespace

Note the use of whitespace in Fig. 3.18. Recall that whitespace characters, such as tabs, newlines and spaces, are normally ignored by the compiler. So statements may be split over several lines and may be spaced according to your preferences without affecting the meaning of an app. It's incorrect, however, to split identifiers, strings and multicharacter operators (like >=). Ideally, statements should be kept small, but this is not always possible.

Precedence and Associativity of the Operators We've Discussed So Far

Figure 3.19 shows the precedence and associativity of the operators discussed so far.
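The empty-statement pitfall described above is easy to demonstrate. Since this excerpt's C# code cannot be run here, the sketch below uses Java, whose if-statement and empty-statement semantics match C#'s on this point; the class and method names are mine.

```java
public class EmptyStatementDemo
{
    // Returns the text the flawed code produces. Because of the stray
    // semicolon, the if's body is the empty statement, so the line
    // below it runs no matter how the comparison turns out.
    static String flawedCompare( int number1, int number2 )
    {
        String result = "";
        if ( number1 == number2 ); // logic error: empty statement
            result = number1 + " == " + number2; // always executes
        return result;
    }

    public static void main( String[] args )
    {
        // Prints "1 == 2" even though 1 != 2
        System.out.println( flawedCompare( 1, 2 ) );
    }
}
```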
Non-static ThreadLocal variables
radiatejava  Oct 22, 2012 5:54 AM

I want to know what the actual problem is in having non-static ThreadLocal variables? I am looking for some insights. The Javadoc says that ThreadLocal variables are typically private and static, but does not mention that they must be static and does not provide the reason behind the recommendation for being static. Can anyone please clarify?

1. Re: Non-static ThreadLocal variables
Kayaman  Oct 22, 2012 6:20 AM (in response to radiatejava)

Since ThreadLocal variables already guarantee that each thread will have their own copy, adding more instances will only complicate things. You can make it non-static, but there is no reason for it and it would imply a very bad and error-prone design.

2. Re: Non-static ThreadLocal variables
radiatejava  Oct 22, 2012 6:26 AM (in response to Kayaman)

What if I have code like this, i.e. a singleton object holding a reference to a non-static ThreadLocal variable?
public class ThreadLocalTest {
    private ThreadLocal<Integer> tlocal;
    private static final ThreadLocalTest tester = new ThreadLocalTest();

    private ThreadLocalTest() {
        this.tlocal = new ThreadLocal<Integer>();
        log("Created ThreadLocalTest instance");
    }

    public static ThreadLocalTest getInstance() {
        return tester;
    }

    public void setValue(int i) {
        this.tlocal.set(new Integer(i));
        log("Set value = " + i);
    }

    public int getValue() {
        int i = this.tlocal.get();
        log("Got value = " + i);
        return i;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 2; i++) {
            Thread t = new Thread(new Runner());
            t.setName("Thread-" + i);
            t.start();
        }
    }

    public static void log(String msg) {
        String thread = Thread.currentThread().getName();
        String date = new java.util.Date().toString();
        String prefix = "[" + thread + "] " + date + ": ";
        System.out.println(prefix + msg);
    }

    static class Runner implements Runnable {
        public void run() {
            for (int i = 0; i < 100; i++) {
                ThreadLocalTest tester = ThreadLocalTest.getInstance();
                tester.setValue(i);
                tester.getValue();
            }
        }
    }
}

Edited by: radiatejava on Oct 22, 2012 4:56 PM

3. Re: Non-static ThreadLocal variables
EJP  Oct 22, 2012 6:28 AM (in response to radiatejava)

> What if I have code like this, i.e. a singleton object holding a reference to a non-static ThreadLocal variable?

What if you do? Why would you do it? You already get one instance per thread. Do you need any more? Do you need it to be thread-local at all? You're mixing up two different kinds of scoping. Why?

4. Re: Non-static ThreadLocal variables
radiatejava  Oct 22, 2012 6:44 AM (in response to EJP)

My requirement is to have a singleton class/object and the same class having ThreadLocal variables. I need the singleton class for known reasons, and at the same time I also need a ThreadLocal variable in that class. How do I achieve that?

5. Re: Non-static ThreadLocal variables
DrClap  Oct 22, 2012 11:33 AM (in response to radiatejava)

You make the singleton in whatever way you like.
You make the ThreadLocal variable static. Why is that a problem?
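A sketch of the combination the replies recommend (class and method names are mine; assumes Java 8+ for ThreadLocal.withInitial): the singleton stays a singleton, the ThreadLocal stays static, and each thread still sees its own value through the single instance.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SingletonThreadLocal {
    private static final SingletonThreadLocal INSTANCE = new SingletonThreadLocal();
    // One static ThreadLocal is enough: every thread that calls setValue/getValue
    // through the singleton still works on its own per-thread copy.
    private static final ThreadLocal<Integer> VALUE = ThreadLocal.withInitial(() -> 0);

    private SingletonThreadLocal() {}

    public static SingletonThreadLocal getInstance() { return INSTANCE; }

    public void setValue(int v) { VALUE.set(v); }
    public int getValue() { return VALUE.get(); }

    public static void main(String[] args) throws Exception {
        SingletonThreadLocal s = getInstance();
        s.setValue(1); // main thread's copy
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> other = pool.submit(() -> {
            getInstance().setValue(2); // worker thread's copy
            return getInstance().getValue();
        });
        // Main thread still sees 1; the worker saw 2: the copies are independent.
        System.out.println(s.getValue() + " " + other.get());
        pool.shutdown();
    }
}
```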
TL;DR Developers have a lot of options when deciding to build mobile apps. Xamarin allows developers to build cross-platform apps using C#. Learn how to build a cross-platform Android and iOS application utilizing a single codebase and add user authentication with Auth0.

The importance of having a great mobile app cannot be overstated. The good news is that developers have a lot of options for how to build mobile apps. Native, hybrid, responsive web - and the frameworks that come with them - give developers a lot of flexibility, and they all have pros and cons. Native development will likely grant the best performance, but you'll have to maintain each platform separately. Hybrid development will allow you to target multiple platforms with one codebase, but you won't get the same performance. Responsive web will allow you to convert your web app to a mobile-friendly version quickly, but you won't be able to access the native device features and APIs, nor the app stores.

While learning new technologies is great, many developers like to stick to the rule of thumb which states that you should build with the technology you are most familiar with and use the right tool for the job. Building a native iOS app for the "performance" benefits when you have zero Objective-C knowledge will likely yield a negative result. Xamarin is a framework for building cross-platform applications in C#, so if you've ever worked with the Microsoft technology stack, you should feel right at home. In today's post, we'll build a cross-platform iOS and Android app with Xamarin.

Why Xamarin?

Xamarin allows you to build native iOS, Android, Windows and Mac applications in C#. The company was founded by the creators of Mono and was recently acquired by Microsoft. Over 1 million developers are using Xamarin to build apps like CineMark, MixRadio and Bastion. Xamarin is a great fit for companies who are already embedded in the Microsoft stack and have developers with extensive C# knowledge.
The Xamarin Platform runs on both Windows and Mac, so you truly are not limited to the Windows ecosystem but have the full strength of C# at your disposal.

"Xamarin allows you to build native iOS, Android, Windows and Mac applications in C#."

C# is an excellent programming language. Statically typed, object-oriented, garbage collected and asynchronous are just some of its benefits. With Xamarin, your entire app will be written in C# and then compiled to its native binary. The Xamarin Platform exposes platform-specific APIs when needed, interfaced again through C#, but generally you will be able to write the majority of your code once and have it run everywhere.

Getting Started with Xamarin

Let's dive in and set Xamarin up. The first thing we'll need is the actual Xamarin Platform installed on our machine. Head over to the Xamarin website and download it. Once downloaded, the installer will ask you which platforms you are planning on developing for and will download additional dependencies automatically. If you are developing on a Windows machine, you can alternatively use Visual Studio, but as this is a tutorial introducing Xamarin, we'll stick to Xamarin Studio.

Setting Up the Solution

With Xamarin Studio installed, we are ready to create our first project. Open up Xamarin Studio and click the New Solution button in the top left corner to get started. The first screen we're presented with will ask us what type of application we're building. We'll select Cross-Platform App, as we're building an application that we want available on iOS and Android. We'll select a Xamarin.Forms app, and on the next screen we'll name our app - CloudCakes - select our target platforms (Android and iOS) and select Use Shared Library in the sharing section. The Shared Code selection does not matter much for us in this demo, but to learn about the differences and when you might use one over the other, check out the Xamarin docs.
Finally, we'll select a directory for our project and we'll be ready to go.

Project Structure

With a new solution created, let's explore how Xamarin set up the project for us. There are three root-level directories: CloudCakes, CloudCakes.Droid and CloudCakes.iOS - or, if you named your app something different, they'll follow the pattern {AppName}, {AppName}.Droid, {AppName}.iOS. We will spend the majority of our time in the main directory, as any code written here will be shared between the Android and iOS versions of the app. The more code we can write in the main directory the better. The platform-specific directories are for writing code that will only execute on the target platform, such as NFC on Android or different implementations of touch and gestures between platforms.

Xamarin has already taken care of the code required to launch our apps on both Android and iOS, so we should be able to just start writing code in the shared directory, build our solution and see the results. Before we start, if you open the AppDelegate.cs file on the iOS side and the MainActivity.cs on the Droid side, you'll be able to see the platform-specific code implemented to launch the app. If you've developed native iOS or Android apps in the past, these file names should be familiar, as they match the app instantiation files for the native platforms.

Hello World in Xamarin

Before we write our app, let's quickly open up the CloudCakes.cs or {AppName}.cs file in the shared code directory. This is the entry point, or main function, into our application. Xamarin added some boilerplate code when it created the solution that just displays a message once the app is opened. To make sure this runs, let's build and run the solution. To do this, simply click the play icon in the top left corner of the screen, and the iOS Simulator should launch with our Xamarin app and display the message "Welcome to Xamarin Forms!" Now let's build our app.
Since we are developing on a Mac, we have defaulted to using the iOS simulator for the majority of our examples. If you are on a Windows machine, you can use the Android or Windows Mobile emulators and should have the same experience.

CloudCakes Goes Mobile

The app we'll be building today is called CloudCakes. If you've been keeping up with the Auth0 blog, you may remember CloudCakes from our post a few weeks ago where we migrated users from Parse to Auth0. CloudCakes is an app that allows you to get cake on demand. Up until now, you'd have to use your computer or the responsive web version of the CloudCakes website to place orders. Due to overwhelming requests, you've decided that a native mobile application is needed. Let's see how we can accomplish this with Xamarin.

CloudCakes is a fairly simple app. We need a way to authenticate a user, allow users to view cakes and place orders, an about page for newcomers and a signup page to convert new users into "Cake Enthusiasts." Let's start with the About page, as it is the simplest and will give us a chance to familiarize ourselves with how Xamarin works. The About Us page will give some general information about our app. Let's start by creating a new file in the shared directory. Right click on the main shared directory and navigate to the Add menu; in the submenu, select New File. A new dialog box will appear giving you options for the type of file you'd like to create. As we are building a Xamarin.Forms app, we'll stay in the Forms section, and here we'll have 4 options for the type of files we'd like to create. We can create a Forms ContentPage, Forms ContentPage Xaml, Forms ContentView or Forms ContentView Xaml. A ContentPage defines the entire view of the screen. A ContentView, on the other hand, can be viewed as a section of the page. You can use multiple ContentViews to build out a ContentPage. Additionally, you have two choices in how you want to build the UI for your application.
The first option is to build your UI with C#, and the second option is to use XAML to build out the UI and hook up functionality via an underlying C# file. You can build the same UIs with either option, but we've chosen to go with building all of our UIs in C#, as there is a lot of content to cover and adding XAML on top of it would make for a very long tutorial. We'll select the Forms ContentPage template and name it AboutPage. Xamarin will now create the template and add some boilerplate code we can work with. Let's see what Xamarin generated.

```csharp
using System;
using Xamarin.Forms;

namespace CloudCakes
{
    public class AboutPage : ContentPage
    {
        public AboutPage ()
        {
            Content = new StackLayout {
                Children = {
                    new Label { Text = "Hello ContentPage" }
                }
            };
        }
    }
}
```

This is fairly standard C# code. The page was created in the CloudCakes namespace, and a new class called AboutPage was created that inherits from ContentPage, which will allow us to build out our UI. Finally, the constructor for this class, AboutPage(), was created. The Content variable is how we build the UI using Xamarin.Forms and C#. Here we declare the type of layout we'd like, global page parameters such as padding or line spacing, and then we include any components that we want the user to see. In the boilerplate example, we simply display a text label that says "Hello ContentPage." Let's write the code for the About Us page and examine it closer.

```csharp
using System;
using System.Collections.Generic;
using Xamarin.Forms;

namespace CloudCakes
{
    public class AboutPage : ContentPage
    {
        public AboutPage ()
        {
            var title = new Label {
                Text = "About Us",
                FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)),
                HorizontalOptions = LayoutOptions.CenterAndExpand,
            };
            var description = new Label {
                Text = "CloudCakes aims to revolutionize the cake delivery business " +
                       "by delivering on-demand cakes and other treats with the click " +
                       "of a button. No longer will you need to go to the store or even " +
                       "remember your favorite type of cake - just signup for an account " +
                       "and you're good to go!"
            };
            var blogTitle = new Label {
                Text = "In The News",
                HorizontalOptions = LayoutOptions.CenterAndExpand
            };
            List<string> articles = new List<string> {
                "CloudCakes raises $50m and leads the Cake-as-a-Service models",
                "Top 10 Cities With the Best Cakes",
                "CloudCakes CEO awarded for Food Innovation Award 2016"
            };
            ListView articlesView = new ListView {
                ItemsSource = articles
            };
            Content = new StackLayout {
                Padding = 30,
                Spacing = 10,
                Children = { title, description, blogTitle, articlesView }
            };
        }
    }
}
```

We've added a lot of code here. The first thing we did was add a label, which is just a text area, for the title. Labels can have many different properties such as font size, color and position; in the title label we've decided to increase the font size and center the text horizontally. The description label does not have any special characteristics, so we just inserted the text. Next, we created a third label for a subheading and then created a list of news articles. We first created a list of articles and then created a Xamarin.Forms ListView component and populated it with the list of articles. Finally, to display all this to the user we created a StackLayout, which simply stacks components one on top of the other in the order provided, and populated it with the four components we created. To provide some better styling we added some padding and vertical spacing between the components.

To see our About Us page in action, let's go to the CloudCakes.cs file in the shared code directory and have the About Us page load on app startup. To accomplish this, we can remove the boilerplate code that displays the "Welcome to Xamarin Forms!" message and instead instantiate the AboutPage:

```csharp
public App ()
{
    MainPage = new AboutPage();
}
```

Launch the iOS simulator and you should see the newly created About Us page.

Orders Page

Next, let's build the UI for the Orders Page.
On this page, users will place orders for different types of cakes. For the UI here, we'll want to display the user information and a list of cakes a user can order. The user will be able to click on a cake they want, and we'll display a message saying that the cake has been ordered. Let's take a look at the code. We'll add comments and explanations inline with the code from here on out.

```csharp
using System;
using System.Collections.Generic;
using Xamarin.Forms;

namespace CloudCakes
{
    public class OrdersPage : ContentPage
    {
        public OrdersPage ()
        {
            // We've created a new class for holding user data.
            User user = new User {
                Email = "[email protected]",
                Avatar = ""
            };

            // We've also created a new class to hold the cake information.
            List<Cake> cakes = new List<Cake> {
                new Cake ("Chocolate Overload", 15.99),
                new Cake ("Red Velvet Cake", 19.99),
                new Cake ("Strawberry Fusion", 14.99)
            };

            // Instead of using the StackLayout like in the about page
            // we've opted to use a GridLayout to have more flexibility
            Grid grid = new Grid {
                Padding = 30
            };

            // The GridLayout adds elements slightly differently.
            // The first parameter is the element we are adding -
            // in this case we create an anonymous Image.
            // Additionally, we specify where we want the image to
            // be displayed in the grid. Here we've chosen 0, 0 which
            // means the image will be placed in the top left corner
            grid.Children.Add(new Image {
                Aspect = Aspect.AspectFit,
                WidthRequest = 60,
                HeightRequest = 60,
                Source = ImageSource.FromUri(new Uri(user.Avatar)),
            }, 0, 0);

            grid.Children.Add(new Label {
                Text = user.Email,
                HorizontalOptions = LayoutOptions.Center,
                VerticalOptions = LayoutOptions.Center
            }, 1, 0);

            // We loop through the available cakes and create a
            // button for each.
            for (var i = 0; i < cakes.Count; i++) {
                Button button = new Button {
                    Text = cakes [i].Name + " ($" + cakes [i].Price + ")",
                    VerticalOptions = LayoutOptions.Center,
                };

                // We implement a general click event that displays a popup
                // saying that an order was placed.
                // Note: In a real world scenario you would likely use a closure
                // if you were implementing this functionality - but for
                // simplicity's sake we just kept it simple.
                button.Clicked += (object sender, EventArgs e) => {
                    DisplayAlert ("Order Placed",
                        "Your cake order has been placed and will arrive in 15 minutes",
                        "OK");
                };

                // We add the buttons to the grid. In this case
                // we pass additional parameters to have greater
                // control over how the content is displayed
                grid.Children.Add(button, 0, 2, i+1, i+3);
            }

            // Finally we pass our grid into the Content variable
            Content = grid;
        }
    }

    // This is where we create our two custom classes for the User and Cake objects
    public class User {
        public string Email { get; set; }
        public string Avatar { get; set; }
    }

    public class Cake {
        public string Name { get; set; }
        public double Price { get; set; }

        public Cake(string name, double price){
            this.Name = name;
            this.Price = price;
        }
    }
}
```

To see the newly created Orders page, let's go back into the CloudCakes.cs file and, instead of loading up the About Us page, load up the Orders page. Simply replace the page assignment with MainPage = new OrdersPage(); and launch the simulator. If all went well you should see the orders page. Note: We haven't implemented user authentication yet, so we've added some static content for the user for now.

The Signup Page will allow users to create new accounts. We will omit any email validation and, to make signups as easy as possible, just require an email and password. Let's build the UI.
```csharp
using System;
using Xamarin.Forms;

namespace CloudCakes
{
    public class SignupPage : ContentPage
    {
        public SignupPage ()
        {
            // A new element we're creating here - the Entry element.
            // Entry allows us to capture user input.
            // We are adding a Placeholder attribute to tell the user
            // which data we want them to enter
            var email = new Entry {
                Placeholder = "Email"
            };

            // Similar to the email entry, we capture the
            // user's password here. To hide the password from being
            // displayed we set the `IsPassword` attribute to true
            var password = new Entry {
                Placeholder = "Password",
                IsPassword = true
            };

            var signupButton = new Button {
                Text = "Sign Up"
            };

            signupButton.Clicked += (object sender, EventArgs e) => {
            };

            Content = new StackLayout {
                Padding = 30,
                Spacing = 10,
                Children = {
                    new Label {
                        Text = "Signup for a CloudCakes Account",
                        FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)),
                        HorizontalOptions = LayoutOptions.Center
                    },
                    email,
                    password,
                    signupButton
                }
            };
        }
    }
}
```

Now that we've implemented the UI, let's update the CloudCakes.cs file and set the MainPage to our newly created signup page to make sure that it works and looks how we expect it to. We haven't added the implementation for the signup, so clicking on the signup button will not do anything yet. Let's run the simulator and see how our signup page will look.

The homepage of our application will also act as the login page. As users cannot place orders without first logging in, it wouldn't make sense to give them access to anything else before they are authenticated. In this section we will create the UI for the login page - we'll add the functionality later on in this tutorial. Let's look at the code that is going to make up our homepage/login page.
```csharp
using System;
using Xamarin.Forms;

namespace CloudCakes
{
    public class HomePage : ContentPage
    {
        public HomePage ()
        {
            var title = new Label {
                Text = "Welcome to CloudCakes",
                FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)),
                HorizontalOptions = LayoutOptions.CenterAndExpand,
            };

            var aboutButton = new Button {
                Text = "About Us"
            };

            var signupButton = new Button {
                Text = "Signup"
            };

            // Here we are implementing a click event using lambda expressions.
            // When a user clicks the `aboutButton` they will navigate to the
            // About Us page.
            aboutButton.Clicked += (object sender, EventArgs e) => {
                Navigation.PushAsync(new AboutPage());
            };

            // Navigation to the Signup Page
            signupButton.Clicked += (object sender, EventArgs e) => {
                Navigation.PushAsync(new SignupPage());
            };

            var email = new Entry {
                Placeholder = "E-Mail",
            };

            var password = new Entry {
                Placeholder = "Password",
                IsPassword = true
            };

            var login = new Button {
                Text = "Login"
            };

            // With the `PushModalAsync` method we navigate the user
            // to the orders page and do not give them an option to
            // navigate back to the Homepage by clicking the back button
            login.Clicked += (sender, e) => {
                Navigation.PushModalAsync(new OrdersPage());
            };

            Content = new StackLayout {
                Padding = 30,
                Spacing = 10,
                Children = { title, email, password, login, signupButton, aboutButton }
            };
        }
    }
}
```

We're getting there! Let's go back to the CloudCakes.cs file and set the MainPage to our newly created homepage. For the navigation to work, we will additionally need to wrap the ContentPage with a NavigationPage class, which will allow us to perform native navigation. The code should look like MainPage = new NavigationPage(new HomePage());. If all went according to plan, you should be able to launch the simulator and navigate to the About Us page, Signup page and Orders page. We purposefully built our UI backwards so that we could introduce the different Xamarin.Forms elements in smaller chunks.
Aside: Xamarin Authentication with Auth0

With the UI and navigation in place, we are ready to add the login and signup functionality to our application. We will use Auth0's RESTful Authentication API to authenticate and create new users. Additionally, we will use the RestSharp and Newtonsoft JSON libraries to interface with the API more easily. As we won't be using a datastore for this simple demo, we will store the user credentials in the Application.Current namespace so that we can access the user data throughout our application.

You are not required to use Auth0 for Xamarin authentication. We've used the Auth0 REST API for brevity, but if you are interested in building your own, check out this great example on how you could implement JWT authentication with NodeJS.

First we will implement the login functionality. When a user enters their email and password, we will send this data to the Auth0 oauth/ro API endpoint, which will validate the credentials and, if successful, return an id_token and access_token which we will use to get the user data. If we do not get an access_token back, we will let the user know that they have entered invalid credentials.

```csharp
using System;
using Xamarin.Forms;
using RestSharp;
using Newtonsoft.Json;

namespace CloudCakes
{
    public class HomePage : ContentPage
    {
        public HomePage ()
        {
            // We have omitted code we already went over

            var login = new Button {
                Text = "Login"
            };

            login.Clicked += (sender, e) => {
                // We implemented a login function that accepts
                // two strings, the first being the user's email
                // and the second the user's password. We get this
                // data from the entry forms we created earlier
                Login(email.Text, password.Text);
            };
        }

        // The Login function makes a call to the Auth0 REST API
        // and attempts to authenticate the user.
        public void Login(string username, string password)
        {
            // We are using the RestSharp library which provides many useful
            // methods and helpers when dealing with REST.
            // We first create the request and add the necessary parameters
            var client = new RestClient("https://{YOUR-AUTH0-DOMAIN}.auth0.com");
            var request = new RestRequest("oauth/ro", Method.POST);
            request.AddParameter("client_id", "{YOUR-AUTH0-CLIENT-ID}");
            request.AddParameter("username", username);
            request.AddParameter("password", password);
            request.AddParameter("connection", "{YOUR-CONNECTION-NAME-FOR-USERNAME-PASSWORD-AUTH}");
            request.AddParameter("grant_type", "password");
            request.AddParameter("scope", "openid");

            // We execute the request and capture the response
            // in a variable called `response`
            IRestResponse response = client.Execute(request);

            // Using the Newtonsoft.Json library we deserialize the string into an
            // object; we have created a LoginToken class that will capture the keys
            // we need
            LoginToken token = JsonConvert.DeserializeObject<LoginToken> (response.Content);

            // We check to see if we received an `id_token` and, if we did, make a
            // secondary call to get the user data. If we did not receive an `id_token`
            // we can safely assume that the authentication failed, so we display an
            // error message telling the user to try again.
            if (token.id_token != null) {
                Application.Current.Properties ["id_token"] = token.id_token;
                Application.Current.Properties ["access_token"] = token.access_token;
                GetUserData (token.id_token);
            } else {
                DisplayAlert ("Oh No!",
                    "It seems that you have entered an incorrect email or password. Please try again.",
                    "OK");
            };
        }

        // If we did get an `id_token` we make a secondary call to the Auth0 REST API.
        // This time we call the `tokeninfo` endpoint, which requires an `id_token`.
        // The endpoint then verifies the token is valid and returns user data.
        public void GetUserData(string token)
        {
            var client = new RestClient("https://{YOUR-AUTH0-DOMAIN}.auth0.com");
            var request = new RestRequest("tokeninfo", Method.GET);
            request.AddParameter ("id_token", token);

            IRestResponse response = client.Execute (request);
            User user = JsonConvert.DeserializeObject<User> (response.Content);

            // Once the call executes, we capture the user data in the
            // `Application.Current` namespace which is globally available in Xamarin
            Application.Current.Properties ["email"] = user.email;
            Application.Current.Properties ["picture"] = user.picture;

            // Finally, we navigate the user to the Orders page
            Navigation.PushModalAsync (new OrdersPage ());
        }

        public class LoginToken {
            public string id_token { get; set; }
            public string access_token { get; set; }
            public string token_type { get; set; }
        }

        public class User {
            public string name { get; set; }
            public string picture { get; set; }
            public string email { get; set; }
        }
    }
}
```

Now that we have our login functionality in place, let's go back to the orders page and update the user object to use real data. Open up the OrdersPage.cs file and make the following adjustments:

```csharp
User user = new User {
    Email = Application.Current.Properties["email"] as string,
    Avatar = Application.Current.Properties["picture"] as string
};
```

Now when a user logs in and is redirected to the Orders page, they will see their email and avatar instead of the static placeholder we had initially set up. Let's build our solution and launch the simulator. Let's first try to log in with invalid credentials to ensure that we get the correct error message. Next, log in with a real account and you should be taken to the orders page.

In our last topic on Xamarin authentication, let's allow users to sign up for a CloudCakes account. Here, we will utilize the dbconnections/signup Auth0 API endpoint and pass the credentials the user will authenticate with.
Let's see the implementation in code:

```csharp
using System;
using Xamarin.Forms;
using RestSharp;
using Newtonsoft.Json;

namespace CloudCakes
{
    public class SignupPage : ContentPage
    {
        public SignupPage ()
        {
            // Omitted code we already went over

            var signupButton = new Button {
                Text = "Sign Up"
            };

            signupButton.Clicked += (object sender, EventArgs e) => {
                // We have created a function that takes the captured email
                // and password and attempts to create a new user account
                Signup(email.Text, password.Text);
            };
        }

        // The Signup function calls the `dbconnections/signup` API and attempts
        // to create a new user account
        public void Signup(string username, string password)
        {
            var client = new RestClient("https://{YOUR-AUTH0-DOMAIN}.auth0.com");
            var request = new RestRequest("dbconnections/signup", Method.POST);
            request.AddParameter("client_id", "{YOUR-AUTH0-CLIENT-ID}");
            request.AddParameter("email", username);
            request.AddParameter("password", password);
            request.AddParameter("connection", "{YOUR-DATABASE-CONNECTION-NAME}");

            IRestResponse response = client.Execute(request);

            // Once the request is executed we capture the response.
            // If we get a `user_id`, we know that the account has been created
            // and display an appropriate message. If we do not get a `user_id`
            // we know something went wrong, so we ask the user if they already
            // have an account and, if not, to try again.
            UserSignup user = JsonConvert.DeserializeObject<UserSignup> (response.Content);

            if (user.user_id != null) {
                DisplayAlert ("Account Created",
                    "Head back to the homepage and login with your new account",
                    "Ok");
            } else {
                DisplayAlert ("Oh No!",
                    "Account could not be created. Do you already have an account? Please try again.",
                    "Ok");
            }
        }

        public class UserSignup {
            public string user_id { get; set; }
        }
    }
}
```

With this code implemented, let's build and launch the iOS simulator, then navigate to the signup page. Let's first create a new account so that we can observe the success state.
Now, let's try to recreate that same account and observe the failure state. Finally, navigate back to the homepage and login with your newly created account. If all went well, you should be navigated to the Orders page and be able to place an order!

Putting It All Together

We've written a lot of code today. Up until now, we have been testing everything using the iOS simulator, so we know our app runs well on iOS. Since we're building a cross-platform app, let's build and deploy our solution for Android. The easiest way to accomplish this is to right click on the CloudCakes.Droid directory, navigate to Run With, and select the Xamarin Android API 15 emulator. This will compile the app for Android and run it in the Android Emulator. It may take a few minutes for everything to compile, but once ready the CloudCakes app will open in the Android Emulator. Since we've written all of our code in the shared CloudCakes directory, our application will function exactly as it did in the iOS simulator. To test, navigate to the About Us page, go back and navigate to the signup page, create a new account, login and place an order. You should see the exact same results as you did with the iOS simulator. The only difference is that instead of the native iOS UI, you see the app in the native Android UI, and since we targeted SDK 15, you see the app in the Holo design specification.

Today, we were able to build a cross-platform app that works on both Android and iOS. The app adopted the design language of its target platform and behaved as a native app. The end user would not be able to tell that our app was built with C# instead of Objective-C or Java. Furthermore, we've implemented Xamarin authentication with the Auth0 RESTful API, which gave us total control of the UI. Xamarin allowed us to build the app without knowing how the Android or iOS runtime worked and showcased how hybrid app development may be a viable option for your apps.
To close out the tutorial, we've created an infographic that highlights the strengths of native, hybrid and responsive web development. Native and responsive web have clear strengths and weaknesses while hybrid app development seems to be the best of both worlds. A word of caution though, the popular phrase "jack of all trades, master of none" tends to apply to hybrid apps so consider all of your requirements before selecting a platform to build your mobile applications on.
Ext.data.DirectStore: root property issues

Hi all, I'm trying to write a simple piece of code to learn how to use an Ext.data.DirectStore, but I can't figure out what's going on. Let me explain my problem better: what I want to do is really simple, I want to populate a GridPanel using a directFn. I wrote and included in my test page a .ashx that returns the remoting API as follows:

Code:

Ext.app.REMOTING_API = {"type":"remoting","url":"/TestHandler.ashx","namespace":"MyApp","actions":{"Test":[{"name":"getAll","len":0}]}};

Code:

var store = new Ext.data.DirectStore({
    storeId: 'fileStore',
    autoLoad: true,
    directFn: MyApp.Test.getAll,
    paramsAsHash: false,
    totalProperty: 'count',
    root: 'testData',
    fields: [
        {name: 'name', type: 'string'}
    ],
    listeners: {
        dataChanged: function(store){
            Ext.Msg.alert('Store Load', 'The Data has been loaded!');
        }
    }
});

var gp = new Ext.grid.GridPanel({
    id: 'gp',
    store: store,
    renderTo: Ext.getBody(),
    height: 300,
    columns: [
        {id: 'name', header: 'Name', sortable: true, dataIndex: 'name'}
    ],
    sm: new Ext.grid.RowSelectionModel({})
});

This is the response coming back from the server:

Code:

{"type":"rpc","tid":2,"action":"Test","method":"getAll","result":"{count:11,'testData':[{\"name\":\"kids_hug.jpg\"},{\"name\":\"kids_hug2.jpg\"},{\"name\":\"sara_pink.jpg\"},{\"name\":\"sara_pumpkin.jpg\"},{\"name\":\"sara_smile.jpg\"},{\"name\":\"up_to_something.jpg\"},{\"name\":\"zack.jpg\"},{\"name\":\"zacks_grill.jpg\"},{\"name\":\"zack_dress.jpg\"},{\"name\":\"zack_hat.jpg\"},{\"name\":\"zack_sink.jpg\"}]}","message":null}

Note: If I change the DirectStore root property as follows

Code:

root: ''

Does anyone know what I'm missing? I read a lot of posts and everyone solved it with the config

Code:

paramsAsHash: false

Thanks a lot!

Sencha Inc Andrea Cammarata, Solutions Engineer Owner at SIMACS @AndreaCammarata github: TUX components bundle for Sencha Touch 2.x.x

Does anyone have a solution for this issue? Any help / suggestion would be appreciated.
Thanks!

Get the latest version of the router.

Evan Trimboli, Sencha Developer. Twitter - @evantrimboli. Don't be afraid of the source code!

Hi Evant, I followed your suggestion and downloaded the latest version of your router, which you posted today, but it doesn't work with my example either, so I suppose it's a bug. I'm not moving forward, so I'm planning to continue using my classic version of ux.FileView without integrating the new Ext.Direct development API. I've posted a screenshot of my "newborn" ux.FileView. It comes with full locale support to translate it into every language you want, and fully customizable file icons using a single sprite. I will make a new post in the appropriate "User Extensions" forum section when I'm done! If someone can help me with the Ext.Direct API I will appreciate it! Thanks to everybody!
Topic closed. 624 replies. Last post 5 years ago by yolie. 974. 100 bucks on it mid/eve for for 2 days starting tonite! wish me luck - im going in! woowow lucky .. Goodluck ma boy .. hope you nail big .. PETS#=105-215 Going for the kill. Then again may come out with just nothing....lol Eve picks 947 420 423 789 977 488 444 888 def playing that for sure .. also playing 717 for tonight ... goodluck players.. congrats getmoney. Enroll me in your class! lol. What nums do you like for tonight? add 384 add 384 484 Winning Numbers : 6 - 2 - 3 MissFeeport what happen???!!!!!!! I think we all need to eat some black-eyed peas 'n rice with a side order of greens to bring our luck back..lol I know right....lol $$"Practice makes perfect.....keep at it and you will succeed one day"$$ Thanks guys! Classes in session.... A winner plays at all cost....Go for broke! It's usally that last dollar that brings home the win! 2/2/12 M/E ekem6078,s TTT Double Maniaic TTT +1 678 229 103 God Bless America how about these for midday and evening drawing 74x 86x try these if you wanna 78x 76x 48x 46x Thanks for posting Gunjack,looking good pairs nlsa Scisclo 630,100,540,431,460,103,305,330,070,316 wtg Gunjack!!!!!! Thanks so much for the.
How do I get the picture size with PIL?

```python
from PIL import Image

im = Image.open('whatever.png')
width, height = im.size
```

According to the documentation.

You can use Pillow (Website, Documentation, GitHub, PyPI). Pillow has the same interface as PIL, but works with Python 3.

Installation

$ pip install Pillow

If you don't have administrator rights (sudo on Debian), you can use

$ pip install --user Pillow

Other notes regarding the installation are here.

Code

```python
from PIL import Image

with Image.open(filepath) as img:
    width, height = img.size
```

Speed

This needed 3.21 seconds for 30336 images (JPGs from 31x21 to 424x428, training data from National Data Science Bowl on Kaggle). This is probably the most important reason to use Pillow instead of something self-written. And you should use Pillow instead of PIL (python-imaging), because it works with Python 3.

Alternative #1: Numpy (deprecated)

I keep scipy.ndimage.imread as the information is still out there, but keep in mind: imread is deprecated! imread is deprecated in SciPy 1.0.0, and [was] removed in 1.2.0.

```python
import scipy.ndimage

height, width, channels = scipy.ndimage.imread(filepath).shape
```

Alternative #2: Pygame

```python
import pygame

img = pygame.image.load(filepath)
width = img.get_width()
height = img.get_height()
```

Since scipy's imread is deprecated, use imageio.imread.

- Install: pip install imageio
- Use:

```python
import imageio

height, width, channels = imageio.imread(filepath).shape
```
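One more option worth noting: if you only need the dimensions of a PNG and cannot install any package, the size can be read straight from the file header, since bytes 16-24 of the IHDR chunk hold big-endian width and height. This is a stdlib-only sketch, not a substitute for Pillow's broad format support; the `make_png` helper is just an invented way to produce a test image in memory:

```python
import struct
import zlib

def png_size(data: bytes):
    """Return (width, height) of a PNG given its raw bytes."""
    # PNG layout: 8-byte signature, 4-byte chunk length, 4-byte b'IHDR',
    # then big-endian width and height as the first 8 bytes of the payload.
    if data[:8] != b'\x89PNG\r\n\x1a\n' or data[12:16] != b'IHDR':
        raise ValueError('not a PNG')
    width, height = struct.unpack('>II', data[16:24])
    return width, height

def make_png(width, height):
    """Build a minimal all-black grayscale PNG in memory (test data only)."""
    def chunk(ctype, payload):
        # Each chunk is: length, type, payload, CRC over type + payload.
        return (struct.pack('>I', len(payload)) + ctype + payload
                + struct.pack('>I', zlib.crc32(ctype + payload)))
    ihdr = struct.pack('>IIBBBBB', width, height, 8, 0, 0, 0, 0)
    raw = b''.join(b'\x00' + b'\x00' * width for _ in range(height))
    return (b'\x89PNG\r\n\x1a\n' + chunk(b'IHDR', ihdr)
            + chunk(b'IDAT', zlib.compress(raw)) + chunk(b'IEND', b''))

print(png_size(make_png(31, 21)))  # -> (31, 21)
```

For real projects, Pillow's `im.size` remains the robust choice since it handles every image format, not just PNG.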
Due by 11:59pm on Thursday, 9/11

Instructions: Download, complete, and submit the quiz1.py file.

Implement a function two_equal that takes three integer arguments and returns whether exactly two of the arguments are equal and the third is not.

```python
def two_equal(a, b, c):
    """Return whether exactly two of the arguments are equal and the third
    is not.

    >>> two_equal(1, 2, 3)
    False
    >>> two_equal(1, 2, 1)
    True
    >>> two_equal(1, 1, 1)
    False
    >>> result = two_equal(5, -1, -1)  # return, don't print
    >>> result
    True
    """
    "*** YOUR CODE HERE ***"
```

Implement same_hailstone, which returns whether positive integer arguments a and b are part of the same hailstone sequence. A hailstone sequence is defined in Homework 1. Solutions to Homework 1 (which you may reference) have been posted.

```python
def same_hailstone(a, b):
    """Return whether a and b are both members of the same hailstone
    sequence.

    >>> same_hailstone(10, 16)  # 10, 5, 16, 8, 4, 2, 1
    True
    >>> same_hailstone(16, 10)  # order doesn't matter
    True
    >>> result = same_hailstone(3, 19)  # return, don't print
    >>> result
    False
    """
    "*** YOUR CODE HERE ***"
```

A near-golden rectangle has width w and height h where h/w is close to w/h - 1. For example, in an 8 by 13 rectangle, the difference between 8/13 and 5/8 is less than 0.01. Implement near_golden, which returns the height of a near-golden rectangle. The near_golden function takes an even integer perimeter greater than 3 as its argument. It returns the integer height h of a rectangle with the given perimeter that has the smallest possible difference between h/w and w/h - 1.

NOTE: The words "greater than 3" replaced the word "positive" in the paragraph above at 6:50pm Wednesday 9/10.

Hints: The perimeter of a rectangle is w+w+h+h. Since h and perimeter are integers, w must be an integer as well. The built-in abs function returns the absolute value of a number.

```python
def near_golden(perimeter):
    """Return the integer height of a near-golden rectangle with PERIMETER.

    >>> near_golden(42)  # 8 x 13 rectangle has perimeter 42
    8
    >>> near_golden(68)  # 13 x 21 rectangle has perimeter 68
    13
    >>> result = near_golden(100)  # return, don't print
    >>> result
    19
    """
    "*** YOUR CODE HERE ***"
```
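To see how the first problem's doctests constrain a solution, here is one way two_equal could be reasoned about (an illustrative sketch, not an official solution):

```python
def two_equal(a, b, c):
    """Return whether exactly two of a, b, c are equal."""
    # Some pair must match, but all three matching is excluded.
    return (a == b or b == c or a == c) and not (a == b == c)

print(two_equal(1, 2, 1))  # -> True
print(two_equal(1, 1, 1))  # -> False
```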
So the method would be to add the argument:

- Code: Select all

{"__builtins__":None}

but then you guys say that is not enough, although all I could think of right now is that __import__() and open() are pretty nasty. So what if you just removed the ability to use package names, builtins, and keywords? At that point, would there still be a back door? I wouldn't think there is anything left:

- Code: Select all

import sys
import pkgutil
import keyword

pkg_names = []
for i in pkgutil.iter_modules():
    pkg_names.append(i[1])

forbidden = pkg_names + dir(__builtins__) + keyword.kwlist

while True:
    ok = True
    s = input(': ')
    for word in forbidden:
        if word in s:
            print('{} is not allowed'.format(word))
            ok = False
    if s and ok:
        try:
            print(eval(s))
        except:
            print(sys.exc_info())

and my example outputs:

: __import__('subprocess').Popen('ls')
imp is not allowed
subprocess is not allowed
__import__ is not allowed
open is not allowed
import is not allowed
or is not allowed
:
: 1/0
(<class 'ZeroDivisionError'>, ZeroDivisionError('division by zero',), <traceback object at 0x7fa1f78e6ab8>)
: 1+1
2
: 1+1+1*10
12
: 2*2(2)
(<class 'TypeError'>, TypeError("'int' object is not callable",), <traceback object at 0x7fa1f78e6a28>)
: a + 1
(<class 'NameError'>, NameError("name 'a' is not defined",), <traceback object at 0x7fa1f78c5ef0>)
: a = 1
(<class 'SyntaxError'>, SyntaxError('invalid syntax', ('<string>', 1, 3, 'a = 1')), <traceback object at 0x7fa1f78c5ef0>)
: a + 1
(<class 'NameError'>, NameError("name 'a' is not defined",), <traceback object at 0x7fa1f78e6a28>)
: class Test:;self.a=0
as is not allowed
class is not allowed
: open('test.txt,'w').write('hello')
test is not allowed
open is not allowed
: open('test.txt).read()
test is not allowed
re is not allowed
open is not allowed
: open('test.txt').read()
test is not allowed
re is not allowed
open is not allowed

Well, 'test' is in there because I have a module named test in my home directory.
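As an aside on the same theme: substring blacklists like the one above are easy to bypass (e.g. by building forbidden names from string pieces inside the expression), so when only simple arithmetic input is actually needed, a whitelist over the parsed syntax tree is a safer pattern. This is a minimal stdlib sketch of that idea, not something from the thread:

```python
import ast
import operator

# Whitelisted binary operators for a tiny arithmetic evaluator.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_arith(expr):
    """Evaluate simple arithmetic without exposing builtins or imports."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -ev(node.operand)
        # Anything else (names, calls, attributes, ...) is rejected outright.
        raise ValueError('disallowed expression')
    return ev(ast.parse(expr, mode='eval'))

print(safe_arith('1+1+1*10'))  # -> 12
```

Because only a handful of node types are accepted, inputs such as `__import__('os')` fail at the `Call` node rather than relying on matching forbidden words in the raw string.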
Forward-Looking Tail Risk Exposures at U.S. Bank Holding Companies

Abstract

This paper develops a simple method for quantifying banks’ exposures to large (negative) shocks in a forward-looking manner. The method is based on estimating banks’ share price sensitivities to (market) put options and does not require the actual observation of tail risk events. We find that estimated (excess) tail risk exposures for U.S. Bank Holding Companies are negatively correlated with their share price beta, suggesting that banks which appear safer in normal periods are actually more crisis prone than their beta would suggest. We also study the determinants of banks’ tail risk exposures and find that their key drivers are uninsured deposits and non-traditional activities that leave assets on banks’ balance sheets.

Keywords: Tail risk, Forward-looking, Banks, Systemic crisis

JEL Classification: G21, G28

1 Introduction

A systemic banking crisis—a situation in which many banks are in distress at the same time—can induce large costs for the economy. The task of supervisors and regulators is to avoid and mitigate, as far as possible, such crises. For this they need advance information about how banks are exposed to shocks to the economy. This allows them to identify weak banks and put them under increased scrutiny, but also to monitor general risks in the financial system. When evaluating the exposure of banks it is also of paramount importance to distinguish between exposures to normal market shocks and exposures to large shocks. For example, a financial institution that follows a tail risk strategy (such as writing protection in the CDS market) may appear relatively safe in normal periods as it earns steady returns but may actually be very vulnerable to significant downturns in the economy. Supervisors and regulators obtain their information to a large extent from information generated by the bank itself, such as its accounts.
While these sources are a crucial ingredient of the evaluation process, they are not free from drawbacks. For example, most of this information is at the discretion of banks and may be used strategically.1 Moreover, this data is typically backward-looking and available only at relatively low frequency. Accounting information also misses important aspects such as informal knowledge (e.g., CEO reputation) or information contained in analysts’ reports. In recent years there has been growing interest in using market-based measures of bank risk. This is on the back of evidence that market signals contain valuable information about banks’ risks (see Flannery (1998, 2001) for surveys). Some of the measures explicitly take into account systemic and tail risk aspects (e.g., Acharya et al. 2009; Adrian and Brunnermeier 2009; De Jonghe 2010). They typically use information from historical tail risk events to compute realized tail risk exposures over a certain period. In this paper we develop a forward-looking measure of bank tail risk. We define a bank’s (systemic) tail risk as its exposure to a large negative market shock. We measure this exposure by estimating a bank’s share price sensitivity to changes in far out-of-the-money put options on the market, correcting for market movements themselves. As these options only pay out in very adverse scenarios, changes in their prices reflect changes in the perceived likelihood and severity of market crashes. Banks that show a high sensitivity to such put options are hence perceived by the market as being severely affected should such a crash materialize. As this sensitivity reflects perceived exposures to a hypothetical crash, it is truly forward-looking in nature. This property is important to the extent that bank risks change quickly and hence historical tail risk exposures become less informative.
Another advantage of this method is that it does not require the actual observation of any crashes, as the method relies on changes in their perceived likelihood. We use our methodology to estimate tail risk exposures of U.S. bank holding companies. We find that the estimated exposures are inversely related to their CAPM beta. Since our methodology estimates tail risk over and above beta risk, this implies that low beta-banks have more tail risk than their beta would suggest. Thus, banks which appear safe in normal times are actually more exposed to a crash. Conversely, of course, high beta banks have lower tail exposure than their normal risk suggests. In other words, banks’ risk exposures converge in the tails. This has interesting implications for financial regulation and we discuss various interpretations of this finding in the paper. We also use our methodology to understand the main drivers of bank tail risk. Understanding these drivers is important for regulators as it gives them information about which activities should be encouraged and which not. There is so far very little research on this question (a notable exception is De Jonghe 2010). Our main findings are that variables which proxy for traditional banking activities (such as lending) are associated with lower perceived tail risk. Several non-traditional activities, on the other hand, are perceived to contribute to tail risk. In particular, we find that securities held for-sale, trading assets and derivatives used for trading purposes are associated with higher tail risk. These findings are consistent with the experience of the crisis of 2008 and 2009. Interestingly, securitization, asset sales and derivatives used for hedging are not associated with an increase in tail risk exposure. This suggests that a transfer of risk itself is not detrimental for tail risk, but that non-traditional activities that leave risk on the balance sheet are. 
On the liability side we find that leverage itself is not related to tail risk but that large time deposits (which are typically uninsured) are. We also find that perceived tail risk falls with size, which is indicative of bail-out expectations due to too-big-to-fail policies. The remainder of this paper is structured as follows. In Section 2 we briefly review existing measures of tail risk. Section 3 develops the methodology for measuring tail risk exposure using put option sensitivities. Section 4 contains the estimation of tail risks. Section 5 studies the determinants of tail risk. Section 6 concludes.

2 Existing tail risk measures

The Value-at-Risk (VaR) has for many years been the standard measure used for risk management. VaR is defined as the worst loss over a given holding period within a fixed confidence level.2 A shortcoming of the VaR is that it disregards any loss beyond the VaR level. The expected shortfall (ES) is an alternative risk measure that addresses this issue. The ES is defined as the expected loss conditional on the losses exceeding the VaR level. Another frequently used measure is Moody’s KMV. Essentially, Moody’s KMV is a distance-to-default measure that is turned into an expected default probability with the help of a large historical dataset on defaults.3 While these measures focus on individual bank risk, there has been a growing interest in recent years in systemic measures of bank risk. One strand of the literature focuses on tail-betas (e.g., De Jonghe 2010). This concept applies extreme value theory to derive predictions about an individual bank’s value in the event of a very large (negative) systematic shock. Loosely speaking, this method uses information from days where stock market prices have fallen heavily and considers the covariation with a bank’s share price on the same day. It thus focuses on realized covariances conditional on large share price drops.
A difficulty encountered when applying this method is that tail risk events are only rarely observed, and hence a large number of observations is needed to get accurate estimates (De Jonghe (2010) suggests at least six years of daily data). Acharya et al. (2009) develop a measure similar to the concept of market dependence, which is based on expected shortfalls instead of betas. They propose measuring the Marginal Expected Shortfall (MES), which is defined as the average loss by an institution when the market reaches a certain quantile of its left tail. Huang et al. (2010) propose a related measure focusing instead on a threshold loss for a portfolio of large banks as the tail risk event. Adrian and Brunnermeier (2009) consider a different aspect of systemic risk. They estimate the contribution of each institution to the overall system risk. A bank’s CoVaR is defined as the VaR of the whole financial sector conditional on the bank being at its own VaR level. The bank’s marginal contribution to the overall systemic risk is then measured as the difference between the bank’s CoVaR and the unconditional financial system VaR. An advantage of the CoVaR is that it is relatively simple to estimate, as it is based on quantile regressions. In terms of its informational properties it is similar to the tail risk beta in that it focuses on realized tail risk. Our measure is most similar to the tail risk betas as we also measure bank exposures to large market swings. A difference that is important for the interpretation of the estimates, however, is that while the tail risk beta relates to large daily market drops, we estimate exposures to a large prolonged downturn in the market (e.g., several months). There is a literature on hedge fund performance which uses a methodology similar to ours.
This literature estimates tail exposures for various styles of hedge funds with a non-linear market factor that takes the shape of an out-of-the-money put option (see, for example, Agarwal and Naik 2004). However, the focus of this literature is different. While we are interested in estimated tail risk exposures per se (i.e., regulators want to know which banks are more exposed to tail risk), the hedge fund literature looks at whether tail risk exposures can be used to forecast fund performance.

3 Measuring tail risk using put option sensitivities

In this section we present our methodology for measuring banks’ tail risk exposures. We define the latter to be the bank’s exposure to a general market crash (that is, a severe downturn in the economy). If the market crashes, a bank may suffer large, simultaneous losses on its assets, which may push it close to or into bankruptcy. Crucially, the extent to which it is exposed to crashes may differ from its normal market sensitivity. Consider two banks, A and B. Bank A invests mostly in traditional banking assets such as, for example, loans to businesses and households. Moreover, it invests in assets that are mainly exposed to normal period risk, such as, for example, junior tranches of securitization products (which lose value for modest increases in defaults, but are insensitive to defaults that go beyond the first loss level). In addition to these assets, bank A insures itself against default by buying protection on its assets (such as by buying credit default swaps on its loans). Bank A’s equity value will thus depend less on the market in times of crisis, compared to normal times. Bank B follows a different business strategy. It invests in traditional assets as well. However, in addition, it also follows investment strategies that return a small and steady payoff in normal periods but incur large losses when the market crashes.
An example of such a strategy is selling protection in the credit default swap (CDS) market or buying senior tranches of securitization products (which lose value only when all other tranches have already incurred a total loss). Thus, even though bank B’s equity value may behave similarly to bank A’s in normal periods, it tends to fall relatively more when the market crashes. In our empirical implementation we will identify tail risk sensitivities (γ in Eq. 3) by adding a put option (on the market) to a standard market regression and interpreting the sign of the put option coefficient. Tail risk sensitivities will thus be estimated through changes in put option prices, which (loosely speaking) arise from changes in either the likelihood of a market crash or its severity.5

3.1 A discussion of the methodology

We believe that this method has several attractive features. First, the method is forward-looking in nature, that is, it captures expected tail risk exposure at banks. This contrasts with other popular methods for measuring tail risk, such as tail risk betas or the CoVaR. These methods essentially compute correlations (or covariation) of banks with the market (or other banks) at days of large share price drops. They thus draw inferences from historical tail risk distributions and hence measure realized tail risk. The difference between forward- and backward-looking measures is likely to be limited when banks only undergo small changes in their risks over time, but is potentially important in a dynamically evolving financial system. Second, our measure identifies banks’ tail risk exposure through changes in expected market tail risk, as measured by put option prices. This has the advantage that for our estimation we do not need tail risk events to materialize. Such events, by definition, occur only very infrequently and hence it is difficult to estimate their properties.
Existing measures that rely on the historical distribution of tail risk events reduce this problem by relying on a large time series and by looking at modest tail risk realizations that occur more frequently. Our method allows the measurement of exposure to extreme forms of tail risk (for this one simply includes a very far out-of-the-money put option). Since we estimate exposures to market crashes, our measure captures system tail risk exposure. This is desirable since externalities from banking failures are typically associated with system events, and not isolated bank failures. It should, however, be kept in mind that a bank that has a low estimated systematic tail risk may still be individually very risky to the extent that it pursues activities that are uncorrelated with the market. In addition, one should also keep in mind that market risk is not identical to banking sector risk. Even though banks’ market exposures have probably increased in recent decades, credit risk is still the major source of risk for banks. However, market and credit exposures are highly correlated in practice: when economic conditions deteriorate, the default risk of firms increases and stock values decline at the same time. For example, during our sample period the correlation between the S&P 500 and the CDX crossover index was −0.77. Due to this high correlation, our estimates will (indirectly) also capture credit risks at banks.6 In our empirical implementation of Eq. 3 we measure tail risk exposures by the (negative) coefficient of a put-option return (γ) in a regression on bank stock returns. This, however, is in a regression where we also separately control for the market return. Conditional on the market, a key driver of put-option returns is market volatility. We can thus expect the γ to give us similar information as the (negative) coefficient in a regression of bank returns on market volatility.7 This provides us with an alternative interpretation of the γ.
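To make the estimation concrete, the amended market regression can be sketched in a few lines. The exact specification of Eqs. 3 and 4 is not reproduced in this excerpt, so the form below (bank return regressed on a constant, the market return, and the put-option return, with the tail-risk estimate read off as γ = −α2) is an assumption pieced together from the surrounding text, and all returns are simulated rather than taken from the paper's sample:

```python
import random

random.seed(0)
T = 2000

# Simulated daily market returns and (scaled) put-option returns;
# the put moves against the market. Purely illustrative numbers.
r_m = [random.gauss(0.0, 0.01) for _ in range(T)]
r_put = [-5.0 * m + random.gauss(0.0, 0.02) for m in r_m]

# A hypothetical bank with beta 1.2 and tail-risk gamma 0.3
# (i.e. a put-option coefficient alpha_2 = -0.3 in the regression).
beta_true, gamma_true = 1.2, 0.3
r_bank = [beta_true * m - gamma_true * p + random.gauss(0.0, 0.005)
          for m, p in zip(r_m, r_put)]

def ols(X, y):
    """Solve the normal equations X'X b = X'y by Gauss-Jordan elimination."""
    k = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(k)]
         + [sum(row[i] * yi for row, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(k):
            if r != c:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [A[i][k] for i in range(k)]

X = [[1.0, m, p] for m, p in zip(r_m, r_put)]
a0, a1, a2 = ols(X, r_bank)
gamma_hat = -a2   # tail-risk estimate: gamma = -alpha_2
print(a1, gamma_hat)  # close to the true 1.2 and 0.3
```

Even though the market and put-option returns are strongly negatively correlated, including both lets OLS separate the symmetric market sensitivity (β) from the excess crash sensitivity (γ).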
If a bank is symmetrically exposed to upward and downward movements in the market, an increase in volatility will not affect its value and the γ should be zero. However, if a bank is more exposed to downward than to upward movements (e.g., Bank B), its value will decrease when volatility increases. It then obtains a positive γ. The tail-risk estimate can thus also be interpreted as a measure of how much more a bank is exposed to downturns than to upturns. Total tail risk is then a combination of the symmetric dependence on the market (given by the standard β-risk) and the asymmetric sensitivity to downturns (the γ-risk). It should be noted that our measure, like other market-based measures, is net of any bailout expectations. If, for example, markets anticipate that governments may bail out certain banks (for example because they are too-big-to-fail) then these banks may have a low perceived tail risk even if their underlying activities are relatively risky (Kane (1985), for example, shows that the expected value of these bail-out subsidies can be significant). Thus, while our estimates are important for regulators and supervisors in that they quantify a bank’s effective failure risk, they are less suitable as a basis for regulation that aims at reducing risk-shifting (for example, by conditioning capital requirements on tail risk exposures).

4 Empirical analysis

4.1 Data

We collect daily data on bank share prices and the S&P 500 (our proxy for the market) for the period 4 October 2005 until 26 September 2008 from Datastream. Put option data on the S&P 500 for the same period is from IVolatility.8 In addition, various balance sheet data are collected from the FR Y-9C Consolidated Financial Statements for Bank Holding Companies (BHCs). We focus on U.S. BHCs which are classified as commercial banks and for which data is fully available.
We focus on the BHC instead of the commercial bank itself, as typically it is the BHC that is listed on the stock exchange. Excluded are those banks whose share price change is zero in more than 10% of the cases, in order to mitigate problems arising from illiquidity. Foreign banks (even when listed in the U.S.) and pure investment banks are also excluded. The final sample contains 209 Bank Holding Companies. An important question is the choice of the option strike price. Ideally we would choose an option such that on each day it represents the same crash probability. Simply taking an option with a constant strike price is hence not appropriate, as market prices change over time and hence the moneyness of the option will change. Taking the strike price to be a (fixed) fraction of the S&P 500 is also not desirable, as this ignores that the likelihood of tail risk realizations is also driven by volatility. We hence decided to construct a series of options such that their option price does not vary. Specifically, each day we adjust the strike of the option such that the previous-day price of the option is fixed over time. For this we use an option price of $0.5, which translates into an implied strike that was on average 33% below the S&P 500 during our sample period.9 We have checked the S&P 500 over the last 25 years and have found three periods with stock market declines of this magnitude: the 1987 stock market crash (maximum decline: 32%), the burst of the dotcom bubble (maximum decline: 23%) and the subprime crisis (maximum decline: 41%). Thus, such a decline materialized about once every eight years. In order to compute the option price change for, say, day 1, we proceed as follows. We first identify among all traded options the two strike prices that give day 0 prices closest to 0.5. We then calculate the weight that makes their average price 0.5.
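A minimal sketch of this interpolation step (the prices are hypothetical illustrations; the paper's actual option data are not reproduced here):

```python
# Day-0 prices of the two traded strikes closest to the 0.5 target,
# and their day-1 prices (hypothetical numbers for illustration).
p_a0, p_b0 = 0.40, 0.65
p_a1, p_b1 = 0.55, 0.90

# Weight w on option a such that w*p_a0 + (1-w)*p_b0 == 0.5
w = (0.5 - p_b0) / (p_a0 - p_b0)

# Price change dP of the hypothetical constant-price option, day 0 -> day 1
dP = (w * p_a1 + (1.0 - w) * p_b1) - 0.5
print(round(w, 2), round(dP, 2))  # 0.6 0.19
```

Repeating this every day yields the return series of an option whose (hypothetical) strike varies from day to day but whose previous-day price is always 0.5.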
Given this weight, we calculate the weighted average of their prices at day 1 and calculate from this the change of the price, dP, from day 0 to day 1. Effectively, we compute price changes of options whose (hypothetical) strike price varies from day to day. We initially considered all out-of-the-money puts. A first inspection, however, revealed that strikes at multiples of 100 (i.e. 500, 600, 700, etc.) are much more liquid than put options with other strike prices. We therefore use only these puts. For each day an option’s strike price and its price change are then calculated according to the procedure described above. In order to mitigate the influence of changes in the remaining time to maturity on our analysis, we use for this an “on-the-run” series, where each quarter we jump to more recently issued options with longer maturity. As a result, the remaining time to maturity is limited to an interval of between three and six months.

4.2 Estimated tail risk exposures

We expect α1 in Eq. 4 to be close to one in case banks display similar properties as other firms in the market. We do not hold any priors about the sign of α2 (note that γ in the model (3) relates to −α2 in the estimation; a high α2 thus indicates less tail risk). If a bank is similar to the average firm in the market, its γ should be zero. This is because the bank will then react one-to-one to market movements. Its market dependence in the tail is then not different from its market dependence in normal times and hence gamma is zero. The existence of systemic risk in the financial system may suggest that banks display excess market dependence in the tail, that is, γ is positive (negative α2). However, bail-out expectations may also limit the perceived exposure of the stocks of banks to a market downturn, in which case we obtain a negative γ (positive α2). What can be said about the economic significance of the γ-estimates?
For the β-estimates it is straightforward to interpret their value, since a drop in the market by x% translates into a drop in the stock price by βx%. Such a simple relationship does not exist for the γ because the return on a put option is not proportional to the return on the market. However, in order to get a sense of the economic significance of the γ-estimates one can do the following exercise. We can consider different scenarios for (instantaneous) drops in the market index of, say, 5, 10, 15, or 20%. For these drops, we can calculate implied put-option price changes (using an option price formula). We can then use Eq. 3 to calculate the share price return for the average gamma implied by the put-option change, which is given by the term \(-\gamma \frac{dp}{p+\overline{x}}\ (=\alpha_{2}\frac{dp}{p+\overline{x}})\). Finally, we can compare this return to the share price return implied by the average beta. Some caveats apply to this method. First, when we compute the implied option price change we keep constant all the other determinants of the option price, while in reality a significant drop in the index value, for example, is likely to be associated with a change in volatility as well (most likely an increase in the volatility). Thus, our calculations may over- or underestimate the real impact on the put-option price. Second, the OLS coefficients for beta and gamma relate to small changes in the explanatory variables, while we simulate large changes in these variables. For the beta-coefficient this issue may be relatively innocent—as it only requires the relationship between the bank and stock return to be linear over a wide range of index returns. However, for the gamma this assumption is more problematic. This is because the price of an out-of-the-money put option responds non-linearly to changes in the market. In particular, it becomes more sensitive to the market as the market gets closer to the strike price (less out-of-the-money).
Our OLS estimates of the gamma are based on the less sensitive range (where the index value is far away from the strike), but for the simulations we make inferences about the more sensitive range (where the put option is less out-of-the-money). This may introduce an additional source of error in our exercise. We proceed as follows. In order to calculate the expression \(-\gamma \frac{dp}{p+\overline{x}}\), we assume an index value equal to the average of the S&P 500 during our sample period. From this we calculate the strike of the option using the average discount used in our analysis. We then calculate the implied volatility (using the Black–Scholes formula) which makes the option price equal to 0.5 (the price used in our regressions). Holding this volatility constant we can then compute the option price change if the market drops by a certain amount. Using the sample mean γ, we can then calculate the term \(-\gamma \frac{dp}{p+\overline{x}}\), which gives us the share price return induced by the γ-risk. The share price return induced by the β-risk of the average bank is simply given by the (mean) β times the drop in the market.

Table: Economic significance of γ-estimates. The table calculates the share price reaction to various market crash scenarios. The first column gives the share price reaction implied by the average beta (= 1.56). The second column gives the share price reaction implied by the average gamma (= −7.82). The third column gives the combined share price reaction. The last column gives the difference in the share price response of a bank at the 25% quantile and at the 75% quantile of the gamma-distribution.

Besides this exercise (which compares the average γ-risk with the average β-risk), it is also informative to study how important the cross-sectional variations in γ-risk are in economic terms. This matters for a regulator who wants to know whether a bank that has a high γ-risk relative to its peers really has much more tail risk.
For this we have calculated in the last column of the table the difference in the implied share price return for a bank that has a gamma equal to the 25% quantile of the distribution with the one of a bank at the 75% quantile of the distribution. As before, it turns out that for smaller index drops the gamma does not matter a lot (for a 5% drop the difference in the return is about 0.7% between the two banks). However, for larger drops, the difference becomes important. For example, when the market drops by 20%, a bank with a gamma at the 75% quantile of the γ-distribution drops by 12% more than a bank at the 25% quantile.

Figure: Gamma and bank size

Figure: Gamma vs. beta

This negative relationship has the following interpretation. Since the gamma is estimated from a regression that also includes the market, it measures tail risks over and above the tail-dependence implied by the beta. In other words: a positive gamma implies that a bank has more tail risk than its beta would suggest. A negative relationship between beta and gamma thus means that low-beta banks have more tail risk than their beta would indicate (and vice versa for high-beta banks). Thus, there is convergence in the tail: banks’ risk exposures in the tail are more similar than the ones in normal times. A potential explanation for the negative relationship is so-called tail risk strategies, which produce steady returns in normal periods but actually expose the banks to severe downturns. For example, an institution that writes protection in the CDS market receives in normal periods a relatively safe stream of insurance premia. However, in a large recession many exposures will simultaneously default and large losses may materialize for this institution. Many trading strategies, such as the ones exploiting apparent arbitrage relations, create similar pay-off distributions. If banks that have low beta (and hence also low tail risk) tend to source additional tail risk through such means, this could explain our finding.
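The implied put-option price changes that drive the scenario numbers above can be reproduced in outline with the Black–Scholes put formula. The parameter values below (index level, rate, maturity) are illustrative assumptions, not the paper's actual inputs; only the structure of the exercise follows the text: fix the strike 33% below the index, back out the volatility that prices the put at 0.5, then reprice after an instantaneous drop.

```python
import math

def bs_put(S, K, sigma, r, T):
    """Black-Scholes price of a European put."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

# Illustrative inputs: index at 1300, strike 33% below, 4.5 months
# to maturity, 4% risk-free rate. Assumptions, not the paper's averages.
S0, r, T = 1300.0, 0.04, 0.375
K = (1.0 - 0.33) * S0

# Back out (by bisection) the volatility that makes the put worth 0.5.
lo, hi = 0.01, 1.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if bs_put(S0, K, mid, r, T) < 0.5:
        lo = mid
    else:
        hi = mid
sigma = 0.5 * (lo + hi)

# Holding sigma fixed, reprice after an instantaneous 20% market drop.
p0 = bs_put(S0, K, sigma, r, T)
p1 = bs_put(0.8 * S0, K, sigma, r, T)
print(p0, p1)  # the far OTM put is worth many times more after the crash
```

The resulting jump in the put price, scaled by an estimated γ, gives the tail-risk component of the share price reaction reported in the table.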
Another explanation for this negative correlation is that highly profitable institutions that operate in risky environments protect their franchise, for example by buying protection in the CDS market or by imposing a less fragile capital structure. Yet another interpretation is that it is simply difficult for banks to avoid exposure to systemic events. Thus while banks may differ substantially in their normal business risk, their tail exposure may be rather similar.

- 1. Upper-right quadrant (Quadrant I): Banks with high normal-times risk (β > 1) and high tail risk (γ > 0).
- 2. Upper-left quadrant (Quadrant II): Banks with high normal-times risk (β > 1) and low tail risk (γ < 0).
- 3. Lower-left quadrant (Quadrant III): Banks with low normal-times risk (β < 1) and low tail risk (γ < 0).
- 4. Lower-right quadrant (Quadrant IV): Banks with low normal-times risk (β < 1) and high tail risk (γ > 0).

5 Determinants of bank tail risk

In this section we study whether and how a bank’s business activities relate to its tail risk. The most obvious way to do this is by regressing (estimated) gammas upon a number of balance sheet variables that represent various banking activities. This two-step method, however, has the disadvantage that the estimation is not efficient, as information from the first step (estimating the gammas) is not used in the second step.

Table: Relationship between gamma and bank characteristics. This table reports the negative coefficients of the interaction terms between the adjusted put option and the respective balance sheet variables. It represents the effect of the respective balance sheet item on a bank’s tail risk exposure, where a positive value implies a larger exposure to tail risk.

Column two focuses on banks’ lending activities by including proxies for loan quality and profitability. Among the loan quality proxies only the loan growth variable is significant, indicating a positive relationship with tail risk.
This is consistent with the idea that a bank may only grow faster at the cost of lowering lending quality, and hence may become more exposed in a downturn.12 We also find that a higher interest rate on the loans is associated with less tail risk, which can be explained by the fact that this indicates a higher profitability of banks, thus exposing them less to a crash in the market. Additionally, we include the return on assets (ROA) to capture the returns from other (partly non-traditional) asset activities. We find a positive relationship with tail risk, which is consistent with other recent findings (e.g., Demirgüç-Kunt and Huizinga 2010).13 Next, we turn to the influence of other assets. In column three we include held-to-maturity securities, for-sale securities and trading assets (all scaled by total assets). Only trading assets turn out significant, and only at the 10% level. At this point, one has to keep in mind that non-traditional activities are likely to be negatively correlated with traditional activities (banks may specialize in either), which may create multicollinearity problems and hence affect the estimates. Therefore, in column four we use the ratio of commercial and industrial loans to total assets (C&I Loans/TA) instead of the loan-to-asset ratio (the traditional activity), as it is less correlated with the non-traditional activities. The result is that trading assets and for-sale securities now contribute very significantly to tail risk. Held-to-maturity securities have a positive coefficient as well, but its magnitude and significance are lower. The C&I-loans-to-asset ratio is insignificant, similar to the loan-to-asset ratio in column three. It is often argued that non-traditional activities increase (tail) risk exposure. In columns five and six, we analyze what role financial innovations play among the non-traditional activities. First, we investigate securitization and asset sales activities.
In addition to the total value of securitization and asset sales (both scaled by total assets) we also include the internal and external credit exposure arising from these activities. The internal credit exposure arises from a bank’s own securitization or asset sale activities via recourse and other credit-enhancing agreements between the bank and its special purpose vehicle (SPV). An external credit exposure can arise if a bank provides credit enhancements to other banks’ securitization structures. Column five shows that only the external credit exposure variable is significant and positive. This is in line with our prior findings, as external credit exposure is new credit exposure taken on in addition to existing exposure. Moreover, such exposure (for example, from credit enhancements) only materializes under relatively adverse scenarios, and hence should be related to tail risk. The insignificance of a bank’s own securitization and asset sale activities may indicate that opposing forces are at work. On the one hand, securitization and asset sales are, by themselves, of course a means of off-loading risk to other market participants, making a bank less risky.14 In particular, if the bank keeps the equity tranche but sells senior tranches, it sheds tail risk relative to normal period risk. On the other hand, recent experience has shown that these activities induced banks to take on more risk.15 In addition, although the credit exposure seemingly disappeared from the balance sheet to the SPV (which is legally independent), the market might expect that this separation does not survive if the SPV encounters large losses. A bank might buy back assets from its SPV in order to protect its reputation and customer base (as happened in the case of Bear Stearns). Therefore, the credit exposure (which is mostly tail risk exposure) may not be effectively removed by securitization. Column six focuses on banks’ derivatives activities.
Based on the available data, we can distinguish between derivatives that are held for trading purposes and derivatives that are held for other purposes (most likely hedging). A priori, one would expect the latter to reduce tail risk. The expected effect of derivatives trading is less clear-cut. Resulting counterparty risk (which tends to materialize in tail risk scenarios) may, for example, create an increase in tail risk exposure. The results in column six show that derivatives held for trading contribute to tail risk, while the other derivatives do not seem to affect it. The latter is somewhat surprising but may be explained by the fact that only some of these derivatives are used for hedging and that they create counterparty risk as well.

The last column takes a closer look at the importance of capital structure for tail risk. In column one we found that the leverage ratio does not contribute to tail risk exposure. We now include information on the share of deposits and the composition of deposits. In the last column of Table 5, in addition to the variables from column one, we consider the deposit-to-liabilities ratio and the ratio of time deposits above $100,000 to domestic deposits.16 Time deposits above $100,000 were not insured during our sample period, which makes them similar to wholesale funding, as both funding sources might be prone to runs. The results in column seven show that the leverage ratio is again not significant. Insignificance also obtains for the deposit-to-liabilities ratio. However, the time deposits above $100,000 do contribute positively and significantly to tail risk. Since these deposits are subject to withdrawal risks similar to wholesale funding, this result is consistent with Demirgüç-Kunt and Huizinga (2010), who find that wholesale funding increases bank risk.17

6 Conclusion

In this paper we propose a forward-looking method to measure tail risk exposures at banks.
Tail risk is defined as a bank’s exposure to a large negative market shock, and it is measured by estimating a bank’s share price sensitivity to changes in far out-of-the-money put options on the market, correcting for market movements themselves. Because far out-of-the-money put options on the market only pay out if the market crashes, changes in their prices reflect changes in the perceived likelihood and severity of a crash. The estimated sensitivities, in turn, represent the market’s perception of exposures to a hypothetical crash, making them a truly forward-looking measure. Another attractive feature of this measure is that it does not require the actual observation of tail risk events, since it identifies banks’ tail risk exposure through changes in expected market tail risk. Our measure is also relatively easy to estimate, as it basically comes from an amended market regression.

The application to U.S. bank holding companies yields several interesting facts about their tail risk exposures. For example, (excess) tail risk seems to be negatively correlated with the CAPM share price beta. This suggests that banks which appear relatively safe in normal times (that is, have a low beta) are actually riskier than their beta would suggest. We also find that the impact of non-traditional activities on tail risk depends on whether they leave assets on the balance sheet or not. In the former case they increase tail risk, while in the latter they do not. Our results also suggest that leverage itself does not increase tail risk, but will do so if it comes through uninsured deposits.

Footnotes

- 1. For evidence on such strategic use see, for example, Wall and Koch (2000) and Hasan and Wall (2004) for the reporting of loan losses and Laeven and Majnoni (2003) and Bushman and Williams (2009) for the provisioning of loan losses.
Huizinga and Laeven (2009) also provide evidence that banks have used accounting discretion to overstate the value of their distressed assets in the current crisis. - 2. - 3. (Subordinated) debt and CDS spreads are an alternative and attractive measure of a bank’s default risk. A shortcoming of these measures is that these spreads are not available for many banks (in the case of CDS spreads) and often not very liquid (in the case of bonds). - 4. The correct term here is indeed \(\frac{dp}{p+\overline{x}}\) and not, as one might think, \(\frac{dp}{p}\). The bank-market relationship consistent with \( \frac{dp}{p}\) would be \(y=\frac{x^{\beta }}{(\overline{x}-x)^{\gamma }}\) for \(x<\overline{x}\) as one can easily verify, which is not a sensible one as for \(x=\overline{x}\) the denominator would then be infinite. - 5. The estimation of γ is akin to estimating the factor-loadings in the asset pricing literature (see, for instance, Ang et al. (2006) and the references therein). While in the asset pricing literature the factor loadings are often used to predict returns in a second step, we are interested here in the cross-sectional distribution of the factor-loadings. More precisely, we propose using the cross-sectional variation to identify banks that are perceived as being prone to a market crash. - 6. An alternative to using put-options on the market are senior tranches of securitization products. These tranches only lose value in extreme circumstances and hence represent tail credit risk. However, the pricing of such tranches in financial markets is rather imperfect at present; hence they are not suitable for estimating tail exposures. - 7. The two coefficients will obviously not provide identical information since put-option prices (conditional on market returns) can also change due to other factors, such as interest rates, dividends and (most importantly in our context) the skewness of the distribution. 
Overall, it is preferable to use put-options (instead of volatility) as regressor as this will also capture variations in tail risk arising from changes in skewness.
- 8. We also considered using put options on a banking index (the BKX index) instead of the market. There are two disadvantages to this. First, the banking sector index by itself will already reflect tail risk in the financial system, thus the interpretation of the γ-estimates is not straightforward. Second, put option prices on the index are fairly illiquid.
- 9. In the more tranquil (low volatility) times of 2006, the average implied strike was around 28% below the S&P 500, while after June 2007 it was on average around 38% below the S&P 500.
- 10. The two-step method, however, yielded very similar results.
- 11.
- 12.
- 13. Note that the interest income from loans is a part of the ROA, so that potential multicollinearity issues could affect the results. However, tests in which we split the ROA into returns from loans and returns from remaining assets revealed that this is not a problem.
- 14.
- 15.
- 16. The FR Y-9C reports do not contain information on deposits in foreign subsidiaries, hence we scale by domestic deposits.
- 17. Demirgüç-Kunt and Huizinga do not distinguish between normal-times risk and tail risk but focus instead on the Z-score.

Acknowledgements

We thank Bob de Young, an anonymous referee and participants at the Bank of International Settlements/JFI conference on systemic risk and financial regulation 2010, the ENTER Jamboree at Toulouse University, the Hasliberg financial intermediation conference 2010, the 2010 Chulalongkorn Accounting and Finance Symposium and the 10th Annual Bank Research Conference, as well as seminar participants at the Bank of England, Tilburg University and the University of Innsbruck for comments. The authors gratefully acknowledge financial support from NCCR Trade Regulation.
References

- Acharya V, Pedersen LH, Philippon T, Richardson M (2009) Restoring financial stability, 1st edn. Wiley, New York
- Adrian T, Brunnermeier MK (2009) CoVaR. Mimeo, Princeton University
- Agarwal V, Naik N (2004) Risks and portfolio decisions involving hedge funds. Rev Financ Stud 17:63–98
- Brent A, LaCour-Little M, Sanders A (2005) Does regulatory capital arbitrage, reputation, or asymmetric information drive securitization? J Financ Serv Res 28:113–133
- Ang A, Hodrick RJ, Xing Y, Zhang X (2006) The cross-section of volatility and expected returns. J Finance 51:259–299
- Bushman R, Williams C (2009) Accounting discretion, loan loss provisioning and discipline of banks’ risk-taking. Mimeo, University of North Carolina at Chapel Hill
- De Jonghe OG (2010) Back to the basics in banking? A micro-analysis of banking system stability. J Financ Intermed 19(3):387–417
- Demirgüç-Kunt A, Huizinga HP (2010) Bank activity and funding strategies: the impact on risk and return. J Financ Econ 98(3):626–650
- Flannery MJ (1998) Using market information in prudential bank supervision: a review of the U.S. empirical evidence. J Money, Credit Bank 30(3):273–305
- Flannery MJ (2001) The faces of “market discipline”. J Financ Serv Res 20(2–3):107–119
- Foos D, Norden L, Weber M (2009) Loan growth and riskiness of banks. Working paper, Bank of International Settlements
- Franke G, Krahnen JP (2007) Default risk sharing between banks and markets: the contribution of collateralized debt obligations. In: Carey M, Stulz R (eds) The risks of financial institutions. National Bureau of Economic Research conference report
- Hovakimian A, Kane E (2000) Effectiveness of capital regulation at U.S. commercial banks, 1985 to 1994. J Finance 55:451–468
- Hasan I, Wall LD (2004) Determinants of the loan loss allowance: some cross-country comparisons. Financ Rev 39:129–152
- Huang X, Zhou H, Zhu H (2010) Systemic risk contributions. Working paper, Bank of International Settlements
- Huizinga H, Laeven L (2009) Accounting discretion of banks during a financial crisis. IMF Working Paper WP/09/207
- Jorion P (2006) Value at risk, 3rd edn. McGraw-Hill, New York
- Laeven L, Majnoni G (2003) Loan loss provisioning and economic slowdowns: too much, too late? J Financ Intermed 12:178–197
- Standard and Poor’s (2005) Chasing their tails: banks look beyond value-at-risk. RatingsDirect. Accessed 26 Nov 2008
- Stiroh K (2006) New evidence on the determinants of bank risk. J Financ Serv Res 30:237–263
- Wall LD, Koch TW (2000) Bank loan loss accounting: a review of theoretical and empirical evidence. Fed Reserve Bank Atlanta Econ Rev 85(2):1–19
- Wu D, Yang J, Hong H (2011) Securitization and banks’ equity risk. J Financ Serv Res 39:95–117
What is a Vector?

Suppose we want to move from point A to point B, where point A is situated at (33.0, 35.0) and point B is at (55.0, 45.0). Vector AB is then the difference between these two points: the x and the y distance between the two points is (x2-x1, y2-y1), or (55.0-33.0, 45.0-35.0).

Why do we need to create a vector class?

A vector module helps a game developer to perform various operations, for example moving an object from point A to point B, as well as finding the vector magnitude of that object. It is therefore always better to create a Vector module before we create our game.

Create a Vector2D class in Python

The Vector class is just like any other module class, with methods that we can use to move an object or modify the properties of that object.

import math

class Vector2D(object):

    def __init__(self, x=0.0, y=0.0):
        self.x = x
        self.y = y

    def __str__(self):
        return "(%s, %s)" % (self.x, self.y)

    def __add__(self, other):
        return Vector2D(self.x + other.x, self.y + other.y)

    def __sub__(self, other):
        return Vector2D(self.x - other.x, self.y - other.y)

    def __mul__(self, scalar):
        return Vector2D(self.x * scalar, self.y * scalar)

    def __div__(self, scalar):
        return Vector2D(self.x / scalar, self.y / scalar)

    __truediv__ = __div__  # Python 3 calls __truediv__ instead of __div__

    def __neg__(self):
        return Vector2D(-self.x, -self.y)

    def get_magnitude(self):
        return math.sqrt(self.x ** 2 + self.y ** 2)

    # find the unit vector
    def normalize(self):
        magnitude = self.get_magnitude()
        self.x /= magnitude
        self.y /= magnitude

    @staticmethod
    def next_vector(points):
        return Vector2D(points[2] - points[0], points[3] - points[1])

A few of the methods above are overload methods, which are called when the Vector class instance performs a certain operation: for example, the __div__, __mul__, __sub__ and __add__ methods are called when we divide, multiply, subtract and add two vectors. The __neg__ method is called if we want to point a Vector in the opposite direction. The __init__ method is called at the moment we initialize the Vector2D instance, and __str__ is called when we print that object with the Python print function. The get_magnitude method returns the magnitude of the Vector, and the normalize method divides the x and the y length of the Vector by its magnitude. Finally, next_vector takes in the combined value of two tuples and returns a new Vector2D object.

Create a separate Python module with the below script.
from vector2d import Vector2D

if __name__ == "__main__":
    A = (10.0, 20.0)
    B = (30.0, 35.0)
    C = (15.0, 45.0)
    AB = Vector2D.next_vector(A + B)
    BC = Vector2D.next_vector(B + C)
    AC = AB + BC
    print(AC)
    AC = Vector2D.next_vector(A + C)
    print(AC)

If you run the above module, you can see that when you add up two vectors with AC = AB + BC, the overloaded __add__ method of vector AB is called, which then returns a new Vector2D object. AC = Vector2D.next_vector(A + C) produces the same outcome as AC = AB + BC when we print the vector out with the print function; in this example the result is (5.0, 25.0). The above Vector2D class will get you started; you can now include more methods in the Vector2D module for future expansion purposes.
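The get_magnitude and normalize methods described above can be sanity-checked against a 3-4-5 triangle. This standalone sketch mirrors their maths directly, without importing the module:

```python
import math

# mirror of get_magnitude / normalize for a vector (3.0, 4.0)
x, y = 3.0, 4.0
magnitude = math.sqrt(x ** 2 + y ** 2)
unit = (x / magnitude, y / magnitude)

print(magnitude)  # 5.0
print(unit)       # (0.6, 0.8)
```

The unit vector always has magnitude 1.0, which is why normalize divides each component by the magnitude.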
Ruby: Tap that method

Recently I came across a really cool method which will make our Ruby code more readable: tap. The feature is implemented in Ruby as shown below:

class Object
  def tap
    yield self
    self
  end
end

So, how is tap used?

user = User.new.tap do |u|
  u.username = "kartik"
  u.save!
end

Isn't that prettier and simpler than the old conventional way of creating an object?

# Old Dusty Traditional Method
user = User.new
user.username = "kartik"
user.save!

So, what does it do?

In simple words, it just allows you to do something with an object inside of a block, and always have that block return the object itself. It was created for tapping into method chains.

Another, admittedly overkill, extension of the above example is shown below, so you can see what I mean: everything in one block, and readable.

user = User.new.tap do |u|
  u.build_profile
  u.process_credit_card
  u.ship_out_item
  u.send_email_confirmation
  u.blahblahblah
end

Tap for debugging method chains:

Most of the time it's pretty difficult to debug chained methods. Let's see it with the example below. If I had to debug the following expression:

(1..10).to_a.select { |x| x % 2 == 0 }.map { |x| x * x }

I would generally add a pry debugger above the expression and execute each link of the chain one by one in order to see what's happening, which is boring and tiresome when you have many one-liners like this.

Tap to the rescue: the same thing can be done using tap, without disturbing the chain of expressions, getting immediate results after each chained call:

(1..10).tap { |x| puts "original: #{x.inspect}" }.to_a.
  tap { |x| puts "array: #{x.inspect}" }.
  select { |x| x % 2 == 0 }.
  tap { |x| puts "evens: #{x.inspect}" }.
  map { |x| x * x }.
  tap { |x| puts "squares: #{x.inspect}" }

Conclusion:

Just like any other Ruby syntactic sugar, I would say tap is a pretty cool Ruby method which can be used not just for readability but also to debug chained methods. Give it a shot!
Hi everyone, I'm having a problem that I just don't understand. I tried Google and StackOverflow for this problem and didn't find a clear answer. Here's the code:

def ground_shipping(weight):
  if weight <= 2:
    print(float(20 + 1.5*weight))
  elif weight > 2 and weight <= 6:
    print(float(20 + 3*weight))
  elif weight > 6 and weight <= 10:
    print(float(20 + 4*weight))
  else:
    print(float(20 + 4.75*weight))

premium_ground_shipping = float(120.0)

def drone_shipping(weight):
  if weight <= 2:
    print(float(4.5*weight))
  elif weight > 2 and weight <= 6:
    print(float(9*weight))
  elif weight > 6 and weight <= 10:
    print(float(12*weight))
  else:
    print(float(14.25*weight))

def cheapest_shipping(weight):
  ground = ground_shipping(weight)
  premium = premium_ground_shipping
  drone = drone_shipping(weight)
  if ground < premium and ground < drone:
    print("Ground shipping is the cheapest method, it costs $" + str(ground))
  elif drone < ground and drone < premium:
    print("Drone shipping is the cheapest method, it costs $" + str(drone))
  else:
    print("Premium ground Shipping is the cheapest, it costs $" + str(premium))

cheapest_shipping(41.5)

The problem is that when I run it, the error is:

Traceback (most recent call last):
  File "script.py", line 36, in <module>
    cheapest_shipping(41.5)
  File "script.py", line 29, in cheapest_shipping
    if ground < premium and ground < drone:
TypeError: '<' not supported between instances of 'NoneType' and 'float'

I don't understand why using a defined function inside another function changes the defined function's result to NoneType. My code should be really straightforward and I thought it would work just fine, but I can't figure out the problem. Thanks for answering and have a great day!
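A minimal standalone reproduction of the same error (not the original script, the function name here is made up): a function whose body only prints returns None, so any `<` comparison against its result fails in exactly this way.

```python
def shipping_cost(weight):
    print(20 + 1.5 * weight)  # prints the cost, but there is no return statement

cost = shipping_cost(2)  # prints 23.0
print(cost)              # None — the function returned nothing

try:
    cost < 120.0
except TypeError as exc:
    print(exc)  # '<' not supported between instances of 'NoneType' and 'float'
```

Replacing each print(...) in the helper functions with return ... makes the comparisons in cheapest_shipping work.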
The PolyModel class lets an application define models that can be superclasses for other data model definitions. A query produced from a PolyModel class can have results that are instances of the class or any of its subclasses. It is defined in google.appengine.ext.ndb.polymodel.

from google.appengine.ext import ndb
from google.appengine.ext.ndb import polymodel

class Contact(polymodel.PolyModel):
    phone_number = ndb.PhoneNumberProperty()
    address = ndb.PostalAddressProperty()

class Person(Contact):
    first_name = ndb.StringProperty()
    last_name = ndb.StringProperty()
    mobile_number = ndb.PhoneNumberProperty()

class Company(Contact):
    name = ndb.StringProperty()
    fax_number = ndb.PhoneNumberProperty()

for contact in Contact.query():
    print 'Phone: %s\nAddress: %s\n\n' % (contact.phone_number, contact.address)

Contact.query() returns Person and Company instances; if Contact derived from Model instead of from PolyModel, each class would have a different kind and Contact.query() would not return instances of proper subclasses of Contact.

If you wish to retrieve only Person instances, use Person.query(). You could also use Contact.query(Contact.class_ == 'Person').

In addition to the regular Model methods, PolyModel has some interesting class methods:

_get_kind(): the name of the root class; e.g. Person._get_kind() == 'Contact'. The root class, Contact in this example, may override this method to use a different name as the kind used in the datastore (for the entire hierarchy rooted here).

_class_name(): the name of the current class; e.g. Person._class_name() == 'Person'. A leaf class, Person in our example, may override this method to use a different name as the class name and in the class key. A non-leaf class may also override this method, but beware: its subclasses should also override it, or else they will all use the same class name, and you will soon be very confused.

_class_key(): a list of class names giving the hierarchy. For example, Person._class_key() == ['Contact', 'Person'].
For deeper hierarchies, this will include all bases between PolyModel and the current class, including the latter, but excluding PolyModel itself. This is the same as the value of the class_ property, whose datastore name is 'class'.

Since the class name is used in the class_ property, and this property is used to distinguish between the subclasses, the class names (as returned by _class_name()) should be unique among those subclasses.
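As a rough standalone sketch of how such a class key can be derived from a class hierarchy — this is plain Python walking the MRO for illustration, not the actual NDB implementation:

```python
class PolyBase(object):
    """Stand-in for PolyModel, for illustration only."""

    @classmethod
    def _class_key(cls):
        # collect every class between PolyBase and cls (inclusive of cls),
        # then order from the root of the hierarchy down to cls
        names = [c.__name__ for c in cls.__mro__
                 if issubclass(c, PolyBase) and c is not PolyBase]
        return list(reversed(names))


class Contact(PolyBase):
    pass


class Person(Contact):
    pass


print(Person._class_key())   # ['Contact', 'Person']
print(Contact._class_key())  # ['Contact']
```

This mirrors the documented behaviour: the key lists every base between the root and the current class, excluding the PolyModel-like base itself.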
t_sndv - send data or expedited data, from one or more non-contiguous buffers, on a connection

#include <xti.h>

int t_sndv(int fd, const struct t_iovec *iov, unsigned int iovcount, int flags);

This function is used to send either normal or expedited data from one or more non-contiguous buffers. The data to be sent is contained in the buffers iov[0], iov[1], through iov[iovcount-1]. iovcount contains the number of non-contiguous data buffers, which is limited to T_IOV_MAX (an implementation-defined value of at least 16). If the limit is exceeded, the function fails with [TBADDATA].

The argument flags specifies any optional flags described below. A TSDU (or ETSDU) may be transmitted over multiple t_sndv() calls: each t_sndv() with the T_MORE flag set indicates that another t_sndv() (or t_snd()) will follow with more data for the current TSDU (or ETSDU). The end of the TSDU (or ETSDU) is identified by a t_sndv() call with the T_MORE flag not set.

Note: The communications provider is free to collect data in a send buffer until it accumulates a sufficient amount for transmission.

By default, t_sndv() operates in synchronous mode and may wait if flow control restrictions prevent the data from being accepted by the local transport provider at the time the call is made. However, if O_NONBLOCK is set (via t_open() or fcntl()), t_sndv() executes in asynchronous mode, and will fail immediately if there are flow control restrictions. The process can arrange to be informed when the flow control restrictions are cleared via either t_look() or the EM interface.

On successful completion, t_sndv() returns the number of bytes accepted by the transport provider. Normally this will equal the total number of bytes to be sent, that is, (iov[0].iov_len + . . . + iov[iovcount-1].iov_len). Otherwise, -1 is returned on failure and t_errno is set to indicate the error, for example [TBADDATA] if iovcount is greater than T_IOV_MAX.

Notes: In synchronous mode, if more than INT_MAX bytes of data are passed in the iov array, only the first INT_MAX bytes will be passed to the provider.

See also: t_rcvv(), t_rcv(), t_snd().

It is important to remember that the transport provider treats all users of a transport endpoint as a single user.
Therefore, if several processes issue concurrent t_sndv() or t_snd() calls, the different groups of data specified may be intermixed. In asynchronous mode, if more than INT_MAX bytes of data are passed in the iov array, t_sndv() fails with [TBADDATA].
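The gather semantics of the iov array are analogous to the POSIX writev() interface: several non-contiguous buffers are handed to the provider in a single call. As an illustration only — using Python's os.writev wrapper on a pipe, since an XTI transport endpoint is not assumed to be available:

```python
import os

# gather two non-contiguous buffers into a single write,
# analogous to t_sndv's iov[0] and iov[1]
r, w = os.pipe()
n = os.writev(w, [b"hello ", b"world"])
os.close(w)

data = os.read(r, 64)
os.close(r)

print(n)     # 11 — total bytes accepted, the sum of both buffer lengths
print(data)  # b'hello world'
```

As with t_sndv(), the return value is the total number of bytes accepted, i.e. the sum of the individual buffer lengths.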
TimeSeriesQL

A Pythonic query language for time series data

Table of Contents

- About the Project
- Getting Started
- Usage
- Plotting Libs
- Roadmap
- Contributing
- License
- Acknowledgements

About The Project

There are many time series databases, and each has its own query language. Each platform takes time to invest in learning the structure and keywords of that language, and often the skills learned don't translate to other platforms. The goal of this project is to create a time series specific library that can be used across many different time series databases, and that is easy to learn because it uses Python syntax.

Built With

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

The requirements are in the requirements.txt file.

Installation

pip

pip install timeseriesql

manual

- Clone the timeseriesql repository

git clone

- Install the library

cd timeseriesql
python setup.py install

Usage

The way this project works is to provide a general framework for querying a time series, with pluggable backends that communicate with specific time series databases. The queries are created using Python generators, a format familiar to Pythonistas.

data = Query(x for x in "metric.name" if x.some_label == "some_value").by("a_label")[start:end:resolution]

The return value is a TimeSeries object that uses a Numpy array as its backend. That object can have ufuncs and other numpy functions applied against it. More examples to come.

There are defaults for start and resolution that are controlled by environment variables. That helps avoid fetching all measurements from the beginning of time by accident.

DEFAULT_START_OFFSET #defaults to 3600 seconds
DEFAULT_RESOLUTION #defaults to 60 seconds

CSV Backend Usage

Often time series data is loaded from a CSV file. The backend expects the first column to be the time index, in either a numerical timestamp or strings in ISO 8601 date or datetime format.
The filters are applied to the headers of the CSV. If labels are not in the CSV and are supplied as part of the query, then filters will not be applied. If any columns are empty or don't contain a numeric value, the value becomes np.nan.

Basic CSV Usage

from timeseriesql.backends.csv_backend import CSVBackend

data = CSVBackend(x for x in "path/to.csv")[:]

Basic CSV Filtering

For CSV files, the labels are the column headers. If there are columns that are not needed, they can be filtered out.

from timeseriesql.backends.csv_backend import CSVBackend

data = CSVBackend(x for x in "path/to.csv" if x.label == "A")[:]
data = CSVBackend(x for x in "path/to.csv" if x.label != "B")[:]
data = CSVBackend(x for x in "path/to.csv" if x.label in ["B", "C", "G"])[:]
data = CSVBackend(x for x in "path/to.csv" if x.label not in ["B", "C", "G"])[:]

Set the Labels

from timeseriesql.backends.csv_backend import CSVBackend

data = CSVBackend(x for x in "path/to.csv").labels(
    [
        {"label": "one"},
        {"label": "two"},
        {"label": "three"},
        {"label": "four"},
        {"label": "five"},
        {"label": "six"},
        {"label": "seven"},
    ]
)[:]

TimeSeries Usage

The TimeSeries object allows for manipulation of the time series data after it has been queried from the backend. In the following examples, the variables starting with ts_ are assumed to be data queried from a backend.

TimeSeries Operations

# Basic mathematical operations (+, -, /, *)
ts_1 + 5  # will return a new series
ts_1 += 5  # will perform the operation in place
ts_1 += ts_2  # add together two TimeSeries

TimeSeries Time Index

The time index is an array of floats, but there is a built-in method to convert the floats into np.datetime64.

ts_1.time  # array of floats
ts_1.time.dt  # array of np.datetime64

TimeSeries Merging

TimeSeries objects can be combined, but the ending time indexes must be the same. This may require empty values to be created where the indexes don't align.
new_t = ts_1.merge([ts_2, ts_3])

TimeSeries Grouping/Reducing

If there are multiple streams, they can be grouped and merged by the labels.

reduced = ts_1.group(["hostname", "name"]).add()
reduced = ts_1.group("env").mean()
reduced = ts_1.group("env").mean(axis=None)  # setting the axis to None will get the mean of the entire object

TimeSeries Special Indexing

import numpy as np

beg = np.datetime64('2019-02-25T03:00')
end = np.datetime64('2019-02-25T04:00')

ts_1[beg:end]  # set a time range
ts_1[beg : np.timedelta64(3, "m")]  # fetch from beginning + 3 minutes
ts_1[np.timedelta64(3, "m") :]  # start from beginning + 3 minutes
ts_1[: np.timedelta64(3, "m")]  # end at the end - 3 minutes
ts_1[{"hostname": "host2"}]  # by labels

TimeSeries Rolling Windows

The rolling_window method assumes that the data is filled and at a fixed resolution. The number of periods is an integer, not a time range.

rolling_cum_sum = ts_1.rolling_window(12).add()  # rolling cumsum
rolling_mean = ts_1.rolling_window(12).mean()  # rolling mean
rolling = ts_1.rolling_window(12).median()  # rolling median

TimeSeries Resample

The resample method allows a smaller period to be aggregated into a larger period.

resampled = ts_1.resample(300).mean()  # resamples to 5 minutes and takes the mean

TimeSeries to Pandas

The conversion returns 2 pandas DataFrames, one for the labels and the other for the data.

data, labels = ts_1.to_pandas()
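The rolling-window behaviour described above (fixed resolution, an integer period count rather than a time range) can be sketched in plain Python. This is an illustration of the idea, not TimeSeriesQL code:

```python
def rolling_mean(data, periods):
    """Mean over each consecutive window of `periods` samples,
    analogous to ts.rolling_window(periods).mean() on fixed-resolution data."""
    return [sum(data[i:i + periods]) / periods
            for i in range(len(data) - periods + 1)]


print(rolling_mean([1.0, 2.0, 3.0, 4.0, 5.0], 3))  # [2.0, 3.0, 4.0]
```

Note the output is shorter than the input by periods - 1, since a full window is needed before the first value can be produced.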
Michael Beale - [email protected] Project Link: Project details Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Fl_Preferences

#include <FL/Fl_Preferences.H>

Fl_Preferences provides methods to store user settings between application starts. It is similar to the Registry on WIN32 and Preferences on MacOS, and provides a simple configuration mechanism for UNIX.

Fl_Preferences uses a hierarchy to store data. It bundles similar data into groups and manages entries in those groups as name/value pairs.

Preferences are stored in text files that can be edited manually. The file format is easy to read and relatively forgiving. Preferences files are the same on all platforms. User comments in preference files are preserved. Filenames are unique for each application through a vendor/application naming scheme. The user must provide default values for all entries to ensure proper operation should preferences be corrupted or not yet exist.

Entries can be of any length. However, the size of each preferences file should be kept under 100k for performance reasons. One application can have multiple preferences files. Extensive binary data, however, should be stored in separate files; see the getUserdataPath() method.

The constructor creates a group that manages name/value pairs and child groups. Groups are ready for reading and writing at any time. The root argument is either Fl_Preferences::USER or Fl_Preferences::SYSTEM. The first format creates the base instance for all following entries and reads existing databases into memory. The vendor argument is a unique text string identifying the development team or vendor of an application. A domain name or an e-mail address makes a great unique name, e.g. "researchATmatthiasm.com" or "fltk.org". The application argument can be the working title or final name of your application. Both vendor and application must be valid relative UNIX pathnames and may contain '/'s to create deeper file structures.

The second format is used to create a preferences file at an arbitrary position in the file system. The file name is generated as path/application.prefs.
If application is 0, path must contain the full file name.

The third format generates a new group of preference entries inside the group or file p. The groupname argument identifies a group of entries. It can contain '/'s to get quick access to individual elements inside the hierarchy.

The destructor removes allocated resources. When used on the base preferences group, the destructor flushes all changes to the preferences file and deletes all internal databases.

Removes a single entry (name/value pair).

Deletes a group.

Returns the number of entry (name/value) pairs in a group.

Returns the name of an entry. There is no guaranteed order of entry names. The index must be within the range given by entries().

Returns non-zero if an entry with this name exists.

Write all preferences to disk. This function works only with the base preference group. It is rarely used, as deleting the base preferences flushes automatically.

Creates a path that is related to the preferences file and that is usable for application data beyond what is covered by Fl_Preferences.

Reads an entry from the group. A default value must be supplied. The return value indicates if the value was available (non-zero) or the default was used (0). If the 'char *&text' or 'void *&data' form is used, the resulting data must be freed with 'free(value)'. 'maxLength' is the maximum length of text that will be read. The text buffer must allow for one additional byte for a trailing zero.

Returns the name of the Nth group. There is no guaranteed order of group names. The index must be within the range given by groups().

Returns non-zero if a group with this name exists. Group names are relative to the Preferences node and can contain a path. "." describes the current node, "./" describes the topmost node. By preceding a groupname with a "./", its path becomes relative to the topmost node.

Returns the number of groups that are contained within a group.

Sets an entry (name/value pair).
The return value indicates whether there was a problem storing the data in memory. However, it does not reflect whether the value was actually stored in the preferences file.

Returns the size of the value part of an entry.

'Name' provides a simple way to create numerical or more complex procedural names for entries and groups on the fly, e.g. prefs.set(Fl_Preferences::Name("File%d", i), file[i]);. See test/preferences.cxx for a sample that writes arrays into preferences. 'Name' is actually implemented as a class inside Fl_Preferences. It casts into const char* and is automatically destroyed after the enclosing call.
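The get/set contract described above — return the stored value when the entry exists, otherwise fall back to the caller-supplied default, and report which of the two happened — can be sketched in a few lines. This is a Python illustration of the pattern only, not FLTK's C++ API; the class and method names here are my own:

```python
class PrefGroup:
    """Toy name/value store mimicking the Fl_Preferences get/set contract."""

    def __init__(self):
        self._entries = {}

    def set(self, name, value):
        # Returns truthy on success, loosely mirroring Fl_Preferences::set().
        self._entries[name] = value
        return 1

    def get(self, name, default):
        # Returns (found, value): found is 1 if the entry existed,
        # 0 if the supplied default had to be used.
        if name in self._entries:
            return 1, self._entries[name]
        return 0, default


prefs = PrefGroup()
found, width = prefs.get("window_width", 640)    # not set yet -> default used
prefs.set("window_width", 800)
found2, width2 = prefs.get("window_width", 640)  # stored value returned
```

The "always supply a default" rule is what makes a corrupted or missing preferences file harmless: the application simply starts with its defaults.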
http://www.fltk.org/doc-1.1/Fl_Preferences.html
Package Details: caffeine 2.9.4

Dependencies:
- python-ewmh
- python-gobject (python-gobject-git)
- python-setuptools
- python-xlib
- libappindicator-gtk3 (libappindicator-gtk3-ubuntu, libappindicator-bzr) (optional) – caffeine-indicator (tray applet) support

Latest Comments

Pulec commented on 2018-08-08 08:20

I needed to install setuptools_scm for a successful build: pacman -S python-setuptools-scm. Seems like this is a missing dependency? Worth an update of the PKGBUILD?

Might be just my issue: things broke for me probably after the big python update a few days ago, and this is probably some problem on my part, but the PKGBUILD didn't mention I was missing python-ewmh, if that helps anyone.

Stephen304 commented on 2017-10-31 20:49

@tkh23 The developers haven't released a tarball in ages, so this is very out of date. And I think it would be wrong of me to make my own tarball. Since I don't understand bzr that much, I've only been able to successfully make the caffeine-bzr package, which works if you want to use that one instead. Anyone is welcome to take over this package if they want to fix it.

tkh23 commented on 2017-10-31 18:39

/usr/lib/python3.6/site-packages/caffeine/__init__.py:22: PyGIWarning: Gtk was imported without specifying a version first. Use gi.require_version('Gtk', '3.0') before import to ensure that the right version gets loaded.
  from gi.repository import Gtk
Traceback (most recent call last):
  File "/usr/bin/caffeine", line 40, in <module>
    import caffeine
  File "/usr/lib/python3.6/site-packages/caffeine/__init__.py", line 55, in <module>
    import __builtin__
ModuleNotFoundError: No module named '__builtin__'
https://aur.archlinux.org/packages/caffeine/
I’m a bit new to audio development, so I need some help. I have a parallel thread in my engine that streams audio data generated by software synths. However, there’s no sound, and the test software grows in size quickly once the thread is initialized. While the first issue is possibly on my own part — some NaN value muting the output — the second one is very concerning.

Here’s the pseudocode of my thread entry point:

def threadEntry()
    while isRunning() == true
        renderAudioToBuffers()
        SDL_QueueAudio(getAudioDeviceID(), finalBuffer.ptr, finalBuffer.length * float.sizeof)

I have a feeling that this pushes way more data to the audio device than it’s able to handle, since there’s no scheduling. The documentation of the function SDL_QueueAudio isn’t very explicit in this regard. Should I suspend the thread every time it queues a chunk of audio data, then resume when new audio data is needed? SDL_AudioCallback doesn’t work at all for me for some reason, and it is harder to make it work with D code. On demand, I can link my engine’s code.
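One common pacing strategy (my suggestion, not something the quoted post or the SDL docs mandate) is to check how much data is still queued and back off until the device has drained below a threshold before queueing more — in real SDL code that would be SDL_GetQueuedAudioSize paired with SDL_QueueAudio. Sketched in Python against a simulated device so the loop structure is visible:

```python
import time


class FakeAudioDevice:
    """Stands in for the SDL audio device; drains queued bytes over time."""

    def __init__(self, bytes_per_second):
        self.bytes_per_second = bytes_per_second
        self.queued = 0
        self._last = time.monotonic()

    def queued_size(self):  # cf. SDL_GetQueuedAudioSize
        now = time.monotonic()
        drained = int((now - self._last) * self.bytes_per_second)
        self.queued = max(0, self.queued - drained)
        self._last = now
        return self.queued

    def queue(self, nbytes):  # cf. SDL_QueueAudio
        self.queued_size()
        self.queued += nbytes


def stream(device, chunk_bytes, max_queued, chunks):
    for _ in range(chunks):
        # Back off while the device still holds enough buffered audio;
        # this bounds memory use instead of queueing unboundedly.
        while device.queued_size() > max_queued:
            time.sleep(0.001)
        device.queue(chunk_bytes)


dev = FakeAudioDevice(bytes_per_second=1_000_000)
stream(dev, chunk_bytes=4096, max_queued=16384, chunks=50)
```

The threshold keeps a small cushion of audio queued (enough to cover scheduling jitter) while preventing the unbounded growth described in the post.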
https://discourse.libsdl.org/t/streaming-audio-through-sdl-queueaudio/33741
How to check if your Python app supports TLS 1.2

As you may have already heard, the Payment Card Industry (PCI) will be requiring everyone to use at least TLS 1.1 (1.2 is recommended) to meet their data security standard starting on June 30, 2018. Other services, such as PyPI, will be requiring TLS 1.2 connections by the same date as well.

You need to make sure that the version of Python you're using and related packages support TLS 1.2, or a bunch of things could break. If your app connects to APIs, they may enforce this as well. Shopify, for example, will be requiring all apps to use TLS 1.2 when connecting to their APIs this month, as they have to comply with the PCI standard.

You can run the commands below to make sure your version of Python and related packages support it. They should output "TLS 1.2" if you're on that version.

Test with the "requests" package:

Python 2:
python -c "import requests; print(requests.get('', verify=False).json()['tls_version'])"

Python 3:
python3 -c "import requests; print(requests.get('', verify=False).json()['tls_version'])"

In my specific case, since I use Django for most of my projects, I ran the following in the Django shell as well to be sure (Python 3):

python manage.py shell

import requests
print(requests.get('', verify=False).json()['tls_version'])

Test with the "urllib" package:

import json
import urllib.request
print(json.loads(urllib.request.urlopen('').read().decode('UTF-8'))['tls_version'])
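Alongside the network checks above, the local ssl module can tell you whether the OpenSSL build your Python links against supports TLS 1.2 at all — a capability check rather than a check of what a given server actually negotiates. A small sketch:

```python
import ssl


def supports_tls_1_2():
    # ssl.HAS_TLSv1_2 (Python 3.7+) reports whether the linked
    # OpenSSL library was built with TLS 1.2 support.
    if hasattr(ssl, "HAS_TLSv1_2"):
        return ssl.HAS_TLSv1_2
    # Older Pythons: fall back to probing for the protocol constant.
    return hasattr(ssl, "PROTOCOL_TLSv1_2")


print(ssl.OPENSSL_VERSION)   # the exact OpenSSL/LibreSSL build string
print(supports_tls_1_2())
```

If this prints False, upgrading Python (or the OpenSSL it is compiled against) is the fix; no amount of application-level configuration can add a protocol the library lacks.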
https://www.calazan.com/how-to-check-if-your-python-app-supports-tls-12/
Parameters for the mesh generator. More...

#include <MeshGenParams.h>

This class serves as a container for mesh generator parameters. Client code can provide a class which derives from MeshGenParams and which provides custom implementations of the getMaxTriangleArea(Triangle* pT) method or the getMaxEdgeLength(Triangle* pT) method in order to gain control over the local density of the generated mesh. When the meshing algorithm decides whether a certain triangle T must be refined, it calls these functions.

In case some ConstraintSegment2 objects can be split while others must not be split, use bAllowConstraintSplitting=true and add the ones that must not be split.

The default implementation of the present class returns the value maxEdgeLength (which is DBL_MAX if not changed by the user). This method can be overridden by the client software in order to control the local mesh density.

The default implementation of the present class returns the value maxTriangleArea (which is the default value DBL_MAX if not changed by the user). This method can be overridden by the client software in order to control the local mesh density.

Defines whether constraint segments can be split. Default: yes

A previous call to refine() or refineAdvanced() may have created Steiner points. These may be partially or entirely removed during a later refinement call, even (!) if this later refinement takes place in a different zone. It depends on your application whether this behavior is desired or not. Usually you want to preserve the points, thus the default value of bKeepExistingSteinerPoints is true.

Limits the quotient edgeLength / height. Default value: 10.0

A command for development, not for public use. Will vanish soon.

Set gridLength > 0 to mesh large enough areas with grid points. Border areas and narrow stripes where a grid does not fit are automatically meshed using classic Delaunay methods. By default gridLength=0 (off). When grid-meshing is used, the grid is aligned to the gridVector.
By default gridVector is axis aligned.

Limits the growth of adjacent triangles. The mesh is constructed such that for any two adjacent triangles t0 and t1 (where t0 is the larger one) area(t0)/area(t1) < growFactor. Recommendation: growFactor > 5.0. Default: growFactor=DBL_MAX

The growFactor value is ignored for triangles with a smaller area than growFactorMinArea. This value prevents generation of hundreds of tiny triangles around one that is unusually small. Default: 0.001

This value is returned by the default implementation of getMaxEdgeLength(Triangle* pT). Larger edges are automatically subdivided. If a custom implementation of getMaxEdgeLength(Triangle* pT) is provided, then this value is ignored. Default value: DBL_MAX.

If pHeightGuideTriangulation is set and the height error locally exceeds maxHeightError, then the triangulation is further refined.

This value is returned by the default implementation of getMaxTriangleArea(Triangle* pT). Larger triangles are automatically subdivided. If a custom implementation of getMaxTriangleArea(Triangle* pT) is provided, then this value is ignored. Default value: DBL_MAX.

Minimum interior angle. Default: 20.0, maximum: 30.0

Edges below the minimum length are not subdivided. This parameter is useful to avoid tiny triangles. Default: 0.001

When new vertices are created, their height (z-coordinate) is usually computed from the existing triangles. In a situation where an extra triangulation with more accurate heights exists, this extra triangulation can be set as a height guide triangulation. In this case the z-coordinates are computed from the triangles of the height guide triangulation.
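The override mechanism described above — the generator asks a per-triangle callback for the local size bound and refines when the triangle exceeds it — is easy to sketch outside C++. The following Python sketch uses hypothetical names (the real API is the C++ MeshGenParams::getMaxEdgeLength(Triangle*) virtual method); the "refine near the origin" rule in GradedParams is an invented example of local density control:

```python
import math


class MeshGenParams:
    def __init__(self, max_edge_length=float("inf")):
        self.max_edge_length = max_edge_length

    def get_max_edge_length(self, triangle):
        # Default behavior: one global bound (DBL_MAX-like if unset).
        return self.max_edge_length


class GradedParams(MeshGenParams):
    def get_max_edge_length(self, triangle):
        # Hypothetical override: demand shorter edges near the origin,
        # coarser ones farther away -> locally graded mesh density.
        cx = sum(x for x, _ in triangle) / 3.0
        cy = sum(y for _, y in triangle) / 3.0
        return 0.1 + 0.5 * math.hypot(cx, cy)


def needs_refinement(triangle, params):
    """What the generator conceptually does per triangle."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    longest = max(dist(triangle[i], triangle[(i + 1) % 3]) for i in range(3))
    return longest > params.get_max_edge_length(triangle)


tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
```

With the default parameters nothing is refined; supplying either a global maxEdgeLength or an overridden callback makes the same triangle a refinement candidate.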
https://www.geom.at/fade25d/html/classGEOM__FADE25D_1_1MeshGenParams.html
How to Set up an Online Multi-Language Magazine with Sulu

We previously demonstrated the proper way to get started with Sulu CMS by setting up a Hello World installation on a Vagrant machine. Simple stuff, but it can be tricky. If you're a stranger to Vagrant and isolated environments, our excellent book about that very thing is available for purchase.

This time we'll look into basic Sulu terminology, explain how content is formed, created, stored, and cached, and look into building a simple online magazine with different locales (languages).

Recommended reading before you continue:

- Setting up Isolated Development Environments for PHP – 5 minute read
- Getting started with Sulu on Vagrant the Right Way – 10 minute read

Pages and Page Templates

A page is exactly what you'd expect it to be: a block of content, often composed of smaller blocks. A page template is a two-part recipe for how a page is assembled. A page template has two parts: the Twig template, and the XML configuration.

The Twig part is responsible for rendering the content of the page's sub-blocks, like so:

{% extends "master.html.twig" %}

{% block meta %}
    {% autoescape false %}
        {{ sulu_seo(extension.seo, content, urls, shadowBaseLocale) }}
    {% endautoescape %}
{% endblock %}

{% block content %}
    <h1 property="title">{{ content.title }}</h1>
    <div property="article">
        {{ content.article|raw }}
    </div>
{% endblock %}

This is the full content of the default Twig template that comes with sulu-minimal, found at app/Resources/Views/Templates/default.html.twig. It extends a master layout, defines some blocks, and renders their content.

The XML configuration, on the other hand, is a bit more convoluted (as most things with XML are):

...
<key>default</key>
<view>templates/default</view>
<controller>SuluWebsiteBundle:Default:index</controller>
<cacheLifetime>2400</cacheLifetime>
<meta>
    <title lang="en">Default</title>
    <title lang="de">Standard</title>
</meta>
<properties>
    <section name="highlight">
        <properties>
            <property name="title" type="text_line" mandatory="true">
                <meta>
                    <title lang="en">Title</title>
                    <title lang="de">Titel</title>
                </meta>
                <params>
                    <param name="headline" value="true"/>
                </params>
                <tag name="sulu.rlp.part"/>
            </property>
...

If you're new to Sulu, none of this will make sense yet – but we'll get there. For now, we're introducing concepts. The main takeaways from the above snippet of XML are:

- the key is the unique slug of the template, and its entry into the admin template selection menu (it must be identical to the filename of the XML file, without the extension).
- the view is where its Twig counterpart can be found. A template will only appear in the menu if it has both the XML and the corresponding Twig file!
- the controller is where its logic is executed. We'll go into more detail on controllers later on, but in general you can leave this at its default value for simple content.
- meta data is how the template will appear in the admin template selection menu, depending on the language selected in the UI.
- properties are various elements of the page – in this case, a field to input a title and a non-editable URL field.

You define new page types (templates) by defining new combinations of properties in this XML file, and then rendering them out in the corresponding Twig file.

As an experiment, try using the menu in the admin interface to switch the Homepage to the Default template, then in the master.html.twig layout file (one folder above), add nonsense into the HTML. Feel free to also populate the article property in the UI.

...
<form action="{{ path('sulu_search.website_search') }}" method="GET">
    <input name="q" type="text" placeholder="Search" />
    <input type="submit" value="Go" />
</form>
Lalala
<section id="content" vocab="" typeof="Content">
    {% block content %}{% endblock %}
</section>
...

If you click Save and Publish in the top left corner now and refresh the homepage, you should see the changes.

For the curious: you may be wondering why they took the XML route instead of having users manage everything in the database. Reason one is being able to version-control these files. Reason two is that even if one were to add a property in a GUI, it would still be missing from the Twig template. At that point, they would either have to make Twig templates editable in the GUI via a DB too, or the user would again be forced to edit files – and if they're already editing them, they may as well edit XML files.

Pages vs Themes

So what's a theme in all this? A theme is a collection of page types. Contrary to popular belief, a theme is not a master layout which is then extended by the page template Twigs – it's a whole collection of page templates and master layouts to use. A theme will also contain all the necessary assets to fully render a site: CSS, JS, images, fonts, and more.

For the curious: we won't be dealing with themes in this tutorial, but feel free to read up on them here.

About Caching

If the homepage content doesn't change when you modify it and refresh, it might have something to do with cache. Here are important things to keep in mind:

- during development, your server setup should set Symfony development environment variables. This allows you to deploy the app directly to production without modifying the environment value manually in files like web/admin.php or web/website.php, and makes your app highly debuggable in development. The values are SYMFONY_ENV and SYMFONY_DEBUG, and they are automatically set if you're using Homestead Improved.
If not, copy them over from here if you're on Nginx.

- the command line of Symfony apps (so, when using bin/adminconsole or bin/websiteconsole in the project) defaults to the dev environment. In order to execute commands for another environment, pass the --env= flag with the environment to match, like so: bin/adminconsole cache:clear --env=prod. This might trip you up if your app is in prod mode and you're trying to clear cache but nothing is happening – it could be that the environments don't match, and the command is clearing the wrong cache.

An Online Magazine

Let's consider wanting to launch an online magazine. By definition, such a magazine has:

- pages that explain things, like "About", "Contact", "Careers", etc.
- many articles, often grouped by month (as evident by the common URL pattern: mysite.com/05/2017/some-title-goes-here)
- different permissions for different staff member levels (author, editor, guest, admin…)
- a media library in which to store static files for inclusion in posts and pages (images, CSS, JS, etc.)

These are all things Sulu supports, with a caveat. When building something like this, we need to keep storage in Sulu in mind.

Jackawhat?

This section might sound confusing. There's no way to make it easier to understand. Be comforted by the fact that you don't need to know anything about it at all, and treat it as "for those who want to know more" material.

Jackalope is a PHP implementation of PHPCR, which is a version of JCR. The rabbit hole is very deep with these terms, so I recommend avoiding trying to learn more about them. If you insist, it's somewhat covered in this gist.

In a nutshell, if you don't need to version your content, you're fine with the default Jackalope-Doctrine-DBAL package that Sulu pulls in automatically. It'll store content in a usual RDBMS (e.g. MySQL) but without versions.
If you do need versioning, you need to install Jackrabbit – an Apache product that's basically a database server with a non-obvious twist – and use a different PHP implementation to store the content: Jackalope-Jackrabbit (also pulled in automatically). Note that an RDBMS is still needed – Jackrabbit merely augments it by providing a different storage mechanism for the actual content, while permissions, settings, etc. are still stored in a regular database.

The catch is that Jackrabbit (and PHPCR in general) has a limit of 10000 children per node. Since articles on online magazines and blogs are usually sequential and don't have a hierarchy (i.e. they're flat), they would end up being children of "root", and after 10k posts you'd be in trouble. This is why the Sulu team have developed a bundle which will auto-shard content by month and emulate a flat structure. By setting each month as a parent, each month can have 10000 posts. If need be, this can be further fragmented by week, or even by day.

Keep this in mind: the ArticleBundle is a prerequisite if you're building a news site, blog site, or magazine, because otherwise, after 10000 units of content, you'd be in trouble.

Note: The ArticleBundle is currently under heavy development, and is likely to have some API changes before a 1.0 release. Use with caution.

Okay, let's install the ArticleBundle to get support for our online magazine website.

ElasticSearch

Unfortunately, the bundle requires ElasticSearch, which could be more straightforward to install. If you're using Ubuntu 16.04 (like our Homestead Improved box):

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer

After it's done (it'll take a while, as anything with Java), the newly installed version should be set as default on Ubuntu.
Elsewhere, you can configure it with:

sudo update-alternatives --config java

Finally, set the JAVA_HOME environment variable permanently by editing /etc/environment and adding the line JAVA_HOME="/usr/lib/jvm/java-8-oracle" to the top or bottom of it. Reload this file with source /etc/environment.

Java is now ready, but we still have to install ES.

wget -qO - | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update && sudo apt-get install elasticsearch
sudo service elasticsearch start

Phew.

Note: Due to RAM requirements, ES takes a while to start up, even if the command executes immediately, so wait a while before trying to curl to test it. If it doesn't work after a minute or so, go into /etc/elasticsearch/elasticsearch.yml, uncomment network.host and set it to 0.0.0.0. Then restart ES, wait a minute, and try curling again.

ArticleBundle

Now we can finally install the bundle.

Note: At the time of writing, ArticleBundle is in an experimental state, and Sulu itself is undergoing an RC process. To work with the latest versions of these two main packages, I recommend the following composer.json setup:

...
"sulu/sulu": "dev-develop as 1.6.0-RC1",
"sulu/article-bundle": "dev-develop"
...

...
"repositories": [
    {
        "type": "vcs",
        "url": ""
    },
    {
        "type": "vcs",
        "url": ""
    }
],
...

...
"minimum-stability": "dev",
"prefer-stable": true
...

Then, install things with composer install, or update with composer update if you're already working on a running installation.
Once both packages are stable, the bundle will be installable via:

composer require sulu/article-bundle

Now that the bundle is downloaded, we need to add it to the AbstractKernel.php file:

new Sulu\Bundle\ArticleBundle\SuluArticleBundle(),
new ONGR\ElasticsearchBundle\ONGRElasticsearchBundle(),

In app/config/config.yml we add the following (the sulu_core section should be merged with the existing one):

sulu_route:
    mappings:
        Sulu\Bundle\ArticleBundle\Document\ArticleDocument:
            generator: schema
            options:
                route_schema: /articles/{object.getTitle()}

sulu_core:
    content:
        structure:
            default_type:
                article: "article_default"
                article_page: "article_default"
            paths:
                article:
                    path: "%kernel.root_dir%/Resources/templates/articles"
                    type: "article"
                article_page:
                    path: "%kernel.root_dir%/Resources/templates/articles"
                    type: "article_page"

ongr_elasticsearch:
    managers:
        default:
            index:
                index_name: su_articles
            mappings:
                - SuluArticleBundle
        live:
            index:
                index_name: su_articles_live
            mappings:
                - SuluArticleBundle

sulu_article:
    documents:
        article:
            view: Sulu\Bundle\ArticleBundle\Document\ArticleViewDocument
    types:
        # Prototype
        name:
            translation_key: ~

    # Display tab 'all' in list view
    display_tab_all: true

Then, in app/config/admin/routing.yml, we add routes:

sulu_article_api:
    resource: "@SuluArticleBundle/Resources/config/routing_api.xml"
    type: rest
    prefix: /admin/api

sulu_article:
    resource: "@SuluArticleBundle/Resources/config/routing.xml"
    prefix: /admin/articles

Add example templates.
For templates/articles/article_default.xml:

<?xml version="1.0" ?>
<template xmlns="" xmlns:
    <key>article_default</key>
    <view>articles/article_default</view>
    <controller>SuluArticleBundle:WebsiteArticle:index</controller>
    <cacheLifetime>144000</cacheLifetime>
    <meta>
        <title lang="en">Default</title>
        <title lang="de">Standard</title>
    </meta>
    <tag name="sulu_article.type" type="article"/>
    <properties>
        <section name="highlight">
            <properties>
                <property name="title" type="text_line" mandatory="true">
                    <meta>
                        <title lang="en">Title</title>
                        <title lang="de">Titel</title>
                    </meta>
                    <params>
                        <param name="headline" value="true"/>
                    </params>
                </property>
                <property name="routePath" type="route">
                    <meta>
                        <title lang="en">Resourcelocator</title>
                        <title lang="de">Adresse</title>
                    </meta>
                </property>
            </properties>
        </section>
        <property name="article" type="text_editor">
            <meta>
                <title lang="en">Article</title>
                <title lang="de">Artikel</title>
            </meta>
        </property>
    </properties>
</template>

For views/articles/article_default.html.twig:

{% extends "master.html.twig" %}

{% block content %}
    <h1 property="title">{{ content.title }}</h1>
    <div property="article">
        {{ content.article|raw }}
    </div>
{% endblock %}

Finish initialization:

php bin/console assets:install # installs JS, CSS, etc. for the UI; defaults to a hard copy – if you want symlinks, use `--symlinks`
php bin/console sulu:translate:export # creates translations
php bin/console sulu:document:init # initializes some PHPCR nodes
php bin/console ongr:es:index:create # initializes the elasticsearch index

Add permissions: in the Admin UI, go to Settings -> User Roles. Select the User role, and scroll down to "Articles". Select all, save, and refresh the UI to see the changes. The "Articles" option should now appear in the left menu.
Note: If you see the loader spinning indefinitely on this screen, it's possible your ElasticSearch server powered down because of a lack of RAM on the VM (the lack of notification about this has been reported as a bug). ES is very resource intensive, and just running it idle will waste approximately 1.5GB of RAM. The solution is to either power up a machine with more RAM or to restart the ES instance with sudo service elasticsearch start. You can always check if it's running with sudo service elasticsearch status.

Try writing an example Hello World post and publishing it. It should appear at the URL /articles/hello-world if you titled it Hello World.

URL Schemes

The default route setup, as seen in config.yml previously, is articles/{object.getTitle()} for articles. And sure enough, when generated, our article has this URL scheme. But what if we want something like /blog/06/2017/my-title? It's easy enough to change. Modify config.yml so that route_schema is changed from:

/articles/{object.getTitle()}

to

/blog/{object.getCreated().format('m')}/{object.getCreated().format('Y')}/{object.getTitle()}

The route fragments between curly braces are being eval'd, and full stops are being interpreted as method accessors, so, for example, object.getCreated().format('Y') turns into object->getCreated()->format('Y'). If we try saving a new post now and leaving the resource locator field blank (so that it autogenerates), we get the new route format nicely generated.

Locales

Let's set up a different language of our site now – we want to cover a wide market, so we'll build our UI and our content in two languages, when applicable. To add a new language, we edit the webspace file in app/Resources/webspaces. Under localizations, all we need to do is add a new locale:

<localizations>
    <localization language="en" default="true"/>
    <localization language="hr"/>
</localizations>

HR is for Hrvatski, which is the local name for the Croatian language.
Refreshing the Admin UI now shows a new option in the language selector.

Before we can use this, we must allow the current user to manage this locale. Every locale is by default disallowed for every existing user. To let the current user edit this locale, we go to Contacts -> People -> [YOUR USER HERE] -> Permissions, and under locale we select the new locale, then save and reload the Admin UI.

A Word of Warning

After this is done, we MUST run the following console command:

php bin/adminconsole sulu:document:initialize

This will initialize the PHPCR documents for the new locale. This CANNOT be done after you have already created content for the new locales, or things will break. Remember to run the sulu:document:initialize command after every locale-related webspace change!

If you ever forget and end up creating content after a locale was added but before the initialize command was run, the following commands will delete all content related to that locale (so use with care!) and allow you to restart the locale (replace hr in the commands with your own locale identifier(s)).

php bin/adminconsole doctrine:phpcr:nodes:update --query "SELECT * FROM [nt:base]" --apply-closure="foreach(\$node->getProperties('i18n:hr-*') as \$hrNode) {\$hrNode->remove();};"
php bin/websiteconsole doctrine:phpcr:nodes:update --query "SELECT * FROM [nt:base]" --apply-closure="foreach(\$node->getProperties('i18n:hr-*') as \$hrNode) {\$hrNode->remove();};"
php bin/adminconsole sulu:document:initialize

A discussion on how to best warn people about this in the UI is underway.

Back to articles now. When on an existing article, switching the locale will summon a popup which lets you either create a new blank article in this language, or create a new article with content that exists in another language (in our case en). Let's pick the latter. The resulting article will be a draft copy of the one we had before.

You'll notice that the URL is the same for this post as it is for its English counterpart.
This means we can't visit it in the browser unless we switch the locale, but we don't have any kind of locale switcher.

Webspaces and Locales

If you look at the webspace file again, you'll notice there are portals at the bottom. Portals are like sub-sites of webspaces. In most cases, portals and webspaces will have a 1:1 relationship, but we use portals to define URL routes for different environments. The default sets a specific language via an attribute, and defines the root of all our URLs as merely {host}. Ergo, we get homestead.app in the screenshots above, appended by things like /articles/something or /hello-world.

To get our different languages to render, we need to change this section. Replace every <url language="en">{host}</url> with <url>{host}/{localization}</url>. This will produce URLs in the form of homestead.app/en/hello-world. Sure enough, we can now manually switch locale by prefixing all URLs with the locale identifier. It'd be easier if we could just flip a literal switch on the page, though, wouldn't it?

Theming for Locales

Let's modify our layout so that we have an actual language selector on screen that we can use to change the site's language. In app/Resources/views/master.html.twig, we'll make the following modification. Right above the form, we'll put this:

<div>
    {% for locale, url in urls %}
        {% set extra = "/"~request.locale %}
        <a href="{{ sulu_content_path(url[extra|length:]?:'/', request.webspaceKey, locale) }}">{{ locale }}</a>
        {% if not loop.last %} | {% endif %}
    {% endfor %}
</div>

If we look at the docs, we'll notice that there's a urls variable. The urls variable is an associative array from the webspace configuration, containing the URLs of all locales with locales as keys and URLs as values, so we iterate through them. Since we only have one per environment defined, Sulu auto-generates as many as there are locales, blending them with the URL schema defined in the url key. The URLs contain the language prefix (currently a bug), so we need to strip that. We define this part as extra in Twig ("/"~request.locale means "merge / and request.locale").
Then, we use the sulu_content_path helper function from vendor/sulu/sulu/src/Sulu/Bundle/WebsiteBundle/Twig/Content/ContentPathTwigExtension.php to feed it the current URL, stripped of the extra content we defined one line above. Additional arguments include the webspace key, so it knows where to look for route values to generate, and the locale to prepend to the content path when returning a new one. Don't be confused by the ?:'/' part – that's just a ternary IF which sends along the root route as the parameter (/) if there is no url. Finally, we check if this is the last element in the for loop, and if it is, we don't echo the pipe character, thereby creating a separator between all elements up until the last one.

We now have a language switcher in our master layout, and can switch to and from English and Croatian easily. Using this same approach, the language selector can be turned into a dropdown or any other desired format.

What We Do in the Shadows

Finally, we need to do something about the missing translations for pages. If we visit the Homepage while on the hr locale, Sulu will throw a 404 error because that page doesn't exist in Croatian. Because some pages might not have translations, we want to load an alternative from a secondary language when they're accessed – the audience may be bilingual, and it'd be a shame to discriminate against them just because we don't have matching content yet. We do this with shadow pages in Sulu.

To turn our Croatian version of the Homepage into a shadow page, we do the following:

- go to the homepage edit screen in the Admin UI while the locale is set to English
- switch the locale and pick "Empty page" in the popup
- go to "Settings" of this new page, pick "Enable Shadow", and select "en" as the locale from which to grab content.

If we now visit the homepage while in the hr locale, it should produce the en homepage, whereas our blog posts will be switchable between the hr and en versions. Try it out!
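The URL rewriting the language switcher performs — strip the current locale prefix from the request path, then prepend the target locale, falling back to the root route when nothing is left — is easy to express outside Twig as well. A Python sketch of just that logic (illustrative only; in Sulu this is handled by the sulu_content_path Twig helper):

```python
def localized_url(path, current_locale, target_locale):
    """Rebuild a locale-prefixed URL, e.g. /en/hello-world -> /hr/hello-world."""
    prefix = "/" + current_locale
    if path.startswith(prefix):
        path = path[len(prefix):]
    # Fall back to the root route when nothing is left, mirroring ?:'/'.
    return "/" + target_locale + (path or "/")


print(localized_url("/en/hello-world", "en", "hr"))  # /hr/hello-world
print(localized_url("/en", "en", "hr"))              # /hr/
```

Each switcher link is exactly this computation applied to the current request path, once per configured locale.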
Conclusion

In this tutorial, we explained some basic Sulu terminology, installed a custom bundle into our Sulu installation, and played around with content. Finally, we set up a basic multi-language news site with a language selector.

One thing to note so far is that Sulu is a very expensive CMS to run – just the ElasticSearch requirement of the ArticleBundle upped the RAM needs of our server to 2–3GB, which is by no means a small machine any more, and the vendor folder itself is in the hundreds of megabytes already. But Sulu is about to demonstrate its worth in future posts – I urge patience until then.

At this point, Sulu should start feeling more and more familiar, and you should be ready for more advanced content. That's exactly what we'll focus on in the next part.
https://www.sitepoint.com/set-online-multi-language-magazine-sulu/
> I haven't checked, but ...

I checked, and your solution works. In the context of a larger program, getting NOINLINE pragmas in all the right places would be challenging, wouldn't it?

I found a bug report on the GHC Trac [1] in which Simon explains the importance of evaluating the thunk before calling addFinalizer. (Otherwise the finalizer is added to the thunk.) This works:

newThing :: IO Thing
newThing = do
    x <- Thing `fmap` newIORef True
    return $ unsafePerformIO (do
          x' <- evaluate x
          addFinalizer x' $ putStrLn "running finalizer"
        ) `seq` x

If anyone can show me how to get rid of unsafePerformIO in there, that'd be great. Tried a few things to no avail.

> Finalizers are tricky things, especially when combined with some of
> GHC's optimisations.

No kidding!

[1]

Mike Craig

On Thu, Feb 16, 2012 at 4:15 PM, Ian Lynagh <igloo at earth.li> wrote:

> On Thu, Feb 16, 2012 at 02:55:13PM -0600, Austin Seipp wrote:
> > 64-bit GHC on OS X gives me this:
> >
> > $ ghc -fforce-recomp -threaded finalizer
> > [1 of 1] Compiling Main ( finalizer.hs, finalizer.o )
> > Linking finalizer ...
> > $ ./finalizer
> > waiting ...
> > done!
> > waiting ...
> > running finalizer
> > done!
> >
> > However, it's a different story when `-O2` is specified:
> >
> > $ ghc -O2 -fforce-recomp -threaded finalizer
> > [1 of 1] Compiling Main ( finalizer.hs, finalizer.o )
> > Linking finalizer ...
> > $ ./finalizer
> > waiting ...
> > running finalizer
> > done!
> > waiting ...
> > done!
> >
> > This smells like a bug. The stranger thing is that the GC will run the
> > finalizer, but it doesn't reclaim the object? I'd think `readIORef`
> > going after an invalidated pointer the GC reclaimed would almost
> > certainly crash.
>
> The finalizer is attached to the Thing, not the IORef. I haven't
> checked, but I assume that ioref gets inlined, so effectively (ioref x)
> is evaluated early.
> If you change it to
>
>     readIORef (ioref' x) >>= \ix -> ix `seq` return ()
>
> and define
>
>     {-# NOINLINE ioref' #-}
>     ioref' :: Thing -> IORef Bool
>     ioref' = ioref
>
> then you'll get the sort of output you expect.
>
> Finalizers are tricky things, especially when combined with some of
> GHC's optimisations.
>
>
> Thanks
> Ian
>
>
> _______________________________________________
> Glasgow-haskell-users mailing list
> Glasgow-haskell-users at haskell.org
>

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <>
http://www.haskell.org/pipermail/glasgow-haskell-users/2012-February/021881.html
This section explains how to create a basic printing program that displays a print dialog and prints the text "Hello World" to the selected printer.

A printing task usually consists of two parts:

First, create the printer job. The class representing a printer job, like most other related classes, is located in the java.awt.print package.

import java.awt.print.*;

PrinterJob job = PrinterJob.getPrinterJob();

Next, provide code that renders the content to the page by implementing the Printable interface.

A print dialog can be displayed with PrinterJob.printDialog:

boolean doPrint = job.printDialog();

The doPrint variable will be true if the user gave a command to go ahead and print. If the doPrint variable is false, the user cancelled the print job. Since displaying the dialog at all is optional, the returned value is purely informational. If the doPrint variable is true, then the application will request that the job be printed by calling the PrinterJob.print method.

if (doPrint) {
    try {
        job.print();
    } catch (PrinterException e) {
        // The job did not successfully
        // complete
    }
}

A PrinterException will be thrown if there is a problem sending the job to the printer. However, since the PrinterJob.print method returns as soon as the job is sent to the printer, the user application cannot detect paper jams or paper-out problems. This job control boilerplate is sufficient for basic printing uses.

The Printable interface has only one method:

public int print(Graphics graphics, PageFormat pf, int page)
        throws PrinterException;

The page parameter is the zero-based page number that will be rendered; the PageFormat parameter describes the size and orientation of the page.
The following code represents the full Printable implementation:

import java.awt.print.*;
import java.awt.*;

public class HelloWorldPrinter implements Printable {

    public int print(Graphics g, PageFormat pf, int page)
            throws PrinterException {

        // We have only one page, and 'page'
        // is zero-based
        if (page > 0) {
            return NO_SUCH_PAGE;
        }

        g.drawString("Hello world!", 100, 100);

        // tell the caller that this page is part
        // of the printed document
        return PAGE_EXISTS;
    }
}

The complete code for this example is in HelloWorldPrinter.java.

Sending a Graphics instance to the printer is essentially the same as rendering it to the screen. In both cases you can draw with the Java 2D API by casting the Graphics parameter to a Graphics2D.

Note the following about how the print() method is called:

- The Printable.print() method is called by the printing system, just as the Component.paint() method is called to paint a Component on the display. The printing system will call the Printable.print() method for page 0, 1, .. etc. until the print() method returns NO_SUCH_PAGE.
- The print() method may be called with the same page index multiple times until the document is completed. This feature is applied when the user specifies attributes such as multiple copies with the collate option.
- The print() method may be skipped for certain page indices if the user has specified a page range that does not include a particular page index.
http://docs.oracle.com/javase/tutorial/2d/printing/printable.html
The code for Julia packages (also called libraries) is contained in a module whose name starts with an uppercase letter by convention, like this:

# see the code in Chapter 6\modules.jl
module Package1
export Type1, perc

include("file1.jl")
include("file2.jl")
# code
mutable struct Type1
    total
end

perc(a::Type1) = a.total * 0.01
end

This serves to separate all its definitions from those in other modules so that no name conflicts occur. Name conflicts are solved by qualifying the function by the module name. For example, the packages Winston and Gadfly both contain a function plot. If we needed these two versions in the same script, we would write it as follows:

import Winston
import Gadfly

Winston.plot(rand(4))
Gadfly.plot(x=[1:10], ...
https://www.oreilly.com/library/view/julia-10-programming/9781788999090/aeeda946-0294-4b48-8954-010f0609fa58.xhtml
Some of them, like moving UV shells, already exist in other scripts, but I wanted to make one toolbox to contain everything I find useful. It has been an ongoing project since the summer of 2010.

After placing the contents of the zip in your scripts folder, start the script using:

from UVDeluxe import uvdeluxe
uvdeluxe.createUI()

New in 1.1.2:

Automatic texture resolution detection.
Added option to sample pixel density (copy and paste).
Temporary selection storage: store a selection of UVs, Faces, Vertices or Vertex Faces (clears on closing the editor)
Select Shell Border Edges
---------------------------------------
If you find that UV Deluxe is saving you lots of time and tedious work, then please consider making a donation! :)

Feature Breakdown and how to use
----------------------------------------

Mover
Move selected UV shells in whole 1.0 steps or by a custom value. Useful for when baking maps and you want to get those shells out of the way temporarily without having to struggle with getting them back into the exact place they were in before.

Scaling and Rotation
The main purpose of Scaler is to let you easily halve or double the width/height of your shells when texture sizes are non-square. Scales UV coordinates by 25%, 50%, 75% or 200% of the original size in either the U or V direction with simple buttons.

The second tool is "Smart Rotate". I made this because I got tired of not being able to rotate UV shells when they have been scaled to fit a non-square texture. Now, as long as you set the resolution in the Settings panel, these rotate buttons will keep the aspect ratio of your shells no matter what degree you rotate by.

Ratio
This might be the main feature if you are working with video game textures, where pixel density is so important. This tool will scale your selected shells as close as possible to the desired pixel density. Accuracy depends on how evenly unfolded your shells are, as it calculates by the size average.
A model with no UV distortion will be scaled correctly, so you can use a new polyCube or polyPlane for comparisons.

!! Make sure you tell UVDeluxe what texture size you are working with. If your texture is 2048*1024 for example, set the texture slider to that in the "Settings" panel; then, if your shells are scaled to the appropriate aspect ratio for your texture, the Set Ratio button will work. !!

Also make sure to set Maya to the correct working unit first. UVD will always show you what unit type you are working with at the very top.

Match UVs
Select the UVs you want to move, and place them roughly as close as you can to the UVs you wish to snap them to. Will snap to UVs on other objects if they are selected!

Straighten Edges
Straighten edges in a selection of UVs. The tool works by building a list of horizontal and vertical edges, based on an angle tolerance set by you. It treats connected edges as one continuous edge, then flattens each such run of edges in the orientation you choose. If you find that the script misses an edge, or flattens too much, try changing the angle value. Works best for objects like pipes, and on smaller selections of UVs for more predictable results. Experiment!

AlignTools - There are two tools in this category: Align Shells and shell alignment.

- Align Shells does what you expect: rotate shells to the nearest 90 degree angle based on a selection. It can handle any number of shells at the same time, but you should only have two UVs selected per shell. It won't break if you select more than two, but it might have problems knowing how you want to align them.

- Shell alignment. Just like aligning text in Word, this tool takes all selected UV shells and aligns them left/right, up/down, or to the middle vertically or horizontally. Often a lot easier for grouping things than using Match UVs.

Quick UV snapshot
I got tired of having to browse every time I wanted to save a UV snapshot.
So I added this button to save a snapshot to the project folder with one press. It also has an option (on by default) to automatically open the file with your default program (probably Photoshop).

-------------------------------------------------------

Please use the Feature Requests to give me ideas.
Please use the Support Forum if you have any questions or problems.
Please rate and review in the Review section.
https://www.highend3d.com/maya/script/uvdeluxe-for-maya
On Mon, 2005-04-04 at 11:44 -0400, Deron Meranda wrote: > I was, though, expecting ls -Z to show the applied label. So the filesystem > context is being applied, but you can't see it via ls -Z? I guess that makes > sense now that I think about it, but it was a little surprising. I > kind of expected > the context= option to work somewhat like the uid= and gid= options as far > as it's visibility to ls. Unfortunately, no. ls -Z ultimately calls getxattr on the inode, and unless the filesystem implementation provides a getxattr method, you can't get that information. There has been discussion of putting a transparent redirect in the VFS so that if the filesystem implementation doesn't provide getxattr/setxattr on the security namespace, the VFS will automatically redirect the request to the security module (i.e. SELinux) and let it handle it based on the incore inode security context. > Also I think context= is what I want, versus fscontext=, since this is > an ISO9660 > filesystem that doesn't support extended attributes (xattr). Otherwise Apache > could see the filesystem, but not the individual files inside it. > Isn't that correct? I think for iso9660 they are effectively equivalent. It would make a difference for filesystems that have native xattr support. -- Stephen Smalley <sds tycho nsa gov> National Security Agency
https://listman.redhat.com/archives/fedora-selinux-list/2005-April/msg00015.html
Django - Referencing the User Model

Django has a powerful, built-in user authentication system that makes it quick and easy to add login, logout, and signup functionality to a website. But how should a Django developer reference the User model?

The official Django docs list three separate ways:

User
AUTH_USER_MODEL
get_user_model()

But I find the explanation given to be incomplete. So in this post we will review why you might need to reference the User model and the pros/cons of each approach.

Option 1: User

Let's assume we have a basic models.py file for a Blog. Here's what it might look like:

# models.py
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=50)
    body = models.TextField()

Now what if we wanted to add an author field so that we could track which user created a blog post and later add permissions on top of that? The default way is to access User directly, which is the built-in Django model that provides us with username, first_name, and last_name fields.

from django.db import models
from django.contrib.auth.models import User

class Post(models.Model):
    author = models.ForeignKey(User, on_delete=models.CASCADE)
    title = models.CharField(max_length=50)
    body = models.TextField()

Pretty straightforward, no? The problem is that we should always use a custom user model for new projects. The official docs even say so. But if we have a custom user model, we cannot refer to it as User, so how do we reference it? It turns out there are two different ways.

Option 2: AUTH_USER_MODEL

The first–and until recently best–way to reference the custom user model was via AUTH_USER_MODEL. To add a custom user to a new Django project you need to create a new user model and then set it as such in your settings.py file. I wrote a separate post on how to do this. But the takeaway is that if we made a users app with a CustomUser model within it, then to override the default User model in our settings.py file we would do the following.
# settings.py
AUTH_USER_MODEL = 'users.CustomUser'

In our Blog models.py file the code would look like this:

from django.conf import settings
from django.db import models

class Post(models.Model):
    author = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE,
    )
    title = models.CharField(max_length=50)
    body = models.TextField()

Using settings we pass AUTH_USER_MODEL as a string into our model.

Option 3: get_user_model

A third way to access the user model is via get_user_model. Up until the release of Django 1.11, get_user_model could not be called at import time–meaning it would not always work correctly–however that has since been changed. It is therefore safe to use. The code would look as follows:

# settings.py
AUTH_USER_MODEL = 'users.CustomUser'

Then in our Blog models.py file the code would look like this:

from django.contrib.auth import get_user_model
from django.db import models

class Post(models.Model):
    author = models.ForeignKey(
        get_user_model(),
        on_delete=models.CASCADE,
    )
    title = models.CharField(max_length=50)
    body = models.TextField()

get_user_model() will return the currently active user model: either a custom user model if specified, or else User. It is therefore a cleaner approach, in my opinion, because AUTH_USER_MODEL only works if a custom user model is set.

Conclusion

The takeaway is that working with User models is tricky in any web framework. Django has a robust user authentication system, but given that many projects use a custom user model instead, the best approach when you want to refer to the current user model–of the three available–is to just always use get_user_model.
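As a plain-Python sketch of why this matters (this is not Django code – the registry, class names, and helper functions below are invented purely to illustrate the principle), the difference between a direct import and get_user_model comes down to when the reference is resolved:

```python
# Hypothetical stand-in for Django's model registry; none of these names
# are part of Django's API.
_registry = {"user_model": None}

class DefaultUser:
    """Plays the role of django.contrib.auth.models.User."""

class CustomUser:
    """Plays the role of a project's custom user model."""

def set_user_model(cls):
    """Like setting AUTH_USER_MODEL: swap in the active model."""
    _registry["user_model"] = cls

def get_user_model():
    """Like django.contrib.auth.get_user_model: resolved on each call."""
    return _registry["user_model"] or DefaultUser

# A direct reference binds the name once, up front:
User = get_user_model()           # resolves to DefaultUser now

set_user_model(CustomUser)        # the project later swaps in a custom model

print(User.__name__)              # DefaultUser: the early binding is stale
print(get_user_model().__name__)  # CustomUser: resolved at call time
```

The sketch shows why resolving the model lazily (by name string or by function call) survives a later swap, while a direct import does not.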
https://wsvincent.com/django-referencing-the-user-model/
Provides information about available fonts. More... #include <qfontdatabase.h> List of all member functions. QFontDatabase provides information about the available fonts of the underlying window system. Most often you will simply want to query the database for all font families(), and their respective pointSizes(), styles() and charSets(). Creates a font database object. Returns whether the font which matches the settings family, style and charSet is bold. See also italic() and weight(). [static] Returns some sample characters which are in the charset charSetName. Returns a list of all char sets in which the font family is available in the current locale if onlyForLocale is TRUE; otherwise all charsets of family, independent of the locale, are returned. Returns a list of names of all available font families in the current locale if onlyForLocale is TRUE; otherwise all available font families, independent of the current locale, are returned. If a family exists in several foundries, the returned name will be "foundry-family". Returns a QFont object which matches the settings of family, style, pointSize and charSet. If no matching font could be created, an empty QFont object is returned. Returns whether the font which matches family, style and charSet is a scalable bitmap font. Scaling a bitmap font produces a bad, often hardly readable result, as the pixels of the font are scaled. It's better to scale such a font only to the available fixed sizes (which you can get with smoothSizes()). See also isScalable() and isSmoothlyScalable(). Returns whether the font which matches family, style and charSet is fixed pitch. Returns TRUE if the font which matches the settings family, style and charSet is scalable. See also isBitmapScalable() and isSmoothlyScalable(). Returns whether the font which matches family, style and charSet is smoothly scalable. If this function returns TRUE, it's safe to scale this font to any size, as the result will always look good.
See also isScalable() and isBitmapScalable(). Returns whether the font which matches the settings family, style and charSet is italic. See also weight() and bold(). Returns a list of all available sizes of the font family in the style style and the char set charSet. See also smoothSizes() and standardSizes(). Returns the point sizes of a font which matches family, style and charSet that are guaranteed to look good. For non-scalable fonts and smoothly scalable fonts this function is equivalent to pointSizes(). See also pointSizes() and standardSizes(). [static] Returns a list of standard font sizes. See also smoothSizes() and pointSizes(). Returns a string which describes the style of the font f, something like "Bold Italic". Returns all available styles of the font family in the char set charSet. [static] Returns a string which gives a quite detailed description of the charSetName which can be used e.g. for displaying in a dialog for the user. Returns the weight of the font which matches the settings family, style and charSet. See also italic() and bold(). This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
https://doc.qt.io/archives/2.3/qfontdatabase.html
I need help understanding some of the outputs of the code below. (This is just a sample question for a midterm, not homework.)

#include <stdio.h>

void figure_me_out(int* a, int b, int c, int* d);

int main(void)
{
    int var1 = 1, var2 = 10, var3 = 15, var4 = 20;
    figure_me_out(&var1, var2, var3, &var4);
    printf("%d, %d, %d, %d\n", var1, var2, var3, var4);
    return 0;
}

void figure_me_out(int* a, int b, int c, int* d)
{
    c = b;
    b = *d;
    *a = 222;
    *d = 100;
    a = d;
    *a = c;
}

Output: 222, 10, 15, 10

It's simple, I think. Let's go step by step:

void figure_me_out(int* a, int b, int c, int* d)
{
    c = b;    // c = 10
    b = *d;   // b = 20
    *a = 222; // Value at address a is changed to 222
    *d = 100; // Value at address d is changed to 100
    a = d;    // Change the local pointer variable a to hold address d.
    *a = c;   // Changing the value at address a, which is now the same as address d, to 10
}

In step 3 you changed the original value at address a, which you passed from the main function.

In step 5 you assigned the address held by d to the function's local pointer variable a. After doing a = d, the local pointer a holds the address of d's target. Now anything you do through this pointer takes effect at the location d points to.

In step 6 you changed the value at that address to 10. So the final answer is 222, 10, 15, 10.
https://codedump.io/share/YjzLfbuNR5iW/1/determining-the-output-with-pointers
sem_open - initialize and open a named semaphore (REALTIME)

[SEM]

#include <semaphore.h>

sem_t *sem_open(const char *name, int oflag, ...);

The sem_open() function shall establish a connection between a named semaphore and a process. Following a call to sem_open() with semaphore name name, the process may reference the semaphore associated with name using the address returned from the call. This semaphore may be used in subsequent calls to sem_wait(), [TMO] sem_timedwait(), sem_trywait(), sem_post(), and sem_close(). The semaphore remains usable by this process until the semaphore is closed by a successful call to sem_close(), _exit(), or one of the exec functions.

If name begins with the slash character, processes calling sem_open() with the same value of name shall refer to the same semaphore object, as long as that name has not been removed. If name does not begin with the slash character, the effect is implementation-defined. The interpretation of slash characters other than the leading slash character in name is implementation-defined.

If a process makes multiple successful calls to sem_open() with the same value for name, the same semaphore address shall be returned for each such successful call, provided that there have been no calls to sem_unlink() for this semaphore, and at least one previous successful sem_open() call for this semaphore has not been matched with a sem_close() call. References to copies of the semaphore produce undefined results.

Upon successful completion, the sem_open() function shall return the address of the semaphore. Otherwise, it shall return a value of SEM_FAILED and set errno to indicate the error. The symbol SEM_FAILED is defined in the <semaphore.h> header. No successful return from sem_open() shall return the value SEM_FAILED.

If any of the following conditions occur, the sem_open() function shall return SEM_FAILED and set errno to the corresponding value.
http://pubs.opengroup.org/onlinepubs/009604499/functions/sem_open.html
Hi, Trent Piepho wrote: > On Mon, 4 Jun 2007, Benoit Fouet wrote: > >> mmh wrote: >> >>> > >>> > this is wrong. >>> > from my videodev2.h: >>> > #define VIDIOC_S_RDS _IOWR ('V', BASE_VIDIOC_PRIVATE+11, >>> > struct v4l2_radio_rds_set) >>> > >>> > i can (if i want) change the BASE_VIDIOC_PRIVATE+11 by >>> > BASE_VIDIOC_PRIVATE+20 >>> > this ioctl will be defined, but will not do what you want... >>> > >>> > moreover, it can just return without any error, and lead you to believe >>> > that framerate is accepted by the camera, even though this is not true... >>> > > Not exactly. The size of the argument to the ioctl (struct > v4l2_radio_rds_set) is part of the ioctl number. You could assign two > different ioctls to BASE_VIDIOC_PRIVATE+20 and they won't be the same if > the size of the arugment and the direction (R, W, WR) doesn't match. > > yes, my explanation was quite incomplete >>> #ifndef VIDIOSFPS >>> #define VIDIOSFPS _IOW('v',BASE_VIDIOCPRIVATE+20, int) /* Set fps */ >>> #endif >>> >>> >> yes, i really think this should be removed >> and, FWIW, syntax in v4l2 is, most of the time, as follows: >> VIDIOC_S_* >> i don't know if it's planned to add such a feature in future videodev >> version, though... >> > > This isn't a v4l2 ioctl, but a v4l1 ioctl. V4L2 uses the ioctl range 'V', > while v4l1 uses 'v'. Notice the case difference. Also BASE_VIDIOCPRIVATE > is the base for v4l1 private ioctls and BASE_VIDIOC_PRIVATE is the base for > v4l2. It should still be VIDIOCSFPS to have correct v4l1 naming > conventions. > > ok, my bad, i work mainly with v4l2... > The whole ioctl name clash thing isn't solved by just removing the #define > from the ffmpeg code. What happens if someone compiles on a kernel where > 'v',private+20 is SFPS and then runs on a kernel where it's something else? > > the ifdef should do it, no ? 
and compiling on another machine (or at least configuration) than the running one is bad :) > This is why changing ioctls once they make it into the kernel isn't > allowed. anyway, thanks for developping ;) Ben -- Purple Labs S.A.
http://ffmpeg.org/pipermail/ffmpeg-devel/2007-June/030212.html
User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:8.0.1) Gecko/20100101 Firefox/8.0.1
Build ID: 20111120135848

Steps to reproduce:

In my javascript I am updating image.src in setInterval, or on mouse move. For example:

function updateImage() {
    document.images["L2"].src = 'Default.ashx?Ind=' + ind.toString() + '&no-cache' + '=' + Math.random().toString(10);
}

Actual results:

Images are blinking/flashing when updated. It's especially noticeable when images are in grey scale on a black background. In other browsers and Firefox 7, images are updated smoothly, so you have the impression that the black part of the images stays the same and only the central area is changing. Starting from Firefox version 8.0 it seems the image area first blinks (turns white) and only then is the image updated.

Expected results:

Images should be updated smoothly as in version 7.

This isn't security-sensitive.

Can you please either point to a complete testcase showing the problem or use to find when the problem first appears for you? Or both, of course.... ;)

I am using asp.Net for the test but I think it shouldn't matter; it could be done in java.
My Default.aspx

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="SimpleAspWebApplication._Default" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "">
<html xmlns="" >
<head runat="server">
    <title></title>
</head>
<body id="Page" onload="window.setInterval('vistar();', 200);">
    <div id="Conteneur">
        <div id="Aff_Images_grd">
            <div id="Grp_image">
                <img id='L1' alt="" src="" />
                <img id='L2' style="width:800px;height:800px" alt="" src="" />
            </div>
        </div>
    </div>
</body>
<script type="text/javascript">
    var ind = 0;
    function vistar() {
        if (ind == 0) ind = 1;
        else ind = 0;
        document.images["L2"].src = 'Default.ashx?Ind=' + ind.toString() + '&no-cache' + '=' + Math.random().toString(10);
        text.value = ind.toString();
    }
</script>
</html>

My request handler:

public class Default : IHttpHandler
{
    private int ind = 0;
    private string[] data = new string[] { "C:\\Test\\image11.jpg", "C:\\Test\\image22.jpg" };

    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "image/jpg";
        context.Response.CacheControl = "no-cache";
        // Return a jpeg image
        ind = Int32.Parse(context.Request.Params["Ind"]);
        string test = data[ind];
        context.Response.BinaryWrite(File.ReadAllBytes(test));
    }
}

The actual image data probably matters (because I can't reproduce anything like what you're describing so far using images I have here). Please link to an actual live site showing the problem?

Created attachment 577332 [details] images for sample application

I've attached images for testing. We don't have a live site; our application is installed on internal customers' servers.

(In reply to Boris Zbarsky (:bz) from comment #3)
> The actual image data probably matters (because I can't reproduce anything
> like what you're describing so far using images I have here). Please link
> to an actual live site showing the problem?

Is it reproducible with attached images?

Yep. Sorry for the lag.... I'll attach a testcase to this bug.
I'd assume we're dispatching the notification that makes the image loading content switch to the new request before the decode is done. That's sort of broken.

Created attachment 578395 [details] First image

Created attachment 578397 [details] Second image

Created attachment 578407 [details] Testcase, relies on loading those images from file://

Over http from Bugzilla the 200ms timeout is too small for me to get the broken effect; the response is just not fast enough. And if I precache the images the flicker disappears...

In my testing with our proprietary code (I cannot post) I pull images from our external device as fast as they can load (binding on the load event callback) - I am displaying monochrome on black background. Since upgrading to 8.0 this is extremely noticible if network load times for images are > 200 ms per image - anything faster than that and you don't see the flash of the black background - anything slower than that and it's very noticible. Same network speeds and different browsers you don't see anything like this at all. It's like the imaging library changed how it renders image data in the browser. Local testing on really fast network loads it is not reproducable.

jcurtis, whatever you're seeing sounds different from this bug. The code in this bug does not wait for load events, for one thing. I'd appreciate you filing a separate bug and using to find when the problem you see first appears for you, since you can't share the site that shows the problem.

Hi Boris, I am having the same issue. We used to be proud of Firefox to demo our system on, but now it is the slowest of the bunch. The problem seems to be related to image size. The following attachment can cause the issue even when loading from local disk. In the past < FF8 this image used to show in a few milliseconds, now takes many seconds to load.
All other browsers are fine

Created attachment 580210 [details] Large image loading very slowly in FF8

Large image loading very slowly in FF8

Samir, that's a separate issue from what this bug is about, related to async decoding of images. There are existing bugs on that. Again, this bug is about flicker on dynamic src _changes_.

I'm seeing something similar on the following page:

Using mozregression, it seems the regression was introduced by the following changeset:

changeset: 79230:8b89d7037306
user: Matt Woodrow <[email protected]>
date: Wed Oct 26 16:24:58 2011 +1300
summary: Bug 695275 - Fix conversion of ThebesLayers to ImageLayers. r=roc

Here is a detailed presentation of my issue. When clicking on an image on the page above, a larger version of the image is displayed. To provide quick feedback, I use a canvas containing a low quality version of the image, and I put above it an img element containing a higher quality image. Previously, this image was loaded progressively over the canvas, in a smooth way (the part not yet loaded was transparent). Now, I get a dark background for a short time while the canvas is being progressively replaced by the image.

Jérôme, you're seeing something very different from this bug, I think. In particular, this bug was filed on a build that predates the changeset you mention in comment 16.

Could you please file a separate bug on your issue? Also, we're probably going to need a public URL or testcase; the one you posted is "localhost", which means your local computer, not a server on the Internet.

Boris - I am sorry i haven't been able to run mozregression until today. And the flickering was introduced on 7/22/11 - the nightly build from 7/21/11 has smooth image transitions - but 7/22/11 clearly introduced erasing of the background before redrawing (that is what it seems like) and shows as a black flicker because the background is black. We can't use firefox for our product until this gets addressed.
I will go ahead and post another bug.

(In reply to Boris Zbarsky (:bz) from comment #17)
> Jérôme, you're seeing something very different from this bug, I think. In
> particular, this bug was filed on a build that predates the changeset you
> mention in comment 16.
>
> Could you please file a separate bug on your issue?

Indeed, this appears to be a different issue. I have filed a separate bug report (bug 717572).

*** Bug 717323 has been marked as a duplicate of this bug. ***

Here is a testcase from bug 717323

I also have this issue. Google didn't offer any good solutions and Firefox 13 & Nightly also have this issue.

Problem: Need to show a low quality image while the full quality one is still downloading.

Solution: I decided to go with a specific "css hack" for Mozilla only. I used jQuery. The image flicker is still present but it is not visible to the human eye because the low quality image is in the background. The solution works for my current needs. Maybe someone will find this helpful.

$wrapper = jQuery('#wrapper');
$image = $wrapper.find('img');
if (jQuery.browser && jQuery.browser.mozilla) {
    $wrapper.css({
        'background-color': '#FFF',
        'background-repeat': 'no-repeat',
        'background-attachment': 'scroll',
        'background-size': '100% 100%',
        'background-position': 'center center'
    });
    $image.removeAttr('src');
    $wrapper.css('background-image', 'url("' + image_path + '")');
} else {
    $image.attr('src', image_path);
}

Html:

<div id="wrapper"><img src="somepath.jpg"></div>

Here's some simple code that demonstrates the problem. Create your own 1.png, 2.png, 3.png, 4.png to complete the example. Create them at 1440 X 900 pixels. You'll see the screen flash when the image source is changed. If you run this same test on IE or Chrome - no flash on update.
<!DOCTYPE html/>
<html>
<head>
<title></title>
<script type="text/javascript">
    var ct = 0;
    function click() {
        var im1 = document.getElementById("myimage");
        im1.src = "" + (ct + 1) + ".png";
        ct = (ct + 1) % 4;
    }
</script>
</head>
<body>
<a href="javascript:click();">click here</a><br />
<img id="myimage" src="" alt=""/>
</body>
</html>

I am also seeing this bug on Firefox 17 and 18. Setting the src attribute of an <img> sporadically causes the blink whether or not the image is pre-cached. Creating two stacked img's and setting src attributes and swapping visibility after a small delay following the onload trigger seems to work around the issue (though very clumsily). Any progress tracking this down since IE and Chrome do not exhibit the same behavior?

Still seeing this bug in Firefox 33.1.1 (osx 10.9.5). Any plans or progress with fixing?

This bug is still present in Firefox 36.0.4 (Windows 8). An update would be appreciated, as we would like to use Firefox as a reference browser to showcase sites in client meetings, but sadly we can't until this is fixed.
There is a script on the site that preloads all the images so they are cached; even after waiting for all images to load, it still happens. Any replies or updates to this report are greatly appreciated.

We experienced the same problem. We ended up working around it by doing our image looping/transitions in a canvas. Firefox seems to handle what looks like the same job on a canvas just fine, which makes this bug even more puzzling to the layman.

I can reproduce this. I haven't debugged it at the Gecko level, but it seems pretty clear that we aren't decoding the preloaded images because they aren't visible. Fixing this is nontrivial without severely regressing memory usage. We need more sophisticated heuristics than the ones we currently have, which requires that we track more information about images than we currently are tracking. We should revisit this after bug 1123976.

(In reply to Chris.Lauer from comment #31)
> We experienced the same problem. Ended up working around it by doing our
> image looping/transitions in a canvas. Firefox seems to handle what looks
> like same job on a canvas just fine.

Yes, the canvas case is quite different, because when you draw an image to a canvas we decode and draw the image synchronously - it's required by the spec. Changing an <img> element's 'src' attribute or changing the 'background-image' property, by contrast, triggers asynchronous decoding. That can result in flashing because we may repaint before the image is fully decoded.

Using <canvas> is probably a better choice for this particular use-case. (Either a single <canvas> which you draw the Images into, or a <canvas> per frame.) A <video> would be even more efficient if it meets your needs.

First of all, thank you for your insights. Secondly, I'd just like to note that the test case (the client site) I posted in comment #30 can't be used to reproduce the issue anymore, since we found a (very simple, to be honest) workaround and implemented that.
I'd have liked to make it into a <canvas>, but time constraints only let me implement the workaround. For those who are interested, the workaround is simply having two more <img>s with the two possible images that may come next, placed behind the actual <img> so they're not visible.

(In reply to F. Rauch from comment #33)
> For those who are interested, the workaround is simply having two more
> <img>s with the two possible images that may come next, placed behind the
> actual <img> so they're not visible.

I'd advise others to use the <canvas> approach, which is guaranteed to work by the spec. Decoding heuristics for <img> elements can and will change in the future.

This is still a bad bug today. When using responsive images with lazy loading, the images flash/flicker when they're being swapped. It happens on both Firefox and Firefox mobile. In IE, Chrome and Chrome Mobile the image transition from old to new is smooth, unlike in Firefox. One more note: I think it has gotten worse in Firefox v45 - is that possible? At least it's only now that I noticed it.

I only just recently started seeing this problem, and I think it started after I upgraded to the 64-bit version of Firefox (Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:45.0) Gecko/20100101 Firefox/45.0). I see this problem on both my stationary PC and my laptop; both are running 64-bit Firefox on Windows 10. I also have a 32-bit Windows XP partition on my stationary PC, and when booting into WinXP and running 32-bit Firefox on it, the problem is gone.

I have to correct my comment from yesterday. When I tested the 32-bit version of Firefox on my Windows XP partition yesterday, I forgot to update the browser first. After updating my 32-bit Firefox from version 44.0.2 to the latest 45.0.2, I see the same problem as in my 64-bit Firefox on Windows 10. So, sorry... No relation to the OS or to 32- vs. 64-bit versions of Firefox, it seems.
The annoying flicker/flashing when replacing an image seems to have been introduced in version 45 (or in one of its minor revisions). I see this is a very old bug; maybe there's another one directly related to this recently (re?)introduced problem? I haven't succeeded in finding one, though...

I have created a new bug for the issue I see introduced in FF45. I don't know if the issue I'm describing is related to this bug - I have only seen the problem after upgrading to FF45, and this is a very old bug talking about FF8+ - but by description, the issues look very similar.

Is anybody actively working on this? The internet is full of websites with images that are hurt by this bug :(

Yes, this issue applies to multiple versions of Firefox, and it is quite annoying. Very easy to reproduce: just open, for instance, valamit.com and the flashes are easily visible. No other browser has this behavior (not even IE/Edge :-) Any news about it?
https://bugzilla.mozilla.org/show_bug.cgi?format=default&id=705826
I'm adding a foreign import in a GHC module from the RTS. I'm using a CPP directive to avoid the import in the stage1 compiler, since the RTS function I need doesn't necessarily exist in that version of the RTS.

> #if STAGE < 2
> … -- make do without the import
> #else
> … -- do the import and use it
> #endif

Do you have any criticism of this overall design? E.g. is there a preference to use Config.cStage instead of CPP and the STAGE symbol? Thanks.

P.S. - for some context, this is for Solution 1 at
http://www.haskell.org/pipermail/ghc-devs/2013-July/001652.html
On 5 Sept 05, at 02:49, Pier Fumagalli wrote:

> ...PS: I noticed that in the 2.1.x branch, most of the XMAPs in the
> block samples declare elements within the "map" namespace for
> patching, but don't declare the namespace itself. I suspect this is
> because ANT doesn't use a namespace-aware parser (beats me how), but
> XMLLINT reports failures for a number of files (see list below). In my
> blocks I declare the namespace and that doesn't seem to create any
> problems, should we go ahead and fix the wrong ones?..

Yes, one reason is to be able to use the refdoc block to generate documentation from them, by annotating them - they must be parseable for this to work.

-Bertrand
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200509.mbox/%[email protected]%3E
I:

- I do not want to use Processing
- I do not want to use any language other than those stated above
- I do want to display this image on my screen in any way, shape or form
- I do not want to display a live video feed from my webcam on my screen, or save such a feed to my hard drive
- The Java Media Framework is far too out of date. Do not suggest it.
- I would rather not use JavaCV, but if I absolutely must, I want to know exactly which files from the OpenCV library I need, and how I can use these files without including the entire library (and preferably without sticking these files in any sort of PATH; everything should be included in the one directory)
- I can use Eclipse on the 64-bit Win7 computer if need be, but I also have to be able to compile and use it on 32-bit Linux as well
- If you think I might or might not know something related to this subject in any way, shape or form, please assume I do not know it, and tell me.

@thebjorn has given a good answer. But if you want more options, you can try OpenCV or SimpleCV.
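Since the options just mentioned (OpenCV, SimpleCV, pygame) are all optional third-party dependencies, here is a small stdlib-only sketch - not part of any answer, just an illustration - that checks which of those modules is actually importable before you commit to one. The helper name is mine, not from any library:

```python
import importlib.util

def first_available(candidates=("cv2", "pygame", "SimpleCV")):
    """Return the first importable module name from `candidates`, or None.

    The default names are just the capture libraries discussed in the
    answers; importlib.util.find_spec() looks a module up without
    actually importing it.
    """
    for name in candidates:
        if importlib.util.find_spec(name) is not None:
            return name
    return None
```

You could then branch on the result (e.g. fall back from `cv2` to `pygame`) instead of wrapping imports in try/except blocks.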
Using SimpleCV (not supported in Python 3.x):

from SimpleCV import Image, Camera

cam = Camera()
img = cam.getImage()
img.save("filename.jpg")

Using OpenCV:

from cv2 import *

cam = VideoCapture(0)   # open the default camera
s, img = cam.read()     # s is True if a frame was captured
if s:
    imwrite("filename.jpg", img)  # save image

Using pygame:

import pygame
import pygame.camera

pygame.camera.init()
pygame.camera.list_cameras()  # camera detected or not
cam = pygame.camera.Camera("/dev/video0", (640, 480))
cam.start()
img = cam.get_image()
pygame.image.save(img, "filename.jpg")

Install OpenCV: install python-opencv bindings and numpy.

Install SimpleCV: install python-opencv, pygame, numpy, scipy and simplecv (get the latest version of SimpleCV).

Install pygame: install pygame.

import cv2

camera = cv2.VideoCapture(0)
while True:
    return_value, image = camera.read()
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cv2.imshow('image', gray)
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.imwrite('test.jpg', image)
        break
camera.release()
cv2.destroyAllWindows()

Some time ago I wrote a simple Webcam Capture API which can be used for that. The project is available on GitHub. Example code:

Webcam webcam = Webcam.getDefault();
webcam.open();
try {
    ImageIO.write(webcam.getImage(), "PNG", new File("test.png"));
} catch (IOException e) {
    e.printStackTrace();
} finally {
    webcam.close();
}

I wrote a tool to capture images from a webcam entirely in Python, based on DirectShow. You can find it here:.
You can use the whole application or just the FilterGraph class in dshow_graph.py in the following way:

from pygrabber.dshow_graph import FilterGraph
import numpy as np
from matplotlib.image import imsave

graph = FilterGraph()
print(graph.get_input_devices())
device_index = input("Enter device number: ")
graph.add_input_device(int(device_index))
graph.display_format_dialog()
filename = r"c:\temp\imm.png"
# np.flip(image, axis=2) required to convert image from BGR to RGB
graph.add_sample_grabber(lambda image: imsave(filename, np.flip(image, axis=2)))
graph.add_null_render()
graph.prepare()
graph.run()
x = input("Press key to grab photo")
graph.grab_frame()
x = input(f"File {filename} saved. Press key to end")
graph.stop()

I am able to achieve it this way in Python (Windows 10):

import pyautogui as pg  # for taking the screenshot
import time             # for the necessary delays
import subprocess

# Launch the Windows camera app
subprocess.run('start microsoft.windows.camera:', shell=True)
time.sleep(2)  # required!

img = pg.screenshot()  # take a screenshot using PyAutoGUI
time.sleep(2)  # required!
img.save(r"C:\Users\mrmay\OneDrive\Desktop\Selfie.PNG")  # save the screenshot at the desired location

# Close the camera app
subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)
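A note on the np.flip(image, axis=2) in the pygrabber sample-grabber callback above: DirectShow (like OpenCV) delivers pixel data in BGR channel order, while matplotlib's imsave expects RGB, so the channel axis has to be reversed before saving. A dependency-free sketch of that same channel reversal (the helper name is mine, not part of pygrabber):

```python
def bgr_to_rgb(image):
    """Pure-Python equivalent of np.flip(image, axis=2): reverse the
    channel order of every pixel in a rows x cols x 3 nested list."""
    return [[pixel[::-1] for pixel in row] for row in image]

# A 1x2 "image": one blue pixel and one red pixel, in BGR order.
bgr = [[[255, 0, 0], [0, 0, 255]]]
rgb = bgr_to_rgb(bgr)  # channels now in RGB order
```

Applying the reversal twice gets the original ordering back, which is why the same flip works in either direction.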
https://techstalking.com/programming/python/capturing-a-single-image-from-my-webcam-in-java-or-python/