text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
How to make widget stick to top of keyboard I'd like to make a widget that sticks to the bottom of the page, and then is pinned to the top of the keyboard (when it appears). Note how the input textfield is pinned to the keyboard in the image below: How would I do this? I tried putting it in the bottomNavigationBar, but this (obviously) didn't work. Is there a builtin way to do this? This is a working example of the thing you want. I think! Just copy/paste/run What's important in this example is the Expanded. A really nice widget that expands to as much space as it can get. And in result pushing the chat box down as much as possible (Bottom of the screen or bottom of the keyboard) import 'package:flutter/material.dart'; void main() => runApp(new MyApp()); class MyApp extends StatelessWidget { // This widget is the root of your application. @override Widget build(BuildContext context) { return new MaterialApp( title: 'Flutter Demo', theme: new ThemeData( primarySwatch: Colors.blue, ), home: new MyHomePage(title: 'Flutter Demo Home Page'), ); } }('49715760 Stackoverflow'), ), body: new Column( crossAxisAlignment: CrossAxisAlignment.stretch, children: <Widget>[ new Expanded( child: new Material( color: Colors.red, child: new Text("Filled"), ), ), new Container( color: Colors.white, padding: new EdgeInsets.all(10.0), child: new TextField( decoration: new InputDecoration( hintText: 'Chat message', ), ), ), ], ), ); } } How to make bottom "appbar" that sticks to the top of the keyboard , Id like the message text field to be at the bottom of the screen, but then stick to you could create a Stack, and then wrap the thing inside an Align widget with�. The best way to resolve this is to use a dedicated widget. MediaQuery.of(context).viewInsets.bottom will give you the value of the height covered by the system UI(in this case the keyboard). import 'dart:async'; import 'package:flutter/material.dart'; void main() => runApp(MyApp()); class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { var home = MyHomePage(title: 'Flutter Demo Home Page'); return MaterialApp( title: 'Flutter Demo', theme: ThemeData( primarySwatch: Colors.blue, ), home: home, ); } }( resizeToAvoidBottomInset: false, appBar: AppBar( title: Text(widget.title), ), body: _getBody(), floatingActionButton: FloatingActionButton( onPressed: () {}, tooltip: 'Increment', child: Icon(Icons.add), ), ); } Widget _getBody() { return Stack(children: <Widget>[ Container( decoration: BoxDecoration( image: DecorationImage( image: AssetImage("assets/sample.jpg"), fit: BoxFit.fitWidth)), // color: Color.fromARGB(50, 200, 50, 20), child: Column( children: <Widget>[TextField()], ), ), Positioned( bottom: MediaQuery.of(context).viewInsets.bottom, left: 0, right: 0, child: Container( height: 50, child: Text("Hiiiii"), decoration: BoxDecoration(color: Colors.pink), ), ), ]); } } How to make widget stick to top of keyboard, I'd like to make a widget that sticks to the bottom of the page, and then is pinned to the top of the keyboard (when it appears).Note how the input textfield is� I've got a "save" button which I want to push up together with the soft keyboard. So when the user clicks an EditText in my layout, then the button has to stay above the keyboard. Now the button becomes hidden underneath the keyboard. there is a lib for that: Widget build(BuildContext context) => FooterLayout( footer: MyFooterWidget(), child: PageMainContent(), ); Can't stick TextField to keyboard when opening keyboard on iOS , I'm trying to stick TextField to the top of keyboard like in messaging apps Widget build(BuildContext context) { return CupertinoPageScaffold(� I'm creating an application which has a "floating" widget which can be dragged around inside the application window. But it starts up, or tends to go behind other widgets sometimes. Is there any way to make sure that the widget in my application stays on top of all other widgets whenever it is made visible? Thanks. This worked for me, showBottomSheet( context: context, builder: (context) => Container( height: // YOUR WIDGET HEIGHT child: // YOUR CHILD ) showBottomSheet is a flutter inbuilt function. Avoiding the On-Screen Keyboard in Flutter, We can get the height of the on-screen keyboard by doing a MediaQuery and Then we can wrap our widget in a simple Padding widget that� To see if any of the volume keys where stuck on my keyboard I pressed each key a couple of times nothing happened to the volume bar and they weren't stuck, then I went to the volume icon on the task bar and turned the volume up to the maximum and it made the notification sound but then by itself turned the down to mute from 100 to 99, 98 etc.. How to fix bottom overflowed when keyboard shows error in Flutter , How to fix bottom overflowed when keyboard shows error in Flutter - Programming Addict Top Duration: 1:19 Posted: Jan 13, 2019 Step 1, Disconnect the keyboard from its power source. If you're using a laptop, this entails turning off and unplugging the laptop and removing its battery if possible. If you're using a standalone keyboard, unplugging it and/or removing the batteries will suffice.Step 2, Spray the keyboard with compressed air. Use the compressed air to blow any debris or dust out from the spaces between the keys and the keyboard base. Spritzing the compressed air around each key is a good idea; even if not. - I don't have the code on hand. But i think you can do this by making a Scaffold. In the body of the scaffold you place a expanded. The childs of the expanded are your chat messages etc. And under the expanded you place the text box like in whats app. The expanded will constantly expand to the end of your screen leaving enough room for a textbox if you place it under the expanded. If you can't figure it it i'll try it. Let me know! - android example? - This solution sadly is similar to the other in that it is not ideal as the TextField moves to its new position at a different speed to the keyboard, causing a less than ideal animation. - Thank you ! Using Positioned + MediaQuery nailed it :) - This is a nice clean solution, however not ideal as the TextField moves to its new position at a different speed to the keyboard, causing a less than ideal animation.
http://thetopsites.net/article/59979130.shtml
CC-MAIN-2021-04
refinedweb
1,025
60.95
Hello. I am using an XML Response as a DataSource. It has an array of books, and I can process the array by changing books[1] to books, so that it will iterate through each book. Each book also has chapters, so I'm trying to get the chapters that are specific to each book. I have a Book DataSource (XML) and a Chapters DataSource (XML), but I cannot figure out how to get the chapters that are specific only to the book I am processing. It seems like there would be an easy way to get the row index so that I can do something like: books[rowIndex]:chapters. In other words...get all of the chapters for book[1], then book[2], etc. Any help is appreciated! Solved! Go to Solution. Hi, I think I understand what you're trying to do, but the description isn't quite clear in places. My take: You have a Books datasource that you are iterating over. You have a second datasource called chapters. Based on my reading, you want to pull out the chapters from the second datasource for a given book. One part that is confusing me, is that if Books and Chapters are both datasource steps or "sources of data" rather than datasources in a SoapUI sense. If you do have two datasource steps, maybe frame your test like... In this method, you have to iterate over each book, but for each book you have to iterate over every chapter and for each chapter, check whether it belongs to the book of interest. If, you just have two distinct lumps of XML, one for Books and one for Chapters, and you've already managed to iterate Books, then you ought to look at XML Holder. You can import XML Holder by this... import com.eviware.soapui.support.XmlHolder; to then populate XML Holder you do something like... chaptersXmlHolder = new XmlHolder(chaptersVariable); chaptersXmlHolder.namespaces["ns1"] = theUrlForNs1Namespace; You can then (hopefully) directly access the chapters using the getNodeValue method on your XML holder. keyValuePath = "//${bookName}/${chapters}[1]"; firstChapterForBook = chaptersXmlHolder.getNodeValue(keyValuePath); Without a concrete example of the data you're working with, it's not possible to provide specific answers. You should look at this SmartBear link on XML Holder . View solution in original post Thank you for posting your query! Let's see if the Community can help you figure this one out. Any ideas @msiadak @ChrisA @HimanshuTayal @avidCoder ? Thank you Chris! @brianchi73 does this help?
https://community.smartbear.com/t5/API-Functional-Security-Testing/Array-of-Array-in-XML-Response/m-p/207791
CC-MAIN-2020-45
refinedweb
416
74.69
. 1 + mastrofallz on May 27th, 2010 at 1:03 pm said: Sounds great, can’t wait! 2 + reson8er on May 27th, 2010 at 1:07 pm said: Can’t wait to get home and watch this video! 3 + Plankfan on May 27th, 2010 at 1:08 pm said: Woot! 4 + Tebeau23 on May 27th, 2010 at 1:10 pm said: OMG YES! BEst Interview ever! I cannot wait for Killzone 3! Thanks sony! + Jeff Rubenstein on May 27th, 2010 at 4:04 pm said: Thank *you*! 5 + SS_keyblade on May 27th, 2010 at 1:11 pm said: Thanks Jeff, can’t wait to hear more hopefully at E3 and soon! 6 + Barnolde on May 27th, 2010 at 1:11 pm said: Amazing! Killzone 3 will be godlike! I just wish the guns were louder and had more of a booming, powerful effect, like the movie Heat. 7 + SS_keyblade on May 27th, 2010 at 1:12 pm said: BTW: Who’s winning the PlayStation beard-off? + Jeff Rubenstein on May 27th, 2010 at 4:04 pm said: Not me! 8 + The1stMJC on May 27th, 2010 at 1:12 pm said: Can’t wait to spend another 500hrs on the multiplayer pending on if they didn’t change much of it. 9 + ATOMS11 on May 27th, 2010 at 1:12 pm said: Every grip that I had with KZ2 seems to be first priority for Gorilla. That is a good sign! Cant wait to see gameplay! 10 + xeno3d on May 27th, 2010 at 1:13 pm said: Killzone 3!! YES!! BTW, can we PLEASE get an HD remastered version of the original Killzone for the PS3?? PLEASE??? 11 + emzee83 on May 27th, 2010 at 1:14 pm said: WOOOOT! 12 + FFObsessed on May 27th, 2010 at 1:16 pm said: OMGGGG Co-op? Multiplayer details? Please use dedicated dervers. I’ll fund it myself! 13 + Mj2sL on May 27th, 2010 at 1:16 pm said: Nice interview, but the audio (left-right) is flipped. + Jeff Rubenstein on May 27th, 2010 at 4:06 pm said: This is what I get for editing in the car. Should’ve normalized. Thanks. 14 + Frejim on May 27th, 2010 at 1:18 pm said: NICE!!!! 15 + sfgdfds on May 27th, 2010 at 1:19 pm said: awesome. would be great to see some of that gameplay in the background in the foreground (does that make sense?) I think you know what I mean. thanks for the interview; it is going to be really good and a must-buy for me. 16 + Darvan on May 27th, 2010 at 1:20 pm said: Good questions Jeff great Interview can’t wait to learn more. 17 + SeanScythe on May 27th, 2010 at 1:22 pm said: Wait is that KZ3 in the background being played? I haven’t played the game in a year so I’m not remembering that level and the melee kill looks different. + Jeff Rubenstein on May 27th, 2010 at 4:06 pm said: :-D 18 + lcmnick on May 27th, 2010 at 1:22 pm said: Awesome. 19 + ninjatuned on May 27th, 2010 at 1:22 pm said: no more swearing? really? that was a concern? otherwise it all sounds awesome and i can’t wait for the multiplayer details 20 + Azure-Edge on May 27th, 2010 at 1:23 pm said: Aww wish I could see the vid but it isn’t showing >.< Either way Killzone 3 will be beast. 21 + Karsghul on May 27th, 2010 at 1:23 pm said: Great interview, Jeff. I’m Looking forward to playing the beta? 22 + Barnolde on May 27th, 2010 at 1:24 pm said: Huge levels with no loading hiccups, less swearing, Rico not being as annoying? SOLD!!! 23 + trapper12 on May 27th, 2010 at 1:24 pm said: Hey, Jeff grew a beard. :) Anyways, can’t wait to see more of KillZone 3. 24 + Dragonzblaze on May 27th, 2010 at 1:25 pm said: killzmass 3 is incoming again cant wait for more info E3 around the corner woot!!!!!!!!!!!! 25 + emzee83 on May 27th, 2010 at 1:26 pm said: thanks for putting in my request Jeff + Jeff Rubenstein on May 27th, 2010 at 4:06 pm said: Thanks for requesting. 26 + Blakkternalx520 on May 27th, 2010 at 1:26 pm said: i will be getting this game on launch no matter how broke i’ll be afterward. 27 + roadrunner79 on May 27th, 2010 at 1:26 pm said: I loved Killzone 2′s multiplayer, but it was nearly impossible for big parties. If anything should be added, it should be split-screen multiplayer, whether it be online or offline or possibly both, but please include split-screen multiplayer. 28 + FFObsessed on May 27th, 2010 at 1:30 pm said: Excellent questions btw Jeff! 29 + RavageHeitaro on May 27th, 2010 at 1:31 pm said: OMG!! We finally get to see gameplay footage!!! And they are SICK! O.O 30 + esko on May 27th, 2010 at 1:32 pm said: OMFG I WANT KILLZONE 3 NOW I’m jumping back into KIllzone 2 today 31 + niked on May 27th, 2010 at 1:32 pm said: Its FIRST GAME PLAY FOOTAGE OF KZ3 in the background!!! MY GOD!!! The controls and recoil looks much better from what I can see! I need this game! BTW. I hope KZ3 have fully destructible environments! PLZ! 32 + Serr on May 27th, 2010 at 1:33 pm said: - Killzone 3 can be played with the Playstation Move - The multiplayer will feature small vehicles that can be used - The player can choose from 8 different classes - The classes can be personalized with a new reward system - But only the characters appearance and the perks can be changed - Guerilla Games is also working on a 2 player coop mode - And on a 4 player objective coop mode - The game is planned to be released in May 2011 33 + GuitarrassDeAmor on May 27th, 2010 at 1:34 pm said: Killzone 2 was absolutely amazing. From launch, I played and played and played. It was the reason I got into a clan (not in it now) and the reason I really, really started getting serious on the multiplayer aspect. The weight of the weapons was awesome, the graphics were fantastic, and the balance was there (for the most part). I hated changes like the control options and such to appeal to the annoying CoD players, but I still like Guerilla and will get Killzone 3 when it releases. Cannot wait, and I might actually buy a 3DTV for it… 34 + RandySore on May 27th, 2010 at 1:37 pm said: Jeff Thanks A lot for this interview. By the Way EPIC beard lol. + Jeff Rubenstein on May 27th, 2010 at 4:07 pm said: It’s as good as I can do. At least I don’t have a hairy back… 35 + UNCyrus on May 27th, 2010 at 1:37 pm said: Great interview 36 + Link01 on May 27th, 2010 at 1:38 pm said: I sooooo can’t wait…. I’m pissed it’ll be 2011 though :( 37 + Prox1mately on May 27th, 2010 at 1:38 pm said: Great job Jeff! :D I especially liked how you presented the suggestions we fans put in. I sensed a brief moment of nervosity there but damn you handled it well. Can’t wait to see the stuff you guys were talking about. As far as variety goes, if they’re gonna’ keep it at that level of variety through the game… I’m probably gonna’ die from being awestruck. 38 + Link01 on May 27th, 2010 at 1:43 pm said: what the hell… thanks for teasing with the footage we can’t really see in the background >.< 39 + NITROWOLF_2 on May 27th, 2010 at 1:45 pm said: Will we get Offline Storyline Split-screen? This needs to be brought back this gen Also why must you guys at Sony always tease us with these interview with gameplay footage for a game in the back. gaaaaaaaaaar 40 + reson8er on May 27th, 2010 at 1:45 pm said: I just saw a screenshot of Sev using the jetpack on 1up.com. I would like to formally request that Sev and Rico (and any other ISA troops in the snowy area) get snow environmental gear. I see the level is supposed to be super cold (hence all the Helghast in snow gear) but Sev is FLYING (making it even colder because of wind) yet he is in the same gear as always. I know players generally wont see themselves/Sev in 3rd person, but it made me do a double take when I saw him AND Rico in normal gear flying in the arctic conditions! Thanks for reading Guerrilla! Really looking forward to this! :D 41 + rc94 on May 27th, 2010 at 1:45 pm said: Kudos for asking those fan questions at the end, Jeff. Please pose these questions next time around: 1. Will the clan system be making a return (perhaps even with some tweaks?) 2. Will there be any reward, in terms of MP, for people that reached the 100000 points mark in KZ2? Thanks a lot, and again: great job on the interview. 42 + poodude on May 27th, 2010 at 1:47 pm said: Can’t wait for this game! I loved Killzone 2, but I would just make two small changes. 1. A better story. 2. Allow us to use any gun with any class in multiplayer. I loved the SMG, but I could never use it because I liked being a medic more. Would have been lovely if I could have done both. Anyhow, great game nonetheless, and I’m sure Killzone 3 will be even better. I can’t wait! 43 + IGLOO05 on May 27th, 2010 at 1:47 pm said: Great interview Jeff. Even though I was mostly watching the tv with the gamplay in the background:) + Jeff Rubenstein on May 27th, 2010 at 4:08 pm said: I thought that might happen :) 44 + Blkant on May 27th, 2010 at 1:47 pm said: More vehicles! Vehicles online! :) 45 + Blkant on May 27th, 2010 at 1:48 pm said: Also weapon pick ups online, not just class based MP! :P 46 + NITROWOLF_2 on May 27th, 2010 at 1:49 pm said: we need a clan system, and possible something that games don’t do and some sort of clan tournament arena/leader boards. Like game battles, have a learderbaord of clan and we can send reqwest to battle them. The game should be able to track who wins or loses. that would be awesome 47 + JinZX1 on May 27th, 2010 at 1:49 pm said: I’m sorry were you guys talking about something? Because I was too busy staring in awe of that TV behind you. 48 + BEASTXJASON on May 27th, 2010 at 1:50 pm said: On the fence for this one. The controlls never did seem dialed in for me. Even after the patch’s and extra settings it still felt off. Amazing game but Ill have to wait to play it before I order this bad boy. 49 + rc94 on May 27th, 2010 at 1:50 pm said: @47: Did you even play KZ2? 50 + waypoetic on May 27th, 2010 at 1:52 pm said: You’re a damn good interviewer, and a real cool guy Jeff. That was an awesome interview and i loved the tease in the background of some in-game footage. And yeah, a special edition of Killzone 3 would be much appreciated and i would buy it, no doubt. Looking forward to the first trailer for the game and looking forward to E3! Take care man.
http://blog.us.playstation.com/2010/05/27/killzone-3-for-ps3-reveal-guerrilla-games-interview/
CC-MAIN-2014-15
refinedweb
1,957
79.8
Answered by: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool Hi My application is working with a DataBase server which is connected through VPN. Some time i get an exception like... "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. Inner Exception:>>" Can you please let me know if you have come across any such scenario in your application & any solution for it? Question Answers. All replies I believe you need to increase connection timeout in your application to be able to establish connection. When application tries to open connection, then it has default timeout around 30 seconds. If connection will not be established during this time, then provider generates an error. If you are using .NET, you could increase this time setting ConnectionTimeout property of the connection class. But if server is not visible at all, setting this property will not help and you would need to diagnose while server is not visible over VPN Val Mazur I recently started having this problem, and here is some of the informaiton that I discovered to help me resolve it. On your SQL connection string, there is an option (Max Pool Size) that you can specify to increase the size of your connection pool. The default is 100 if you don't specify any other value. The default value should be sufficient in most cases. If you determine that you need more, you can specify a higher value in your connection. Replys to other posts in this forum recommend turning off connection pooling to resolve the issue, but this will significantly increase your overheand and decrease your performance. I don't recommend this. Another reply recommends setting the connection timeout to 30 seconds. Again, I don't recommend this unless you are running on a cluster, because it will unnecessarily increase your application overhead and it won't really solve your problem. You are running out of unused connections in the pool, decreasing the timeout does not make used connections available sooner, so the only fix is to make more unused connections available. If you analyze your application and find out that you need to have more than 100 concurrent connections open to your database, then the only way to fix the problem is to increase the size of you connection pool by adding "Max Pool Size" to your connection string. Search for "Max Pool Size" in MSDN if you need more information on how to do this. Also, make sure that your DB server has enough memory to handle the extra connections. Most often the problem is because database connections are being opened but never closed. Eventually they will time out and the problem will go away for a while, but will soon return. If you are opening a connection inside a loop, try moving the open/close connections outside the loop or call .close() on the connection at the end of the loop. Examine your code and make sure that you close all of your database connections as soon as you are done with them, this will return your connection to the pool so it can be used by the next database operation. If you don't explicitly close the connection when you are done, it will stay open and out of the pool until it times out, which may be several minutes. - Proposed as answer by Mark Brislane Friday, April 15, 2011 8:26 AM im using 'Using' block whilw creating command n reader objects.. it shud close the connection...but its not hapening.. im using enterprise library..... here is my code.... options.IsolationLevel = IsolationLevel.ReadCommitted'Setting Transaction Timout as 2 mins. options.Timeout =New TimeSpan(0, 2, 0) Using thisTransactionScope As TransactionScope = New TransactionScope(TransactionScopeOption.Required, options) GRIDDB.AddInParameter(cmd,"PropertyID", DbType.String, leaseInfo.PropertyID) GRIDDB.AddInParameter(cmd,"LeaseStart", DbType.DateTime, leaseInfo.OriginalLeaseStart) GRIDDB.AddInParameter(cmd,"TermStart", DbType.DateTime, leaseInfo.CurrentTermStart) GRIDDB.AddInParameter(cmd,"LeaseExp", DbType.DateTime, leaseInfo.LeaseExpiration) GRIDDB.AddInParameter(cmd,"LeaseType", DbType.Int64, leaseInfo.TypeOfLease.ValueId) GRIDDB.AddInParameter(cmd,"RenewalOption", DbType.String, leaseInfo.RenewalOptions) GRIDDB.AddInParameter(cmd,"RenewalSummary", DbType.String, leaseInfo.RenewalSummary) GRIDDB.AddInParameter(cmd,"OptionToPurchase", DbType.String, leaseInfo.OptionToPurchase) GRIDDB.AddInParameter(cmd,"Tenant", DbType.Int64, leaseInfo.SingleMultipleTenant.ValueId) GRIDDB.AddInParameter(cmd,"CurrentMonthCost", DbType.Decimal, leaseInfo.CurrMonthlyCostRate) GRIDDB.AddInParameter(cmd,"LeaseRate", DbType.Decimal, leaseInfo.YearlyBaseLeaseRate) GRIDDB.AddInParameter(cmd,"OperatingRate", DbType.Decimal, leaseInfo.YearlyEstOperatingRate) GRIDDB.AddInParameter(cmd,"SecurityDeposit", DbType.String, leaseInfo.SecurityDeposit) GRIDDB.AddInParameter(cmd,"Renewal", DbType.String, leaseInfo.Renewal) GRIDDB.AddInParameter(cmd,"ExitInfo", DbType.String, leaseInfo.ExitDetail) GRIDDB.AddInParameter(cmd,"Refusal", DbType.String, leaseInfo.RightOfFirstRefusal) GRIDDB.AddInParameter(cmd,"IsComplete", DbType.String, leaseInfo.IsComplete) GRIDDB.AddOutParameter(cmd,"isError", DbType.Int32, 4) 'executing the stored procedure GRIDDB.ExecuteNonQuery(cmd) flag = GRIDDB.GetParameterValue(cmd,"isError") cmd.Parameters.Clear()' cmd.Connection.Close() End Using ' End If thisTransactionScope.Complete()End Using 'If we reached this point then returning true If flag = 0 Then Return True End If GRIDDB =Nothing End Try pls reply soon.its very urgent... It is not clear what the actual issue is. Do you get opened connection? I also believe you do not need to use transaction if you execute only single INSERT SQL statement, since it is atomic anyway and adding transaction will only add overhead, but will not provide any value. You need to do 2 things to fix the issue: 1) Increase the Connection Timeout. By default it is 30 seconds. You can assign your own value to it. 2) As other than timout you are also getting the error that "the The timeout period elapsed prior to obtaining a connection from the pool.This may have occurred because all pooled connections were in use and max pool size was reached. Inner Exception:>>" So you can also increase the Connection Pool Size. I am not sure but I think that the default size is 50. You can assign your value to it like 60 or 70. - Hi, I had the same problem. I cheked latter that my connecctions aren't closed or disposed, so, for each called to BD the connection stayed open, resulting in a series of conecction in queue. Then the error disappear.. This article is old, however, this may be helpfull to everyone else. I had always thought it was sufficient to set the connection to null to close it. However, that seems to have been wrong. By adding the statement below, if (cnnX.State == ConnectionState.Open) cnnX.Close(); my problem vanished and the code ran very quickly. try { if (cnnX.State == ConnectionState.Closed) cnnX.Open(); cmdX.ExecuteNonQuery(); } catch (Exception ex) { string strErrMsg = ex.Message; } finally { if (cmdX != null) cmdX = null; if (cnnX.State == ConnectionState.Open) cnnX.Close(); // <<<<--- New addition if (cnnX != null) cnnX = null; }. - Hi, I also came across same problem. I dont know my solution will solve your problem too as I was having problem in my own application (while you have third party VPN connection software). I had windows application and I was trying to open 4-5 connections one by one in seperate threads. So I was getting above error. The solution I got is - I increased some time between starting two threads (in that way, increased time between opening two connections). So it got solved. You said ." I totaly disagree with that assertion. I believe it is a good practice to use the "using" statement as it will call the dispose function when the program counter will reach the } that close the "using". The dispose fucntion of SqlConnection is going to call the Close function as you can see it with The Reflector : protected override void Dispose(bool disposing) { if (disposing) { this._userConnectionOptions = null; this._poolGroup = null; this.Close(); } this.DisposeMe(disposing); base.Dispose(disposing); }the using keyword help for lisibility of the code. the using keyword has nothing to do with the GC (It is interpreted by the C# compiler) as the close method will be call when the program counter will reach the } of the using statement. The msdn explains that . " (). Could you, please, explain more deeply what is written in your book Chapter 9 ? My book and my comments are based on years of field experience. I have been able to solve leaking connection pools in a number of cases by REMOVING the Using statement and explicitly closing the Connection object. The questioner in this case was able to solve the problem in the same way. No, I agree, this should not make a difference. Yes, I too saw the generated code and thought that it should work. For some reason that I and those folks at Microsoft that reviewed the problem don't understand, it didn't. What did work was explicitly calling Close on the connection. __________________________________________________________________ William Vaughn Mentor, Consultant, Trainer, MVP “Hitchhiker’s Guide to Visual Studio and SQL Server (7th Edition)” Please click the Mark as Answer button if a post solves your problem! Hi William, A quick comment on your post. Our site is used by upwards of 3000 people at any time. Would you still only expect to see 50 connections? On some sales, we'd expect up to 50000 people to be looking at the site at any one time... again, would you still only expect 50 connections? How can you see the number of connections in use at any time? I currently look at SQL Server's "SP_WHO2" command to see connections to the database, however this is different to connections to the website itself. (Sorry for my naivity on this topic) Regards Andy. Wes I got the below error randomly on my production site . The weird this is it's hold the Guid (use as a primary key and base element that is used through out the application) .Hold means that each and every record related to that guid get corrupted .Interesting thing is I can insert a new record but it doesn't allow me to update any thing related to that guid and produce the same error .I did try to restart my IIS my Database but the problem never go away .The only thing that resolved it to replace the existing corrupted guid with a newly generated guid through out the the application database. The other thing that I did try is I replaced the connection string of my application on my local machine with the production connection string in this case it worked perfectly . ErrorMessage: type=System.Data.SqlClient.SqlException message=A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)Packet(Int32 bytesExpected)) at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteNonQuery(DbCommand command) at OpusFirst.SaveFieldController.SaveTextField(String loanGuid, String fieldName, String fieldValue, String additionalColumnName, String additionalColumnValue, String tableName, String databaseName, Boolean checkDiscrepancy, String UserID) I just had a question about relying on "using" to close the connection. We have been seeing the timeout on the connection pool issue and have a huge amount of code that relies on "using" to close connections. Before we go ahead and change all of our code, I'd like to replicate the problem in a simple test harness to show our team that "using" does not indeed close the connection. Do you have a sample that would raise this error? We tried something like: SqlCommand cmd = new SqlCommand("select 1"); for(int i = 0; i< 10000; i++) { SqlConnection conn = new SqlConnection("blah"); cmd.Connection = conn; using(conn) { cmd.ExecuteQuery(); } } And cannot replicate the issue. I'm a bit sceptical about calling Close() explicitly is different to the "using" statement. When using "using", it's not the Garbage Collector that closes the connection. The closing brace becomes a call to Dispose. This *explicitly* closes the connection. The GC would call Finalize, which itself would call Dispose. This *implicitly* closes it, at some unknown time. In the examples below, I've replaced the using by explicit calls to Close() in a try/finally block, and it made no difference whatsoever. Anyway, here's a bit of c#4 code that would run plenty of threads, each opening a DB connection and doing some work. I've artificially simulated long running statement (20 seconds) by using the WAIT FOR statement. This results in the same Exception as the initial poster System.Data.SqlClient.SqlException “Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.”. I believe this would be a genuine case of "you're doing a lot of hard work in your application, 100 parallel connections is not enough, increase the connection pool for this application." (or if you don't anything fancy or complicated, write better sql, add indexes, etc.. so that the queries return faster, which means the connections would be closed faster, therefore being available again in the pool). In Activity Monitor, there are 100 sessions shown, and none are blocked, they are all busy "doing things". using System; using System.Data; using System.Data.SqlClient; using System.Threading.Tasks; namespace ConsoleApplication3 { class Program { static void Main(string[] args) { using (IDbConnection conn = new SqlConnection(@"Data Source=localhost\SQLEXPRESS; Initial Catalog=MyDB; UID=myUser; Password=myPwd; Persist security info=false;")) { //breakpoint here to let you add a perfmon counter on ".Net Data Provider for Sql Server", "Number of Pooled Connections" //on this process instance which is only listed once the program has opened 1 connection. } Task[] tasks = new Task[500]; for (int i = 0; i < tasks.Length; i++) { tasks[i] = Task.Factory.StartNew((Action)delegate { DoFullDBStatement(); }); } Task.WaitAll(tasks); } private static void DoFullDBStatement() { using (IDbConnection conn = new SqlConnection(@"Data Source=localhost\SQLEXPRESS; Initial Catalog=MyDB; UID=myUser; Password=myPwd; Persist security info=false;")) { using (IDbCommand command = conn.CreateCommand()) { conn.Open(); command.CommandText = "select * from MyTable WAITFOR DELAY '00:00:20'"; command.CommandType = CommandType.Text; IDataReader reader = command.ExecuteReader(); } } } } } Now, another way to cause the same Exception, run this in Query Analyzer, and after a second, press the stop button. This results in a lock being held on this table, and it's not released because you pressed STOP. BEGIN TRANSACTION SELECT * FROM MyTable WITH (TABLOCKX, HOLDLOCK) WHERE 0 = 1 WAITFOR DELAY '00:05:00' ROLLBACK TRANSACTION Then run the same Console App as above. It will crash with the same TimeOut exception, but this time, it's caused by the Command not returning in time, because there is a lock held on MyTable, each connection is waiting for that lock to be released and it doesn't get released, so it timesout. The callstack is different. In SQL Activity Monitor, you will see the column "Locked By" has an ID in it and the wait type is LCK_M_IS. To resolve this one, you have to find in your application who is keeping a lock and blocking everyone else. Note: you can make things happen quicker in testing by setting the pool size to a smaller number than the default of 100 (for example just add: Max Pool Size=10). The number of connections that remain in the pool is a function of several factors. First, if the operation is efficient and fast, the number of users that the system can service/second goes up. Think of the connection pool like the line at Taco Time. If the clerks can't take the orders fast enough they open another register. The customers queue for the registers (just like SQL Server Connection Open operations queue for an available connection) and don't "time out" (go to another taco joint) unless the processing time overflows the capacity of the clerks at all available registers. So would adding cash registers help (adding to the Max Pool count)? Perhaps, but only if the taco stuffers in the back can keep up. So, without the analogy, the server can typically handle a full load with a relatively few number of connections. That's also because with web apps, the connections stay open for milliseconds while the query is being execute and the rowset is returned. If the app is written poorly or the query takes too long to execute the whole process breaks down.! Could you give me your opinion on this situation I have an application in C# Framework 4.0. Like many app this one connects to a data base to get information. In my case this database is SqlServer 2008 Express. The database is in my machine In my data layer I’m using Enterprise Library 5.0 When I publish my app in my local machine (App Pool Classic) · Windows Professional · IIS 7.5 The application works fine. I’m using this query to check the number of connections my application is creating when I’m testing it. SELECT db_name(dbid) as DatabaseName, count(dbid) as NoOfConnections,loginame as LoginName FROM sys.sysprocesses WHERE dbid > 0 AND db_name(dbid) = 'MyDataBase' GROUP BY dbid, loginame When I start testing the number of connection start growing but at some point the max number of connection is 26. I think that’s ok because the app works When I publish the app to TestMachine1 · XP Mode Virtual Machine (Windows XP Professional) · IIS 5.1 I works fine, the behavior is the same the number of connections to the database increment to 24 or 26, after that they stay at that point no matter what I do in the application. The problem: When I publish to TestMachine2 (App Pool Classic) · Windows Server 2008 R2 · IIS 7.5 I start to test the application the number of connection to the database start to grow but this time they grow very rapidly and don’t stop growing at 24 or 26, the number of connections grow till the get to be 100 and the application stop working at that point. I have check for any difference on the publications, especially in Windows Professional and Windows Server and they seem with the same parameters and configurations. Any clues why this could be happening? , any suggestions? Luis Forero Given that this thread has a number of very poor suggestions (increase the timeout, increase the pool size, flush the pool) that might address the symptoms, I can see how you're confused. These suggestions DO NOT solve the problem. You have a leaking connection pool. You MUST ensure that the connections are closed after use EACH time and EVERY time they are used. This means if something goes wrong, the connection must be closed--EXPLICITLY. You cannot depend on scope to close a connection (as you could with VB6), you can't expect the USING to work correctly (for reasons that I can't explain), you can't expect the garbage collector to close your connections. You MUST execute a Close method after the data is consumed in any application that does not have a persistent connection architecture. No ASP.NET (or similar) application can support that architecture. This means if you pass a DataReader to another layer in your application that layer must close the DataReader AND you must create the DataReader with the CloseConnection option. Folks, if you don't know how this mechanism works, don't bother answering the question. __________________________________________________________________! - Edited by William VaughnModerator Tuesday, December 06, 2011 6:09 PM - Proposed as answer by bpeikes Tuesday, December 06, 2011 6:15 PM Thanks William for clearing this up. My question is, (and I know you said you can't explain it), why doesn't using work? It's just plain insane that "USING" does not guarantee the closing of a connection. This makes the whole system suspect. This means IDisposable is not reliable, it means that "USING" is not reliable. What else is not reliable? Someone from MS should really pipe in and explain why the USING statement does not close a connection. I had a client that had an overflowing pool. We searched high and low for places in the code where the connections were getting opened but not closed. We found some but the problem did not go away. We finally isolated the problem to one routine that looked great, but it used the Using statement to close the connection. We looked at the IL and it looked correct. We removed the Using block and replaced it with an explicit Close and the problem went away. We asked Microsoft to explain (I taked to the ADO.NET engineers themselves) they could not. I just know that Using does not (always) work. It's one of those unsolved mysteries of life. "Using. Just say Close"! I believe you, and it really does make me wary of the whole runtime. Call it whatever you want, it's a bug that hasn't been fixed since the 1.0 Framework. The stranger thing about it, is that I have never been able to reproduce it with a simple test. For example, this does not break. while(true) { SqlConnection con = GetMyConnection(); using(con) { // run some query } } Doesn't ever throw an exception. Still doesn't even if I run something similar with multiple threads. Using does seem to do what it is supposed to. That's the one thing I don't understand is that if "using" doesn't work, then we should be able to reproduce the problem. I had the same problem - I was not able to replicate the problem. I don't like that - It feels too much like guessing. I was eventually able to replace the problem by reducing the "Max Pool Size" entry in the connection string to a really low number (e.g. 2). (see for details for how to do this) Once I had done that, the problem was easily reproducible, even with only a single thread. As William Vaughn suggests (very strongly), explicitly closing the connection (with a try...finally) completely eliminated the issue. I'm not sure if you were referring to my (bpeikes) code, but I did try to do this in multiple threads and was not able to reproduce the error. It appears that there is some scenario where "Using" does not work the same way that calling close explicitly does. If you have code that can reproduce the problem, please post it so we can get closer to the source of the issue. What people have proposed is that a using clause does not necessarily work the same as calling close explicitly. People have been able to fix this "problem" by removing "using" and making explicit calls to Close(). Why should it make a difference? - this should be the answer definetly. thanks for the information. how about when we use datasets and entity framework to do DB connections? when their work is finished, connection is handled by datasets or ef or do we need to close connections manually? "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." This is old, but anyway... If you have lots of db threads and a really low max pool size, then "Timeout..." is exactly the message you should be getting. Now, why did an explicit close fix the problem for you(ncook/vaughn)? Well, if you are doing an explicit close, then you are closing the connection sooner than if you wait for the end of the using block to call Dispose(). Dispose() has a couple lines of code that it executes before calling Close(). You might say that those 2 lines and the overhead of a method call don't take very much time. In absolute terms, that is true, but if you have a lot of threads running and lots of context switching happening, then those 2 lines will make the difference at some point. You found that point in your case was when the thread pool was set to 2. Nice experiment. So, you want to keep your connection open for as little time as possible. But you also want your code to be easily readable.
http://social.msdn.microsoft.com/Forums/en-US/c57c0432-c27b-45ab-81ca-b2df76c911ef/timeout-expired-the-timeout-period-elapsed-prior-to-obtaining-a-connection-from-the-pool?forum=adodotnetdataproviders
CC-MAIN-2014-41
refinedweb
4,011
55.84
Safari Books Online is a digital library providing on-demand subscription access to thousands of learning resources. Images are used on web pages all across the internet. They’re used in the form of application icons, corporate logos, and photos of you and your friends. Naturally, Silverlight includes a mechanism for displaying these types of content. In fact, Silverlight provides two different elements for showing them. These elements belong to the System.Windows.Controls namespace and target two different scenarios. We’ll cover the basic Image element first. Then, we’ll discuss the MultiScaleImage control, which enables a feature known as Deep Zoom. The Image element empowers you to display images from across the internet. In addition to loading images relative to your project, the Image element allows you to retrieve images from another domain. Take a look at how snippet 7.12 uses the Source property to get an image from the website.
http://my.safaribooksonline.com/book/web-development/silverlight/9781933988429/managing-digital-media/ch07lev1sec5
CC-MAIN-2014-10
refinedweb
154
51.55
I am using the TwitterAPI library in Python with the 30 day Premium API (Sandbox mode). Everything works until I try to add more search operators such as ‘maxResults’ : ‘500’. Whenever I add a new operator I get a 422 error. What is the right way to use these operators in a query? I have tried adapting the example given in the Twitter Documentation for ‘Example POST Request’ (), but nothing seems to work. My code is as follows: from TwitterAPI import TwitterAPI SEARCH_TERM = ‘#Dáil100 OR #soloheadbeg lang:en’ PRODUCT = ‘30day’ LABEL = ‘dev’ api = TwitterAPI("XXXX", "XXXX", "XXXX", "XXXX", auth_type='oAuth2') r = api.request('tweets/search/%s/:%s' % (PRODUCT, LABEL), {'query': SEARCH_TERM, "maxResults":"500"}) for item in r: print(item[‘text’] if ‘text’ in item else item)
https://twittercommunity.com/t/adding-search-operators-to-twitterapi-premium-search/121188
CC-MAIN-2019-09
refinedweb
126
63.29
How to Create a Simple Program in C++ Ever wanted to program in C++? Here's how to make a simple program. From the help of these examples you can make more programs, they only give a outline to you about programming in C++. They describe the structure of a C++ Program. [edit] Steps - Get a compiler and/or IDE. Two good choices are GCC, or if your computer is running Windows, Visual Studio Express Edition. A good, simple, easy to use compiler is the Bloodshed Dev-C++ - Some Examples (Copy and Paste this into a text/code editor) A simple program is given by Bjarne Stroustrup (developer of C++) to check your compiler: #include <iostream> #include <string> using namespace std; int main () { string s; cout << "Please enter your first name followed by a newline \n" ; cin >> s; cout << "Hello, " << s << '\n' ; return 0; // this return statement isn't necessary } // The sum of two numbers. ; } - Save this as sum.cpp, Don't confuse there are many other extensions for C++ files, choose any of them (like *.cc, *.cxx, *.c++, *.cp) . HINT : It should say Save as Type: {select "All Files"} - Compile it. For users of linux and gcc compiler Command : g++ sum.cpp Users of Window can use any C++ compiler Like MS Visual C++ or any they like to use - Run the program For users of Linux and gcc compiler Command : ./a.out (a.out is a executable file produce by compiler after compilation of program.) [edit] Tips - For more details about programming in C++ give a visit cplusplus.com - cin.ignore() prevents the program from ending prematurely and closing the window immediately (before you have time to see it)! Press any key if you want to end the program. - Add // before all of your comments. - Learn programming in C++ with ISO standards - Feel free to experiment! [edit] Things You'll Need - A text/code editor (e.g. vim, notepad, etc). - A compiler. - Alternatively, an IDE contains an editor and a compiler.
http://www.wikihow.com/Create-a-Simple-Program-in-C%2B%2B
crawl-002
refinedweb
333
73.07
On Wed, Apr 02, 2008 at 07:19:58PM -0400, Trond Myklebust wrote:> I'm just suggesting splitting out the namespace-specific part of struct> file into a separate structure that would be private to the VFS.> Something like> > struct file_descriptor {> struct file *file;> struct vfsmount *mnt;> atomic_t refcount;> };> > and then having the 'struct file' hold a reference to the superblock> instead of holding a reference to the vfsmount.> > Why would that be problematic for SCM_RIGHTS? We don't allow people to> send arbitrary references to 'struct file' using SCM_RIGHTS now; they> have to send descriptors.HUH? Descriptor is a number. There is no struct file_descriptor,let alone refcounting for such. There is a table, indexed by numberand containing references to struct file.If you want to shove pointer to vfsmount in there (what for? to wastesome memory and make SMP protection on access more interesting?), youcould do that, but IMO it's too ugly to consider.Anyway, what the hell for? It's more complex and buys you nothinguseful.
https://lkml.org/lkml/2008/4/2/617
CC-MAIN-2015-48
refinedweb
170
66.44
binban 1 Posted April 7, 2020 Share Posted April 7, 2020 Hello guys! i try to find info about this but couldnt find! i am levling dks, and to make my life easier i would like to set in my quest profile the currrent settings for ; #disable blockages in last 10 mins #turn on "try to detect evading mobs #mount Distance to 5 Also Failed to autoload my plugins! public class LoadPlugins { public static void Allplugins() { wManager.Plugin.PluginsManager.DisposeAllPlugins(); foreach (var p in wManager.wManagerSetting.CurrentSetting.PluginsSettings) { if (p.FileName == "BetterTalents.dll") p.Actif = true; if (p.FileName == "Death knight Item Manager.dll") p.Actif = true; } wManager.Plugin.PluginsManager.LoadAllPlugins(); } } Quote Link to comment Share on other sites More sharing options... Recommended Posts Join the conversation You can post now and register later. If you have an account, sign in now to post with your account.
https://wrobot.eu/forums/topic/12019-change-current-settings/
CC-MAIN-2022-40
refinedweb
147
51.85
templatizer.jstemplatizer.js Simple solution for compiling jade templates into vanilla JS functions for blazin' fast client-side use. If you're looking for something that works with pug, check out puglatizer. v2 has been released. See the changelog for breaking changes. What is this?What is this? Client-side templating is overly complicated, ultimately what you actually want is a function you can call from your JS that puts your data in a template. Why should I have to send a bunch of strings with Mustaches {{}} or other silly stuff for the client to parse? Ultimately, all I want is a function that I can call with some variable to render the string I want. So, the question is, what's a sane way to get to that point? Enter jade. Simple, intuitive templating, and happens to be what I use on the server anyway. So... Jade has some awesome stuff for compiling templates into functions. I just built templatizer to make it easy to turn a folder full of jade templates into a CommonJS module that exports all the template functions by whatever their file name. Is it faster?Is it faster? From my tests it's 6 to 10 times faster than mustache.js with ICanHaz. How do I use it?How do I use it? npm install templatizer - Write all your templates as individual jade files in a folder in your project. - Somewhere in your build process do this: var templatizer = require('templatizer'); // pass in the template directory and what you want to save the output file as templatizer( __dirname + '/templates', __dirname + '/output.js', options, // Optional function (err, templates) { console.log(err || 'Success!') } ); So a folder like this /clienttemplates user.jade app.jade /myfolder nestedTemplate.jade Compiles down to a JS file that looks something like this: var jade = require('jade-runtime') // This is a peerDependency var templates = {}; // a function built from the `user.jade` file // that takes your data and returns a string. templates.user = function () {} // built from the `app.jade` file templates.app = function () {} // the function // folders become nested objects so // myfolder/nestedTemplate.jade becomes templates.myFolder = {}; templates.myfolder.nestedTemplate = function () {} // the template function module.exports = templates; The awesome thing is... they're just functions at this point. Crazy fast, SO MUCH WIN!!!! DependenciesDependencies templatizer has jade-runtime as a peerDependency. In npm 3.x.x peerDependencies will no longer be installed by default. When this happens, you'll want to run the following npm install jade-runtime to install it yourself. Note: the currently published jade-runtime only works with the upcoming templatizer uses an npm publically scoped module that is a copy of the current runtime @lukekarrys/jade-runtime. This will be changed once [email protected] is released. APIAPI templatizer( templatesDirectory, outputFile?, options?, function (err, templates) {} ) templatesDirectory (string or array, required)templatesDirectory (string or array, required) A string or an array of paths to look for templates. The path can also be a glob instead that can be used to match *.jade files across multiple directories. For example: templatizer(__dirname + '/app/**/*.jade', ...); outputFile (string)outputFile (string) Optionally build the compiled templates to a file. The output will be a CommonJS module. If you don't build to a file, you'll want to supply a callback to do something else with the compiled templates. Options (object)Options (object) jade (object, default {}) jade is an object which will be passed directly to jade.compile(). See the Jade API documentation for what options are available. Here's an example where we set the Jade compileDebug option to true. templatizer(templatesDir, outputFile, { // Options jade: { compileDebug: true } }); globOptions (object, default {}) globOptions will be passed directly to node-glob. See the API docs for available options. transformMixins (boolean, default false) Set this to true to turn on mixin AST transformations. Jade has a feature called mixins which when compiled get treated as function declarations within the compiled function. Templatizer can pull these out of the compiled function and place them on the namespace of the parent function. For example: // users.jade ul each user in users mixin user(user) mixin user(user) // Jade mixin content Templatizer will compile this as // Compiled fn from file exports.users = function () {} // Compiled mixin fn exports.users.user = function (user) {} This is helpful as it allows you to call users() to create your list and then users.user() to render just a single item in the list. Callback (function)Callback (function) If the last parameter is a function, it will be treated as a callback. The callback will always have the signature function (err, templates) {}. Use this to respond to errors or to do something else with the source of the compiled templates file. This can be helpful if you don't want to write the compiled templates directly to a file, and you want to make modifications first. Argument orderArgument order Both the outputFile string and options object are optional. // Use just the callback to do something else with your templates // besides write them to a file templatizer(templatesDir, function (err, templates) { }); // Build to a file and do something in the callback templatizer(templatesDir, outputFile, function (err, templates) { }); // Use only with options templatizer(templatesDir, { /* options */ }, function (err, templates) { }); // Use with options and outputFile templatizer(templatesDir, outputFile, { /* options */ }, function (err, templates) { }); Passing client side data to templatesPassing client side data to templates Simply pass in data objects to make those variables available within the template: templatizer.Template({ title: ..., description: ...}); Using jade's &attributes(attributes) syntax: templatizer.Template.call({ attributes:{ class: ..., value: ...}} , data); templatizer.Template.apply({ attributes:{ class: ..., value: ...}} , [data]); CLICLI Templatizer comes with a bin script to use from makefiles/package.json scripts/etc, it works like this: $ templatizer -d path/to/templates -o /path/to/output/templates.js TestsTests Run npm test to run the tests (you'll need phantomjs installed). You can also run the tests in your browser with npm start and going to. ChangelogChangelog 2.0.3 - Return err from callback on jade compile errors (#94 @klausbayrhammer) 2.0.2 - Use publically scoped runtime from @lukekarrys/jade-runtime 2.0.0 Breaking Changes: - Async API Pass a callback as the last parameter with the signature function (err, templates) {}to know when compilation is complete. - Compiled templates are no longer UMD. The compiled templates are now only a CommonJS module. Global and AMD support have been removed. If you want to consume this file as an AMD module or global, you'll need to do that as part of a build step in your project. Try the require.jsconversion tool or amd-wrapfor AMD compatibility or creating a standalone build with browserifyfor global builds. jade-runtimeis no longer inlined. jade-runtimeis now installed as a peerDependencyand required from the compiled templates file. namespaceoptions have been removed. Since the compiled templates no longer have the option to attach to a global variable, the namespaceoptions are no longer relevant. - Mixin transformation is now off by default. Mixin transformation can be turned back on by using the option transformMixins: true. Also, the dynamic mixin compiler is no automatically turned on if opting-in to mixin transformation. LicenseLicense MIT ContributorsContributors - Aaron McCall github profile - Luke Karrys github profile If you think this is cool, you should follow me on twitter: @HenrikJoreteg
https://libraries.io/github/HenrikJoreteg/templatizer
CC-MAIN-2017-34
refinedweb
1,208
59.09
Member since 08-26-2020 5 0 Kudos Received 1 Solution 09-06-2020 06:44 AM 09-06-2020 06:44 AM I am using GetHTTP processor to get a html from the given url. I want to grab the innerHTML text of a particular class selector. I tried to achieve the same by using GetHTMLElement but this GetHTMLElement processor is giving N number of flowFiles. The selector what I am passing has only once in a html but I am getting n number of times. does anyone know why? In the below screenshot first 10 entries are unique and remaining redundant. My requirement is to fetch a web page html and extract required innerHTML text by passing selector or XPath and then make a JSON and insert into mongoDB. I am facing issue while extracting innerHTML text by passing css selector. Please help me to solve this. How can i achieve this. I have searched a lot but didn’t get any proper solution. I would really appreciate if you help me. ... View more Labels: 08-31-2020 05:45 AM 08-31-2020 05:45 AM 08-29-2020 10:22 AM 08-29-2020 10:22 AM I have achieved this by using AVRO schema. In place of putMongo Now I am using putMongoRecord and convertRecord for converting schema. Here is an example for AVRO schema : { "type": "record", "namespace": "nifi", "name": "fredSchema", "fields": [ { "name": "createdAt", "type": { "type": "int", "logicalType": "date" } } ] } Use the above AVRO schema, and remember the date what you are going to sen should be yyyy-MM-dd format. Thanks. ... View more 08-26-2020 04:07 AM 08-26-2020 04:07 AM I am trying to insert a flow file into MongoDb which has a createdAt date key record as an attribute. When I inserted that flow file into MongoDb using PutMongo processor, it saves the "createdAt" attribute as a string. I want this to be saved as an ISO date object in mongoDB. When I inserted flow file by sending "2020-08-26T04:00:00.000Z" PutMongo Processor inserts as a string. When I sent like this “ISODate("2020-08-26T04:00:00.000Z”)” it also inserts same string as it is into mongo. When I tried like this ISODate("2020-08-26T04:00:00.000Z”) without double quotation it throws error invalid object. I need output as : { “createdAt”: ISODate("2020-08-26T04:00:00.000Z”) } Kindly help if there is any way to do so. ... View more Labels:
https://community.cloudera.com/t5/user/viewprofilepage/user-id/81067/user-messages-feed/latest-contributions
CC-MAIN-2022-27
refinedweb
418
74.59
#include <PPPoELayer.h> Represents a PPPoE tag and its data A templated method to retrieve the tag data as a certain type T. For example, if tag data is 4B (integer) then this method should be used as getTagDataAs<int>() and it will return the tag data as integer. Notice this return value is a copy of the data, not a pointer to the actual data A templated method to copy data of type T into the tag data. For example: if tag data is 4[Bytes] long use this method like this to set an integer "num" into tag data: setTagData<int>(num) A pointer to the tag data. It's recommended to use getTagDataAs() to retrieve the tag data or setTagData() to set tag data The length of the tag data The type of the data, can be converted to PPPoEDiscoveryLayer::PPPoETagTypes enum (or use getType())
https://pcapplusplus.github.io/api-docs/structpcpp_1_1_p_p_po_e_discovery_layer_1_1_p_p_po_e_tag.html
CC-MAIN-2021-25
refinedweb
148
51.31
qt No options to configure Number of commits found: 42 Retire QT2. QT3 was released a few years ago and QT4 will be released soon. Start the QT2 deorbit by marking all ports that depend non-optional on qt23 DEPRECATED Suggested by: eik SIZEify. Chase the new location of libXft. Add NO_LATEST_LINK Bump PORTREVISION on all ports that depend on gettext to aid with upgrading. (Part 1) No member of the kde@ team has touched anything related to Qt2 in ages, so stop pretending to maintain these ports. Define USE_PERL5_BUILD, not erroneous USE_PERL. Submitted by: Oliver Eikemeier Define USE_PERL to make Perl available for (mostly deprecated) "perl -pi -e" construction. Make WRKSRC overridable by slave-ports. Declare (or resolve) conflicts for KDE and QT. Remove pkg-comment from remaining master/slave port sets. Approved by: portmgr (implicitly) Protect targets with .if target(...) ... .endif for targets that are redefined in slave ports. Slave port maintainers need to check that they aren't actually relying on the master port target to be executed. Exorcise the ghost of objprelink, which was killed and buried with kde2. This should unbreak some things, like some errors from portsdb -Uu, as well as the port build itself. Remove USE_NEWGCC, which is no longer supported or required. Submitted by: Tilman Linneweh <[email protected]> PR: ports/40571 Block installation if Qt3 is already installed to prevent people from clobbering the installations. Submitted by: lioux. Use correct -soname for qtgl shared library, so that libqtgl actually works. Previously libqtgl.so.4 was libked with soname of libqt2.so.4, so that when you link application with -lqtgl you are fine, but when you are trying to run resulting application it dies because libqt2 (which has no GL code) is dynamically linked instead. Not objected by: will fix Makefile, thanks to roam@ Fix OSVERSION, bad me. Pointed out by will. Add NO_QT_OBJPRELINK=yes, let -CURRENT installs smoothly. Remove -frerun-cse-after-loop, a vestige from the days when our compiler was slightly broken [1]. Use ${ECHO_CMD} instead of ${ECHO} where you mean the echo command; the ECHO macro is set to "echo" by default, but it is set to "true" if make(1) is invoked with the -s option while ECHO_CMD is always set to the echo command. Apply objprelink patch and use it only if MACHINE_ARCH=i386 *and* NO_QT_OBJPRELINK isn't defined. This was a little trickier than fixing the KDE stuff, but I think this will work ok. Enable to build this ports when you set MAKE_ENV on your shell environment. Preemptive note to keep people from asking me questions about AA support.. Bump png major Set DIST_SUBDIR=KDE Make MAINTAINER overridable for japanese/qt23. (I forgot to commit this along with other kde* ports) missing manpage. Remove -xft since it doesn't work with older X11. Now will people stop bothering me about putting that in, 'cause I'm not doing it again! Even if you do a patch that checks XFREE86_VERSION! Add one more MASTER_SITE. Update to 2.3.1. Support building QT with debugging turned, keyed off the QT_DEBUG variable. Replicate the fixes in the other two patches when building qt23 with debugging turned on. -pthread --> ${PTHREAD_LIBS} -D_THREAD_SAFE --> ${PTHREAD_CFLAGS} Fix last-minute change to avoid broken packaging for qt2-static.. Ladies and gentlemen, I give you QT 2.3.0! Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD 7 vulnerabilities affecting 11 ports have been reported in the past 14 days * - modified, not new All vulnerabilities
http://www.freshports.org/x11-toolkits/qt23/
CC-MAIN-2014-35
refinedweb
584
67.15
Hi, I wonder if anybody can set me straight as I am having a hard time understanding what is going on with INotifyPropertyChanged. My understanding of the events are: 1> Have a window with 2 textfields and a button 2> Create an instance of the Employee object 5> Bind both textboxes to the Employee instance 6> The button just does ((Employee)_bs.Current).Name = "Bill" Here is the Employee class: public class Employee : INotifyPropertyChanged { public event PropertyChangedEventHandler propertyChanged; public string Name { get { return _name; } set { if (_name != value) { _name = value; //notifyPropertyChanged(); } } } public void notifyPropertyChanged([CallerMemberName] string property) { if (propertyChanged != null) propertyChanged(this, new PropertyChangedEventArgs(property)); } } Now when I type something in textbox1 and lose focus, textbox2 is altered but when I click the button neither textbox is updated. Obviously if I uncomment the call to notifyPropertyChanged all works fine. This however brings up questions in my mind. 1> If I change textfield1 this must be changing the datasource and then the datasource somehow triggers textbox2 to re-fetch the data then displaying it. 2> If I click the button this changes the datasource but for some reason the control isn't triggered to fetch the data again. 3> If when I uncheck the notifyPropertyChanged line, it works, but when using events I would normally assign a method to an event ie. EmployeeInstance.PropertyChanged += myMethod, and this method would update controls/properties etc. So is there something going on in the background that I am just unaware of, if so what? Otherwise something is going wrong in my head! I hope that I have explained myself well enough and someone can put me straight. Thanks
https://www.daniweb.com/programming/software-development/threads/434313/inotifypropertychanged
CC-MAIN-2017-26
refinedweb
274
52.29
The java.lang.Object class contains a clone() method that returns a bitwise copy of the current object. protected native Object clone() throws CloneNotSupportedException Not all objects are cloneable. It particular only instances of classes that implement the Cloneable interface can be cloned. Trying to clone an object that does not implement the Cloneable interface throws a CloneNotSupportedException. For example, to make the Car class cloneable, you simply declare that it implements the Cloneable interface. Since this is only a marker interface, you do not need to add any methods to the class. public class Car extends MotorVehicle implements Cloneable { // ... } For example Car c1 = new Car("New York A12 345", 150.0); Car c2 = (Car) c1.clone(); Most classes in the class library do not implement Cloneable so their instances are not cloneable. Most of the time, clones are shallow copies. In other words if the object being cloned contains a reference to another object A, then the clone contains a reference to the same object A, not to a clone of A. If this isn't the behavior you want, you can override clone() yourself. You may also override clone() if you want to make a subclass uncloneable, when one of its superclasses does implement Cloneable. In this case simply use a clone() method that throws a CloneNotSupportedException. For example, public Object clone() throws CloneNotSupportedException { throw new CloneNotSupportedException("Can't clone a SlowCar"); } You may also want to override clone() to make it public instead of protected. In this case, you can simply fall back on the superclass implementation. For example, public Object clone() throws CloneNotSupportedException { return super.clone(); }
http://www.cafeaulait.org/course/week4/46.html
CC-MAIN-2017-04
refinedweb
270
56.76
I'm having trouble with a class which is fine until another method tries to access variables in the same class, seems like a scope issue to me. I get null pointers when i try to access an object which clearly works in a previous method. All my data and called methods work in readFile() but when i call asdf() to just print the same data, it looks like it didn't hold the values. import java.util.*; import java.io.*; public class Students { public static final int MAX = 100;// max # of Student objects in array private Student students[]; // array of Student objects private int count; // maintain # of objects in array public Students() { Student students[] = new Student[MAX]; count = 0; } public void readFile() { LineReader lr = new LineReader("sometext.txt"); String line; // line is used to hold values of lines being read, which are later parsed. String name = " "; int age = 0; double gpa = 0; do { line = lr.readLine(); name = line; try { line = lr.readLine(); age = Integer.parseInt(line); line = lr.readLine(); gpa = Double.parseDouble(line); } catch (NumberFormatException exception) { System.out.println("Found a null pointer instead of a number."); }; students[count] = new Student(name, age, gpa); ++count; } while (line != null); lr.close(); // This call of getName() works, the one below doesn't. System.out.println("Here is student 3: " + students[3].getName()); } public void asdf() { // getName() worked a few lines above, but not here, now I get null pointer instead of a name, age, and gpa print out. System.out.println("Here is student 3: " + students[3].getName()); } }
https://www.daniweb.com/programming/software-development/threads/191234/trouble-with-linereader
CC-MAIN-2017-26
refinedweb
257
66.44
All of my entities have a base class: public class Entity<TKey> : IEntity<TKey> { dynamic IEntity.Id { get { return this.Id; } set { this.Id = value; } } public TKey Id { get; set; } } For example Status entity: [MetadataType(typeof(StatusMetadata))] public partial class Status : Entity<byte> { public string Title { get; set; } } When I run the query against the database I get the following error: "The item with identity 'Id' already exists in the metadata collection. Parameter name: item". Is there any way to fix this or it's an issue caused by inheritance and I can't inherit my entities from any class? The reason is because you inherit from a class that already has an Id property of a different type. I have seen the same error in CodeMigrations. I had a property named "Version" of type string, and the EntityData data class where I was inheriting from also containes a Version property of type byte[]. This generated the same error as you mentioned The solution for this is just to don't use the same property names that are already in your base class.
https://entityframeworkcore.com/knowledge-base/23213271/the-item-with-identity--id--already-exists-in-the-metadata-collection--parameter-name--item
CC-MAIN-2021-17
refinedweb
183
54.02
One of the basic elements of programming languages are variables. Simply speaking a variable is an abstraction layer for the memory cells that contain the actual value. For us, as a developer, it is easier to remember the name of the memory cell than it is to remember its physical memory address. A valid name can consist of characters from 'a' to 'z' (in both lower and upper cases) as well as digits. No spaces or special characters, like umlauts and hyphens, are allowed in the name. Furthermore, variables have a specific data type like strings (characters), digits, lists or references to other variables. In Python, we may reuse the same variable to store values of any type. The type is automatically determined by the value that is assigned to the name. In order to define a variable with a specific value, simply assign this value to a name as follows: age = 42 name = "Dominic" places = ["Berlin", "Cape Town", "New York"] The Python interpreter creates the three variables age, name, and places, and assigns the value 42 to the first and "Dominic" to the second variable, and places becomes a list of three elements that contains the strings "Berlin", "Cape Town", and "New York". Namespaces All the variables from above are part of the same namespace and therefore have the same scope. Unless redefined as a local variable later on, a variable defined in the main program belongs to the global namespace, that can be accessed by any function in your Python program. The following example code demonstrates that and uses the two variables name and age in the function info(). age = 42 name = "Dominic" places = ["Berlin", "Cape Town", "New York"] def info(): print("%s is %i years old." % (name, age)) return info() The output consists of the single line that comes from the info(): $ python3 global.py Dominic is 42 years old. To be more precise, every module, class and function has its own namespace and variables are locally bound to that. In the next example we make use of two namespaces - the outer, global one from the main program and the inner, local one from the function simply named output(). The variable place exists in the main program (line 6) and is redefined as a local variable with a new value in line 2 of the function output(). def output(): place = "Cape Town" print("%s lives in %s." % (name, place)) return place = "Berlin" name = "Dominic" print("%s lives in %s." % (name, place)) output() The output consists of these two lines, whereas the first line originates from the main program (line 8) and the second line from the output(). At first the two variables name and place are defined in the main program (lines 6 and 7) and printed to stdout. Calling the output() function, the variable place is locally redefined in line 2 and name comes from the global namespace, instead. This leads to the output as shown below. $ python3 localscope.py Dominic lives in Berlin. Dominic lives in Cape Town. Modifying Global Variables in a Different Namespace The value of a global variable can be accessed throughout the entire program. In order to achieve that from within functions, Python offers the usage of the keyword global. The function below demonstrates how to use it and imports the variable name into the namespace of the function: def location(): global place place = "Cape Town" return place = "Berlin" print(place) location() print(place) The variable place is already defined in the main program (line 6). Using the keyword global in line 2, the variable becomes available in the function location() and can be set to a different value, immediately (line 3). The output of the the code is shown here: $ python3 globalscope.py Berlin Cape Town Without using the keyword global as seen in line 2, the variable place would be treated as a local variable in the function location() instead and the variable place from the main program is unchanged then. Detect the Scope of a Variable Python has two built-in methods named globals() and locals(). They allow you to determine whether a variable is either part of the global namespace or the local namespace. The following example shows how to use these methods: def calculation(): "do a complex calculation" global place place = "Cape Town" name = "John" print("place in global:", 'place' in globals()) print("place in local :", 'place' in locals()) print("name in global :", 'name' in globals()) print("name in local :", 'name' in locals()) return place = "Berlin" print(place) calculation() The output is as follows and shows the scope of the two variables place and name inside the function calculation(): $ python3 variablelist.py Berlin place in global: True place in local : False name in global : False name in local : True Using Global Variables in Practice Using and modifying global variables from inside functions is seen as a very bad programming style, as it causes side effects, which are rather difficult to detect. It is strongly recommended to use proper function parameters, instead. Acknowledgements The author would like to thank Mandy Neumeyer for her support while preparing the article. Links and References - Bernd Klein: Global, Local and nonlocal Variables, - Variable scope and lifetime, University of Cape Town, - Python 3: globals(), - Python 3: locals(),
https://stackabuse.com/local-and-global-variables-in-python/
CC-MAIN-2019-43
refinedweb
875
65.96
csSndSysSoundFormat Struct Reference [Sound system] The sound format. More... #include <isndsys/ss_structs.h> Detailed Description The sound format. This keeps information about the frequency, bits and channels of a sound data object. Definition at line 40 of file ss_structs.h. Member Data Documentation number of bits per sample (8 or 16) Definition at line 45 of file ss_structs.h. number of channels (1 or 2) Definition at line 47 of file ss_structs.h. Flags further describing the format of the sound data. Definition at line 52 of file ss_structs.h. Frequency of the sound (hz). Definition at line 43 of file ss_structs.h. The documentation for this struct was generated from the following file: - isndsys/ss_structs.h Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4.1/structcsSndSysSoundFormat.html
CC-MAIN-2016-18
refinedweb
129
62.44
29 March 2012 07:39 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> The expansion at the 20,000 tonne/year MA unit is scheduled to take place in May or June and will see the company’s overall capacity of MA increase to 60,000 tonnes/year, the source said. “Shandong Hongxin’s MA export volume is likely to increase to 1,800 tonnes each month from the second half of this year,” the source said. The company at present exports about 100 tonnes of MA each month, the source said. Shandong Hongxin also operates a 100,000 tonne/year unsaturated polyester resin (UPR) plant and three phthalic anhydride (PA) lines with a total production capacity of 120,000 tonnes/year at the same site. Other MA producers in the region
http://www.icis.com/Articles/2012/03/29/9545783/chinas-shandong-hongxin-ma-exports-to-surge-in-second-half-2012.html
CC-MAIN-2015-18
refinedweb
131
68.6
Solution for “Cross origin requests are only supported for HTTP.” error when loading a local file is Given Below: I’m trying to load a 3D model into Three.js with JSONLoader, and that 3D model is in the same directory as the entire website. I’m getting the "Cross origin requests are only supported for HTTP." error, but I don’t know what’s causing it nor how to fix it. My crystal ball says that you are loading the model using either file:// or C:/, which stays true to the error message as they are not http:// So you can either install a webserver in your local PC or upload the model somewhere else and use jsonp and change the url to Origin is defined in RFC-6454 as ...they have the same scheme, host, and port. (See Section 4 for full details.) So even though your file originates from the same host ( localhost), but as long as the scheme is different ( http / file), they are treated as different origin. Just to be explicit – Yes, the error is saying you cannot point your browser directly at Here are some options to quickly spin up a local web server to let your browser render local files Python 2 If you have Python installed… Change directory into the folder where your file some.htmlor file(s) exist using the command cd /path/to/your/folder Start up a Python web server using the command python -m SimpleHTTPServer This will start a web server to host your entire directory listing at - You can use a custom port python -m SimpleHTTPServer 9000giving you link: This approach is built in to any Python installation. Python 3 Do the same steps, but use the following command instead python3 -m http.server Node.js Alternatively, if you demand a more responsive setup and already use nodejs… Install http-serverby typing npm install -g http-server Change into your working directory, where your some.htmllives Start your http server by issuing http-server -c-1 This spins up a Node.js httpd which serves the files in your directory as static files accessible from Ruby If your preferred language is Ruby … the Ruby Gods say this works as well: ruby -run -e httpd . -p 8080 PHP Of course PHP also has its solution. php -S localhost:8000 In Chrome you can use this flag: --allow-file-access-from-files Read more here. Ran in to this today. I wrote some code that looked like this: app.controller('ctrlr', function($scope, $http){ $http.get('localhost:3000').success(function(data) { $scope.stuff = data; }); }); …but it should’ve looked like this: app.controller('ctrlr', function($scope, $http){ $http.get('').success(function(data) { $scope.stuff = data; }); }); The only difference was the lack of http:// in the second snippet of code. Just wanted to put that out there in case there are others with a similar issue. Just change the url to instead of localhost. If you open the html file from local, you should create a local server to serve that html file, the simplest way is using Web Server for Chrome. That will fix the issue. I’m going to list 3 different approaches to solve this issue: - Using a very lightweight npmpackage: Install live-server using npm install -g live-server. Then, go to that directory open the terminal and type live-serverand hit enter, page will be served at localhost:8080. BONUS: It also supports hot reloading by default. - Using a lightweight Google Chrome app developed by Google: Install the app, then go to the apps tab in Chrome and open the app. In the app point it to the right folder. Your page will be served! - Modifying Chrome shortcut in windows: Create a Chrome browser’s shortcut. Right-click on the icon and open properties. In properties, edit targetto "C:Program Files (x86)GoogleChromeApplicationchrome.exe" --disable-web-security --user-data-dir="C:/ChromeDevSession"and save. Then using Chrome open the page using ctrl+o. NOTE: Do NOT use this shortcut for regular browsing. Note: Use http:// like in case you face error. In an Android app — for example, to allow JavaScript to have access to assets via — use setAllowFileAccessFromFileURLs(true) on the WebSettings that you get from calling getSettings() on the WebView. Use http:// or https:// to create url error: localhost:8080 solution: fastest way for me was: for windows users run your file on Firefox problem solved, or if you want to use chrome easiest way for me was to install Python 3 then from command prompt run command python -m http.server then go to then navigate to your files python -m http.server If you use Mozilla Firefox, It will work as expected without any issues; P.S. Surprisingly, IntenetExplorer_Edge works absolutely fine!!! For those on Windows without Python or Node.js, there is still a lightweight solution: Mongoose. All you do is drag the executable to wherever the root of the server should be, and run it. An icon will appear in the taskbar and it’ll navigate to the server in the default browser. Also, Z-WAMP is a 100% portable WAMP that runs in a single folder, it’s awesome. That’s an option if you need a quick PHP and MySQL server. Though it hasn’t been updated since 2013. A modern alternative would be Laragon or WinNMP. I haven’t tested them, but they are portable and worth mentioning. Also, if you only want the absolute basics (HTML+JS), here’s a tiny PowerShell script that doesn’t need anything to be installed or downloaded: $Srv = New-Object Net.HttpListener; $Srv.Prefixes.Add(""); $Srv.Start(); Start-Process ""; While($Srv.IsListening) { $Ctx = $Srv.GetContext(); $Buf = [System.IO.File]::OpenRead((Join-Path $Pwd($Ctx.Request.RawUrl))); $Ctx.Response.ContentLength64 = $Buf.Length; $Ctx.Response.Headers.Add("Content-Type", "text/html"); $Buf.CopyTo($Ctx.Response.OutputStream); $Buf.Close(); $Ctx.Response.Close(); }; This method is very barebones, it cannot show directories or other fancy stuff. But it handles these CORS errors just fine. Save the script as server.ps1 and run in the root of your project. It will launch index.html in the directory it is placed in. Easy solution for whom using VS Code I’ve been getting this error for a while. Most of the answers works. But I found a different solution. If you don’t want to deal with node.js or any other solution in here and you are working with an HTML file (calling functions from another js file or fetch json api’s) try to use Live Server extension. It allows you to open a live server easily. And because of it creates localhost server, the problem is resolving. You can simply start the localhost by open a HTML file and right-click on the editor and click on Open with Live Server. It basically load the files using instead of using file://.... EDIT It is not necessary to have a .html file. You can start the Live Server with shortcuts. Hit (alt+L, alt+O)to Open the Server and (alt+L, alt+C)to Stop the server. [On MAC, cmd+L, cmd+Oand cmd+L, cmd+C] Hope it will help someone 🙂 I suspect it’s already mentioned in some of the answers, but I’ll slightly modify this to have complete working answer (easier to find and use). Go to:. Install nodejs. Install http-server by running command from command prompt npm install -g http-server. Change into your working directory, where index.html/ yoursome.htmlresides. Start your http server by running command http-server -c-1 Open web browser to or – depending on your html filename. I was getting this exact error when loading an HTML file on the browser that was using a json file from the local directory. In my case, I was able to solve this by creating a simple node server that allowed to server static content. I left the code for this at this other answer. It simply says that the application should be run on a web server. I had the same problem with chrome, I started tomcat and moved my application there, and it worked. I suggest you use a mini-server to run these kind of applications on localhost (if you are not using some inbuilt server). Here’s one that is very simple to setup and run: For all y’all on MacOS… setup a simple LaunchAgent to enable these glamorous capabilities in your own copy of Chrome… Save a plist, named whatever ( launch.chrome.dev.mode.plist, for example) in ~/Library/LaunchAgents with similar content to… <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" ""> <plist version="1.0"> <dict> <key>Label</key> <string>launch.chrome.dev.mode</string> <key>ProgramArguments</key> <array> <string>/Applications/Google Chrome.app/Contents/MacOS/Google Chrome</string> <string>-allow-file-access-from-files</string> </array> <key>RunAtLoad</key> <true/> </dict> </plist> It should launch at startup.. but you can force it to do so at any time with the terminal command launchctl load -w ~/Library/LaunchAgents/launch.chrome.dev.mode.plist TADA! 😎 💁🏻 🙊 🙏🏾 Not possible to load static local files(eg:svg) without server. If you have NPM /YARN installed in your machine, you can setup simple http server using “http-server” npm install http-server -g http-server [path] [options] Or open terminal in that project folder and type “hs”. It will automaticaly start HTTP live server. If you insist on running the .html file locally and not serving it with a webserver, you can prevent those cross origin requests from happening in the first place by making the problematic resources available inline. I had this problem when trying to to serve .js files through file://. My solution was to update my build script to replace <script src="..."> tags with <script>...</script>. Here’s a gulp approach for doing that: 1. run npm install --save-dev to packages gulp, gulp-inline and del. 2. After creating a gulpfile.js to the root directory, add the following code (just change the file paths for whatever suits you): let gulp = require('gulp'); let inline = require('gulp-inline'); let del = require('del'); gulp.task('inline', function (done) { gulp.src('dist/index.html') .pipe(inline({ base: 'dist/', disabledTypes: 'css, svg, img' })) .pipe(gulp.dest('dist/').on('finish', function(){ done() })); }); gulp.task('clean', function (done) { del(['dist/*.js']) done() }); gulp.task('bundle-for-local', gulp.series('inline', 'clean')) - Either run gulp bundle-for-localor update your build script to run it automatically. You can see the detailed problem and solution for my case here. er. I just found some official words “Attempting to load unbuilt, remote AMD modules that use the dojo/text plugin will fail due to cross-origin security restrictions. (Built versions of AMD modules are unaffected because the calls to dojo/text are eliminated by the build system.)” One way it worked loading local files is using them with in the project folder instead of outside your project folder. Create one folder under your project example files similar to the way we create for images and replace the section where using complete local path other than project path and use relative url of file under project folder . It worked for me - Install local webserver for java e.g Tomcat,for php you can use lamp etc - Drop the json file in the public accessible app server directory - Start the app server,and you should be able to access the file from localhost For Linux Python users: import webbrowser browser = webbrowser.get('google-chrome --allow-file-access-from-files %s') browser.open(url) Many problem for this, with my problem is missing “” example: jquery-1.10.2.js:8720 XMLHttpRequest cannot load It’s must be: I hope this help for who meet this problem. I have also been able to recreate this error message when using an anchor tag with the following href: <a href=":">Example a tag</a> In my case an a tag was being used to get the ‘Pointer Cursor’ and the event was actually controlled by some jQuery on click event. I removed the href and added a class that applies: cursor:pointer; cordova achieve this. I still can not figure out how cordova did. It does not even go through shouldInterceptRequest. Later I found out that the key to load any file from local is: myWebView.getSettings().setAllowUniversalAccessFromFileURLs(true); And when you want to access any http resource, the webview will do checking with OPTIONS method, which you can grant the access through WebViewClient.shouldInterceptRequest by return a response, and for the following GET/POST method, you can just return null. first close all instance of chrome. after that follow this. I used this command on mac . “/Applications/Google Chrome.app/Contents/MacOS/Google Chrome” –allow-file-access-from-files For windows: How to launch html using Chrome at “–allow-file-access-from-files” mode? Experienced this when I downloaded a page for offline view. I just had to remove the integrity="*****" and crossorigin="anonymous" attributes from all <link> and <script> tags
https://codeutility.org/cross-origin-requests-are-only-supported-for-http-error-when-loading-a-local-file/
CC-MAIN-2021-49
refinedweb
2,197
65.62
If you are building an InfoPath client-only solution and you need to filter drop-down list boxes, you can simply use the “Filter Data” feature when you set the Entries property for the control. However, since filters are not supported in browser-compatible form templates, how can you accomplish the same functionality? This is where .NET web services can “save the day!” By creating web methods that accept parameters, you can add those web methods as data connections and then pass the selected value from one drop-down list box to the appropriate data connection “queryField”. Once the queryField has been set, simply execute that data connection to retrieve the associated values. To setup this sample, you will need to have access to the SQL Server Northwind sample database and Visual Studio installed on your server. First, let’s create the web service and the two web methods we will use in this sample: Step 1: Open the appropriate web site - Launch Visual Studio - From the File menu, select Open and choose Web Site - Select File System and then navigate to: C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\TEMPLATE\LAYOUTS NOTE: By choosing to open the LAYOUTS folder, your web service will be available from all provisioned sites. If you want the web service only to be available from a specific site (i.e. the default site) you would want to open: C:\Inetpub\wwwroot\wss\VirtualDirectories\80 - Click Open - In the Solution Explorer, right-click on the web site and choose New Folder - Rename this folder to: WebServices - Because you may have multiple web services, let’s add a sub folder here that is specific to our web service: - Right-click on WebServices and choose New Folder - Rename this folder to: NorthwindTables Step 2: Create the web service - Right-click on NorthwindTables and choose Add New Item - From the Visual Studio installed templates list choose Web Service - In the Name box, rename this to: NorthwindTable.asmx - Uncheck the option “Place code in a separate file” and click Add Step 3: Add the web methods NOTE: For this sample, it is assumed the SQL Server database is installed on the same Microsoft Office SharePoint Server. - Add the following “using” declarations at the top of your code page: using System.Data; using System.Data.SqlClient; - Add the following web method to retrieve the CustomerID values from the Customers table in the Northwind database: [WebMethod] public DataSet GetCustomers() { // Create a SQL connection to the Northwind sample database SqlConnection cn = new SqlConnection(“Data Source=(local);Integrated Security=SSPI;Initial Catalog=Northwind”); // Create data adapter object passing it the SELECT // statement to retrieve the customer ID values SqlDataAdapter da = new SqlDataAdapter(“SELECT Customers.CustomerID FROM Customers Order By CustomerID”, cn); // Create a dataset object to store the data DataSet ds = new DataSet(); // Open the connection cn.Open(); // Fill the dataset da.Fill(ds, “Customers”); // Clean up cn.Close(); cn = null; da = null; return ds; } - Add the following web method to retrieve the associated orders for the selected customer: [WebMethod] public DataSet GetOrdersForSelectedCustomer(string strCustID) { // Create a SQL connection to the Northwind sample database SqlConnection cn = new SqlConnection(“Data Source=(local);Integrated Security=SSPI;Initial Catalog=Northwind”); // Create a string variable for the modified SQL statement string strOrdersSQL = “”; // Create a string variable for the default SQL statement string strOrdersOrigSQL = “SELECT * FROM Orders”; // Some of the customer ID values contain apostrophe’s – we need // to replace them with two single quotation marks so that all // single quotation marks in the CustomerID are parsed correctly. strCustID = strCustID.Replace(“‘”, “””); // Concatenate the default SQL statement with the “Where” clause // and add an OrderBy clause strOrdersSQL = strOrdersOrigSQL + ” Where CustomerID Like ‘%” + strCustID + “%’ Order By OrderID”; // Create data adapter object passing it the SELECT statement // to retrieve the OrderID values SqlDataAdapter daOrders = new SqlDataAdapter(strOrdersSQL, cn); // Create a dataset object to store the data DataSet Ds = new DataSet(); // Open the connection cn.Open(); // Fill the DataSet daOrders.Fill(Ds, “Orders”); // Clean up cn.Close(); cn = null; daOrders = null; return Ds; } - Build and save the project Step 4: Test the web methods NOTE: The Identity account of the Application Pool for the web site where this web service is published will need to have access to the SQL Server database. - Open a browser and navigate to: http://<server>/_layouts/WebServices/NorthwindTables/NorthwindTables.asmx (replace <server> with the name of your server) - You should see the two web methods created above along with the default HelloWorld web method: - Click the GetCustomers link and then click Invoke – this should return a list of the CustomerID values - Click the GetOrdersForSelectedCustomer link, in the strCustID box enter: BERGS and then click Invoke – this should return a list of only those OrderID values for BERGS Step 5: Create the InfoPath form - Design a new, blank, browser-compatible InfoPath Form Template - Add a drop-down list box to the view and modify the name to: SelectCustomer - Add another drop-down list box to the view and modify the name to: SelectOrder - Add a new “receive data” data connection to the NorthwindTables web service for each of the web methods created above as follows: - GetCustomers: - Enable the option “Automatically retrieve data when the form is opened” - Use ALFKI as the sample value for the strCustID parameter when prompted in the Data Connection Wizard - Uncheck the option “Automatically retrieve data when the form is opened” - Set a field’s value: Set the SelectOrder field to nothing (e.g. leave the Value blank) - Set a field’s value: Set the parameter value (strCustID) for the GetOrdersForSelectedCustomer data connection to the SelectCustomer field - Query the GetOrdersForSelectedCustomer data connection - Save the form locally as FilteredDrop-downs_IPFS.XSN Step 6: Publish the form - Publish the form to a server running InfoPath Form Services - Navigate to the form library where the form was published and click the New button - From SelectCustomer choose BERGS - Click SelectOrder – only those orders for BERGS are displayed - Select a different customer – notice the orders have also changed Scott Heim Support Engineer Encontrei num post do blog do JOPX , este conjunto de recursos sobre InfoPath 2007. General resources InfoPath 2007 resources General resources InfoPath General Overview InfoPath team Blog Designing Form I am a newbie in infopath and I have surfed almost every article to get my dependent dropdowns working! Most of the sites ask us to refer this URL.. This is a great article. I would like to add my comments for populating cascading dropdowns. You can also use repeating tables for this purpose. There is no need to call a webservice on change of each and every dropdown. Just write your logic in the codebehind and populate a repeating table onchange of a dropdown. Bind the dependent dropdown to this repeating table. Ensure to clear the reapeating table on next onchange event of your first dropdown! -Bhavana Bhat Hi Bhavana, Thank you for your suggestion as this is certainly another option! One of the main reasons I used a web service in this manner is that the InfoPath form template (for this functionality) requires no code – as such, the form template does not need to be "Administrator" deployed. Scott Heim When designing Microsoft Office InfoPath form templates, filtering can be used to limit the options that are displayed to users in certain controls. This out-of-the-box functionality can be used in list boxes, drop-down list boxes, combo boxes, repeating When designing Microsoft Office InfoPath form templates, filtering can be used to limit the options that are displayed to users in certain controls. However, if you are designing an Office InfoPath 2007 form template for a browser scenario, it should One of our customers here asked us to develop a simple InfoPath form, including dependant dropdown functionality. This is a great example, but what do you do if the controls are in a repeating table? The dependant control (SelectOrder) is "filtered" for every row in the repeating table based on the selected value of SelectCustomer in the current row. Hi Crisch, Are you referring to the behavior that when you click the SelectOrder box in an existing row that it only contains the values for the "newly added" row? If so, then correct – this would be expected behavior for this sample. If you wanted to be able to "refresh" the list when you move to a previously created row, you could add a new column with a button to the repeating table with a Rule that sets the query field and then queries the connection. The only other option may be using managed code but I have not explored that option as I was attempting to show how to accomplish this without using code in the InfoPath form template. Scott Hi, I have to admit that this post is long overdue. In the last two weeks, I came across a lot of people Hi, I have to admit that this post is long overdue. In the last two weeks, I came across a lot of people The InfoPath Team Blog has a great article on how to implement cascading dropdowns in InfoPath Forms Using managed code, it is possible to set the dropdown values dynamically even in the repeating context. The above URL contains an article on how to do it with a sample. Hello You are the hero, i was trin to do that from three days without filters but …. You solved the problem. Thanks alot. Hello I have q question. What if i have 3 dropdown lists. I applied same rule to 1est and 2nd as u did at "SelectCustomer " but its not working, it shows data in first ddl, but no data is selected in 2nd ddl. Pleaser help Thanks I solved the above problem it was my mistek. Thanks agin for this article u How to do custom themes for MOSS The InfoPath Team Blog has a great article on how to implement cascading dropdowns in InfoPath Forms Excellent article! Thanks so much. I needed to populate cascading dropdowns from SharePoint lists in the main site collection, so I just replaced the web service file with this: <code> <%@ WebService Language="C#" Class="ListAccess.ListAccessService" %> <%@ Assembly Name="Microsoft.SharePoint, Version=12.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"%> using System; using System.Web; using System.Web.Services; using System.Web.Services.Protocols; using Microsoft.SharePoint; using System.Xml; namespace ListAccess { [WebService(Namespace = "", Description = "Provides access to SharePoint Lists")] class ListAccessService : WebService { [WebMethod(Description = @"Gets filtered items from the specified list. Be sure to use the internal name for the filterField.")] public XmlDocument GetListItems(string listName, string filterField, string filterValue) { XmlDocument returnDoc = new XmlDocument(); using (SPSite site = SPContext.Current.Site) { using (SPWeb web = site.RootWeb) { SPList list = web.Lists[listName]; string queryText = String.Format(@"<Where> <Eq> <FieldRef Name=""{0}"" /> <Value Type=""Text"">{1}</Value> </Eq> </Where>", filterField, filterValue); SPQuery query = new SPQuery(); query.Query = queryText; SPListItemCollection items = list.GetItems(query); returnDoc.LoadXml(items.Xml); } } return returnDoc; } } } </code> Thanks Elliott! Hi, Is there anywhere a walkthrough for this procedure? I’m quite a newbe on this, an on step 2 visual web developpers 2008 express already giving the error: does not support opening Sharepoint Web sites. Hi pbakker_67, I don’t believe the "express" edition will provide the functionality you will need. I believe you will need a "Professional" level or greater to get the features for this type of functionality. Scott Is it possible to have rules or seperate xml's without creating a webservice to make this cascading work for browser enabled forms? I am able to set the second dropdown value based on the first, BUT it only takes the first value of the "new" xml list?? anyone tell me what version of VB to use with SP 2007 server 3.0 and form services 07? I think you need to correct this post as it is misleading. Because you can execute filters on web based infopath forms in Sharepoint 2010. As demonstrated here: sharepointsolutions.com/…/sharepoint-2010-tutorial-video-drop-down-filters great article. it works in default website generated from VS2005/08. but not in under (IIS)localhost. it throws error when clicking invoke btn for getcustomers as (The user is not associated with a trusted sql server connection.) i am having sql server seperately (but in same domain)and calling name of that server in data source.. pls help me..as iam in the halfway Hi Scott, Nice article! Thanks for posting it! May i know what are the basic tasks in Microsoft InfoPath 2010? <a href="">Sample Forms</a>
https://blogs.msdn.microsoft.com/infopath/2006/10/12/cascading-dropdowns-in-browser-forms/
CC-MAIN-2016-07
refinedweb
2,094
50.46
The spring attempts to reach a target angle by adding spring and damping forces. The JointSpring.spring force attempts to reach the target angle. A larger value makes the spring reach the target position faster. The JointSpring.damper force dampens the angular velocity. A larger value makes the spring reach the goal slower. The spring reaches for the JointSpring.targetPosition angle in degrees relative to the rest angle. The rest angle between the bodies is always zero at the beginning of the simulation. See Also: useSpring, JointSpring. using UnityEngine; using System.Collections; public class HingeExample : MonoBehaviour { void Start() { HingeJoint hinge = GetComponent<HingeJoint>(); // Make the spring reach shoot for a 70 degree angle. // This could be used to fire off a catapult. JointSpring hingeSpring = hinge.spring; hingeSpring.spring = 10; hingeSpring.damper = 3; hingeSpring.targetPosition = 70; hinge.spring = hingeSpring; hinge.useSpring = true; } }
https://docs.unity3d.com/ru/2020.2/ScriptReference/HingeJoint-spring.html
CC-MAIN-2020-45
refinedweb
140
63.76
curl_share_cleanup - clean up a shared object NAME curl_share_cleanup - Clean up a shared object SYNOPSIS #include <curl/curl.h> CURLSHcode curl_share_cleanup(CURLSH *share_handle); DESCRIPTION This function deletes a shared object. The share handle cannot be used anymore when this function has been called. Passing in a NULL pointer in share_handle will make this function return immediately with no action. RETURN VALUE CURLSHE_OK (zero) means that the option was set properly, non-zero means an error occurred as <curl/curl.h> defines. See the libcurl-errors.3 man page for the full list with descriptions. If an error occurs, then the share object will not be deleted. SEE ALSO curl_share_init(3), curl_share_setopt(3) This HTML page was made with roffit.
https://curl.se/libcurl/c/curl_share_cleanup.html
CC-MAIN-2020-50
refinedweb
118
67.76
Hi all, Am trying to create the file that contains some attributes and parameters to that attributes . and also want to give permissions to modify , rename, and delete like this .please give me hints to do this task . i wrote a program to create the empty file . below is that Code Java: import java.io.*; public class CreateFile { private static void doCreate() { File file = new File("NewFile1.txt"); boolean success = false; try { success = file.createNewFile(); } catch (IOException e) { e.printStackTrace(); } if (success) { System.out.println("File did not exist and was created.\n"); } else { System.out.println("File already exists.\n"); } } public static void main(String[] args) { doCreate(); } }
http://www.javaprogrammingforums.com/%20object-oriented-programming/6906-create-file-java-printingthethread.html
CC-MAIN-2016-26
refinedweb
108
55.71
Lightning Web Components are not supported as Quick Action. But we can use them with Lightning Component to overcome this. But issue come if we want to close Quick Action Modal from Lightning Web Components (LWC). Today we will cover how we can close the Quick Action Modal from Lightning Web Components. For the demo purpose I will create a simple Lightning Web Components and pass event into parent Lightning component. From Lightning component we will call the $A.get("e.force:closeQuickAction").fire(); to close the Quick Action. This is how our final output will look like: So now we will view our code: lwcQA.html <template> <lightning-card> <h3 slot="title"> <lightning-icon</lightning-icon> Close Quick Action From Lightning Web Components </h3> <div slot="footer"> </div> Record Id from Parent Component: {recordId} <br/> <lightning-button label="Close Quick Action" onclick={closeQuickAction}></lightning-button> </lightning-card> </template> lwcQA.js import { LightningElement, api } from 'lwc'; export default class LwcQA extends LightningElement { @api recordId; closeQuickAction() { const closeQA = new CustomEvent('close'); // Dispatches the event. this.dispatchEvent(closeQA); } } As we are calling event and passing data to parent component we don’t need the pub sub module here. We have also passed current record id from lightning component which we can use to do our further processing. We can use this to give user edit form using Lightning Web Components data service. Now we will check the Lightning Component code: QA_LC.cmp <aura:component <c:lwcQA </aura:component> QA_LC.js ({ closeQA : function(component, event, helper) { $A.get("e.force:closeQuickAction").fire(); } }) So with just few lines of code we can now use Lightning Web Components(LWC) inside Quick Action(QA) and easily close the Modal after the processing. You can find complete code on my GitHub repo. Start the development on Open Source Lightning Web Components here. Did you like the post or do you have any questions, let me know in comments. Happy Programming 🙂 2 thoughts on “Close Quick Action from Lightning Web Components” Excellent! Simple and well explained. Thank you.
https://newstechnologystuff.com/2019/12/29/close-quick-action-from-lightning-web-components/
CC-MAIN-2020-50
refinedweb
340
57.47
I read the sticky about helping with homework, so I hope this is ok. I'm new to programming, and I think I have the logic correct, but it doesn't give me the correct answer. We haven't learned arrays yet, so we have to do this using if statements and loops. The program is supposed to find all the prime numbers between 1 and 100. Find the sum, then output the average. Any help or suggestions would be appreciated. I'm not looking for the answer, just some help on what I'm doing wrong. Thanks. using System; public class PrimeNumbers { // Main method entry point public static void Main() { // declare variables // n starts at 2 as the first prime number // totalPrimeNumbers starts at 1 since 2 is a prime number // sumOfPrimes starts at 2 since 2 is the first prime number int n = 2, totalPrimeNumbers = 1, x; double sumOfPrimes = 2, average; // while loop when n <= 100 while (n <= 100) { // test if n is prime for (x = 2; x < n; x++) { if ((n % x) != 0) { sumOfPrimes = sumOfPrimes + n; totalPrimeNumbers++; // change value of x to end for loop x = n + 1; } } // increase n by 1 to test next number n++; } // calculate average average = sumOfPrimes / totalPrimeNumbers; // display average Console.WriteLine("The average of all prime numbers between 1 and 100 is: {0}", average); } }
https://www.daniweb.com/programming/software-development/threads/45747/average-of-prime-number-between-1-100
CC-MAIN-2020-34
refinedweb
222
66.57
sttp alternatives and similar packages Based on the "HTTP" category. Alternatively, view sttp alternatives based on common mentions on social networks and blogs. Spray9.4 0.0 sttp VS SprayActor-based library for http interaction. Http4s9.3 9.9 sttp VS Http4sA minimal, idiomatic Scala interface for HTTP. Akka HTTP9.1 9.3 sttp VS Akka HTTPThe Streaming-first HTTP server/module of Akka. Finch.io8.8 4.6 sttp VS Finch.ioPurely Functional REST API atop of Finagle. scalaj-http8.1 0.4 sttp VS scalaj-httpSimple scala wrapper for HttpURLConnection (including OAuth support). Dispatch6.9 1.5 sttp VS DispatchLibrary for asynchronous HTTP interaction. It provides a Scala vocabulary for Java’s async-http-client. requests-scala6.7 4.9 sttp VS requests-scalaA Scala port of the popular Python Requests HTTP client: flexible, intuitive, and straightforward to use. Scalaxb6.5 5.3 sttp VS ScalaxbAn XML data-binding tool for Scala that supports W3C XML Schema (xsd) and Web Services Description Language (wsdl) as the input file. Newman5.5 0.0 sttp VS NewmanA REST DSL that tries to take the best from Dispatch, Finagle and Apache HttpClient. See here for rationale. featherbed4.3 0.0 sttp VS featherbedAsynchronous Scala HTTP client using Finagle, Shapeless and Cats RösHTTP3.9 1.1 sttp VS RösHTTPA lightweight asynchronous HTTP API built with Scala.js in mind. Supports the JVM and Node.js runtimes as well as most browsers. lolhttp3.7 0.0 sttp VS lolhttpAn HTTP & HTTP/2 Server and Client library for Scala. Fintrospect3.0 4.0 sttp VS FintrospectLibrary that adds an intelligent HTTP routing layer to the Finagle RPC framework from Twitter. Tubesocks1.6 0.0 sttp VS TubesocksLibrary supporting bi-directional communication with websocket servers. Netcaty1.5 0.0 sttp VS NetcatySimple net test client/server for Netty and Scala lovers. jefe0.9 0.0 sttp VS jefeManages installation, updating, downloading, launching, error reporting, proxying, multi-server management, and much more for your stand-alone and web applications. scommons-api0.6 0.4 sttp VS scommons-apiCommon REST API Scala/Scala.js components Get performance insights in less than 4 minutes Do you think we are missing an alternative of sttp or a related project? Popular Comparisons README The Scala HTTP client that you always wanted! If you are an Ammonite user, you can quickly start experimenting with sttp by copy-pasting the following: import $ivy.`com.softwaremill.sttp.client3::core:3.0.0-RC11` import sttp.client3.quick._ quickRequest.get(uri"").send(backend) This brings in the sttp API and an implicit, synchronous backend. Quickstart with sbt Add the following dependency: "com.softwaremill.sttp.client3" %% "core" % "3.0.0-RC11" Then, import: import sttp.client3._ Type sttp. and see where your IDE’s auto-complete gets you! Other! Testing We offer commercial support for sttp and related technologies, as well as development services. Contact us to learn more about our offer!
https://scala.libhunt.com/sttp-alternatives
CC-MAIN-2021-10
refinedweb
489
51.95
Here’s a thoughtful piece by Directions on Microsoft suggesting what we have to get right this year. “Get Going on Tools” is right up there, but I don’t mean to single it out, because I think the whole list is dead right. Is there anything you’d change, or think is conspicuously missing? I found this to be an interesting quote: "Parts of Vista like the Web services framework cry out for tools. Microsoft needs to get Vista tools out to developers, particularly to Visual Basic developers who are less comfortable programming to a raw API." He’s talking about Indigo, which is a well-designed, thought-out, managed API. How much less "raw" can you get? It should be easy for VB developers to use it. Of course I’m not saying there’s no need for extra tools, but still, the API is a set of .NET classes/namespaces/etc. which should be easy for VB developers to start using. That said, I’d like to encourage Microsoft to keep working on the "raw" API–the unmanaged Windows API. With all the fuss about WinFX, lots of the new stuff in Vista is still function-based and COM-based. But details and documentation are quite scarce. Here’s hoping Microsoft fully documents and stands behind these new APIs, despite them not being .NET-based. Documentation seems to be Microsoft’s most consistent, weakest asset. There is nothing more frustrating than to have to scour blogs and/or newsgroups to find helpful documentation and/or answers to your questions. With this in mind, maybe a short-term solution would be to incorporate some kind of rss feed program into the help application to get more current updates? Interesting. I also find blogs and newsgroups to be, on average, the most useful source of documentation. Visual Studio has excellent debug-time assistance, but the leap to the MSDN docs is painful. I see two issues. The first is that the leap to the MSDN docs doesn’t always work. The context-sensitive F1 could use some attention. The second issue is that the community is producing very worthwhile information that is disconnected from the MSDN docs, and is most easily accessible through a search engine. So often I just skip the MSDN docs entirely. I like the RSS idea. Actually, what I’d prefer is something more wikipedia-like, where people can augment the MSDN doc entries with more information. When I solve a problem and "grok" something I tend to make a "note to self" with a cheat sheet and some code — why not take the extra 5 seconds and make it public? So: I go to find out how an asp:CommandField works, get the "official" MS-crafted story at top, and stay for a bunch of community-contributed examples at the bottom that show how to implement frequently-used CommandField-related patterns. (Also — I know I just gave a Developer-oriented spin on the issue of documentation, but the issue Lucas calls out is far more pervasive than just the MSDN docs. However, I don’t always agree that the documentation is weak. For example, I’ve been working with Small Business Server at home, and, not being an IT Pro myself, I have been tremendously impressed at how its documentation and wizards have made it easy for me to set up and secure the server.)
https://blogs.msdn.microsoft.com/robburke/2005/12/27/microsofts-top-10-challenges-for-2006/
CC-MAIN-2017-09
refinedweb
571
62.27
In the previous post we looked at how to minimize Boolean expressions using a Python module qm. In this post we’d like to look at how much the minimization process shortens expressions. Witn n Boolean variables, you can create 2^n terms that are a product of distinct variables. You can specify a Boolean function by specifying the subset of such terms on which it takes the value 1, and so there are 2^(2^n) Boolean functions on n variables. For very small values of n we can minimize every possible Boolean function. To do this, we need a way to iterate through the power set (set of all subsets) of the integers up to 2^n. Here’s a function to do that, borrowed from itertools recipes. from itertools import chain, combinations def powerset(iterable): xs = list(iterable) return chain.from_iterable( combinations(xs, n) for n in range(len(xs) + 1)) Next, we use this code to run all Boolean functions on 3 variables through the minimizer. We use a matrix to keep track of how long the input expressions are and how long the minimized expressions are. from numpy import zeros from qm import q n = 3 N = 2**n tally = zeros((N,N), dtype=int) for p in powerset(range(N)): if not p: continue # qm can't take an empty set i = len(p) j = len(qm(ones=p, dc={})) tally[i-1, j-1] += 1 Here’s a table summarizing the results [1]. The first column gives the number of product terms in the input expression and the subsequent columns give the number of product terms in the output expressions. For example, of the expressions of length 2, there were 12 that could be reduced to expressions of length 1 but the remaining 16 could not be reduced. (There are 28 possible input expressions of length 2 because there are 28 ways to choose 2 items from a set of 8 things.) There are no nonzero values above the main diagonal, i.e. no expression got longer in the process of minimization. Of course that’s to be expected, but it’s reassuring that nothing went obviously wrong. We can repeat this exercise for expressions in 4 variables by setting n = 4 in the code above. This gives the following results. We quickly run into a wall as n increases. Not only does the Quine-McCluskey algorithm take about twice as long every time we add a new variable, the number of possible Boolean functions grows even faster. There were 2^(2^3) = 256 possibilities to explore when n = 3, and 2^(2^4) = 65,536 when n = 4. If we want to explore all Boolean functions on five variables, we need to look at 2^(2^5) = 4,294,967,296 possibilities. I estimate this would take over a year on my laptop. The qm module could be made more efficient, and in fact someone has done that. But even if you made the code a billion times faster, six variables would still be out of the question. To explore functions of more variables, we need to switch from exhaustive enumeration to random sampling. I may do that in a future post. (Update: I did.) *** [1] The raw data for the tables presented as images is available here.
https://www.johndcook.com/blog/2020/11/19/boolean-expression-compression/
CC-MAIN-2021-10
refinedweb
556
70.94
This C Program counts number of bits set to 0 in a integer x. Here is source code of the C Program to number of bits set to 0 in a integer x. The C program is successfully compiled and run on a Linux system. The program output is also shown below. /* * C Program to Count Number of bits set to 0 in a Integer x */ #include <stdio.h> #define NUM_BITS_INT (8*sizeof(int)) int count_unset(int); int main() { int i, num, snum, res, count = 0; printf("\nEnter the number"); scanf("%d", &num); /* * Check each bit whether the bit is set or unset * Uses >> and & operator for checking individual bits */ for (i = 0;i <= NUM_BITS_INT;i++) { snum = num >> i; res = snum & 1; if (res == 0) count++; } printf("%d", count); } $ gcc bit1.c $ a.out Enter the number128 31 $ a.out Enter the number -127 6 Sanfoundry Global Education & Learning Series – 1000 C Programs. Here’s the list of Best Reference Books in C Programming, Data-Structures and Algorithms If you wish to look at programming examples on all topics, go to C Programming Examples.
https://www.sanfoundry.com/c-program-count-bits-set-0/
CC-MAIN-2018-30
refinedweb
184
69.21
Hi Lalo, Yes, you're right that dynamic typing is very cool. I guess the thing that I find tricky in Python is that it is possible to accidentally add unintended properties to a class by misspelling the intended one, and sometimes it's tricky to catch this. For example (my Python syntax might be a little rusty..): class MyClass(): def PrintName(self): print "My name is " + self.myname myobject = new MyClass("Tom") myobject.PrintName() gives: C:\> python test.py My name is C:\> Obviously in this somewhat trivial example it's easy to debug, but the point is that this introduces logical errors into the code, that are never caught by the interpeter itself, only by seeing that the results are not what we want. _______________________________________________ vos-d mailing list [email protected]
https://www.mail-archive.com/[email protected]/msg00430.html
CC-MAIN-2018-51
refinedweb
135
64.3
Global error variable #include <errno.h> extern int errno; char * const sys_errlist[]; int sys_nerr; libc Use the -l c option to qcc to link against this library. This library is usually included automatically. The errno variable is set to certain error values by many functions whenever an error has occurred. This variable may be implemented as a macro, but you can always examine or set it as if it were a simple integer variable. The following variables are also defined in <errno.h>: The values for errno include at least the following. Some are defined by POSIX, and some are additional values.: In QNX Neutrino 6.4.0,. You can use the -e option to procnto to specify the value of EALREADY_DYNAMIC: /* * ) ); }
http://www.qnx.com/developers/docs/qnxcar2/topic/com.qnx.doc.neutrino.lib_ref/topic/e/errno.html
CC-MAIN-2020-05
refinedweb
122
59.5
Ridge regression is one of several regularized linear models. Regularization is the process of penalizing coefficients of variables either by removing them and or reduce their impact. Ridge regression reduces the effect of problematic variables close to zero but never fully removes them. We will go through an example of ridge regression using the VietNamI dataset available in the pydataset library. Our goal will be to predict expenses based on the variables available. We will complete this task using the following steps/ - Data preparation - Baseline model development - Ridge regression model Below is the initial code from pydataset import data import numpy as np import pandas as pd from sklearn.model_selection import GridSearchCV from sklearn.linear_model import Ridge from sklearn.linear_model import LinearRegression from sklearn.metrics import mean_squared_erro Data Preparation The data preparation is simple. All we have to do is load the data and convert the sex variable to a dummy variable. We also need to set up our X and y datasets. Below is the code. df=pd.DataFrame(data('VietNamI')) df.loc[df.sex== 'male', 'sex'] = 0 df.loc[df.sex== 'female','sex'] = 1 df['sex'] = df['sex'].astype(int) X=df[['pharvis','age','sex','married','educ','illness','injury','illdays','actdays','insurance']] y=df['lnhhexp' We can now create our baseline regression model. Baseline Model The metric we are using is the mean squared error. Below is the code and output for our baseline regression model. This is a model that has no regularization to it. Below is the code. regression=LinearRegression() regression.fit(X,y) first_model=(mean_squared_error(y_true=y,y_pred=regression.predict(X))) print(first_model) 0.35528915032173053 This value of 0.355289 will be our indicator to determine if the regularized ridge regression model is superior or not. Ridge Model In order to create our ridge model we need to first determine the most appropriate value for the l2 regularization. L2 is the name of the hyperparameter that is used in ridge regression. Determining the value of a hyperparameter requires the use of a grid. In the code below, we first are ridge model and indicate normalization in order to get better estimates. Next we setup the grid that we will use. Below is the code. ridge=Ridge(normalize=True) search=GridSearchCV(estimator=ridge,param_grid={'alpha':np.logspace(-5,2,8)},scoring='neg_mean_squared_error',n_jobs=1,refit=True,cv=10) The search object has several arguments within it. Alpha is hyperparameter we are trying to set. The log space is the range of values we want to test. We want the log of -5 to 2, but we only get 8 values from within that range evenly spread out. Are metric is the mean squared error. Refit set true means to adjust the parameters while modeling and cv is the number of folds to develop for the cross-validation. We can now use the .fit function to run the model and then use the .best_params_ and .best_scores_ function to determine the model;s strength. Below is the code. search.fit(X,y) search.best_params_ {'alpha': 0.01} abs(search.best_score_) 0.3801489007094425 The best_params_ tells us what to set alpha too which in this case is 0.01. The best_score_ tells us what the best possible mean squared error is. In this case, the value of 0.38 is worse than what the baseline model was. We can confirm this by fitting our model with the ridge information and finding the mean squared error. This is done below. ridge=Ridge(normalize=True,alpha=0.01) ridge.fit(X,y) second_model=(mean_squared_error(y_true=y,y_pred=ridge.predict(X))) print(second_model) 0.35529321992606566 The 0.35 is lower than the 0.38. This is because the last results are not cross-validated. In addition, these results indicate that there is little difference between the ridge and baseline models. This is confirmed with the coefficients of each model found below. coef_dict_baseline = {} for coef, feat in zip(regression.coef_,data("VietNamI").columns): coef_dict_baseline[feat] = coef coef_dict_baseline Out[188]: {'pharvis': 0.013282050886950674, 'lnhhexp': 0.06480086550467873, 'age': 0.004012412278795848, 'sex': -0.08739614349708981, 'married': 0.075276463838362, 'educ': -0.06180921300600292, 'illness': 0.040870384578962596, 'injury': -0.002763768716569026, 'illdays': -0.006717063310893158, 'actdays': 0.1468784364977112} coef_dict_ridge = {} for coef, feat in zip(ridge.coef_,data("VietNamI").columns): coef_dict_ridge[feat] = coef coef_dict_ridge Out[190]: {'pharvis': 0.012881937698185289, 'lnhhexp': 0.06335455237380987, 'age': 0.003896623321297935, 'sex': -0.0846541637961565, 'married': 0.07451889604357693, 'educ': -0.06098723778992694, 'illness': 0.039430607922053884, 'injury': -0.002779341753010467, 'illdays': -0.006551280792122459, 'actdays': 0.14663287713359757} The coefficient values are about the same. This means that the penalization made little difference with this dataset. Conclusion Ridge regression allows you to penalize variables based on their useful in developing the model. With this form of regularized regression the coefficients of the variables is never set to zero. Other forms of regularization regression allows for the total removal of variables. One example of this is lasso regression.
https://educationalresearchtechniques.com/tag/regularization/
CC-MAIN-2020-10
refinedweb
812
54.39
. NumberStringBooleanFunctionObjectSymbol (new in ES2015)... oh, and undefined and null, which are ... slightly odd. And Array,which is a special kind of object. And Date and RegExp, which are objectsthat you get for free. And to be technically accurate, functions are just aspecial type of object. So the type diagram looks more like this:NumberStringBooleanSymbol (new in ES2015)ObjectFunctionArrayDateRegExpnullundefinedAnd there are some built-in Error types as well. Things are a lot easier ifwe stick with the first diagram, however, so we'll discuss the types listedthere for now.NumbersNumbers in JavaScript are "double-precision 64-bit format IEEE 754values", according to the spec. This has some interesting consequences.There's no such thing as an integer in JavaScript, so you have to be a littlecareful with your arithmetic if you're used to math in C or Java. parseInt('010'); // 8parseInt('0x10'); // 16Here, we see the parseInt() function treat the first string as octal due tothe leading 0, and the second string as hexadecimal due to the leading"0x". The hexadecimal notation is still in place; only octal has beenremoved.If you want to convert a binary number to an integer, just change thebase: parseInt('11', 2); // 3Similarly, you can parse floating point numbers using the built-in parseFloat() function. Unlikeits parseInt() cousin, parseFloat() always uses base 10.You can also use the unary + operator to convert values to numbers:+ '42'; // 42+ '010'; // 10+ '0x10'; // 16A special value called NaN (short for "Not a Number") is returned if thestring is non-numeric:parseInt('hello', 10); // NaNNaN is toxic: if you provide it as an input to any mathematical operationthe result will also be NaN:NaN + 5; // NaNYou can test for NaN using the built-in isNaN() function:isNaN(NaN); // trueJavaScript also has the special values Infinity and -Infinity:1 / 0; // Infinity-1 / 0; // -InfinityYou can test for Infinity, -Infinity and NaN values using the built-in isFinite()function:isFinite(1 / 0); // falseisFinite(-Infinity); // falseisFinite(NaN); // falseThe parseInt() and parseFloat() functions parse a string until they reach acharacter that isn't valid for the specified number format, then return thenumber parsed up to that point. However the "+" operator simply converts thestring to NaN if there is an invalid character contained within it. Just try parsingthe string "10.2abc" with each method by yourself in the console and you'llunderstand the differences better. StringsStrings in JavaScript are sequences of Unicode characters. This should bewelcome news to anyone who has had to deal with internationalization.More accurately, they are sequences of UTF-16 code units; each code unitis represented by a 16-bit number. Each Unicode character is representedby either 1 or 2 code units.If you want to represent a single character, you just use a string consistingof that single character. To find the length of a string (in code units), access its length property:'hello'.length; // 5There's our first brush with JavaScript objects! Did we mention that youcan use strings like objects too? They have methods as well that allow youto manipulate the string and access information about the string:'hello'.charAt(0); // "h"'hello, world'.replace('hello', 'goodbye'); // "goodbye,world"'hello'.toUpperCase(); // "HELLO"Other typesJavaScript distinguishes between null, which is a value that indicates adeliberate non-value (and is only accessible through the null keyword),and undefined, which is a value of type undefined that indicates anuninitialized value that is, a value hasn't even been assigned yet. We'lltalk about variables later, but in JavaScript it is possible to declare avariable without assigning a value to it. If you do this, the variable's typeis undefined. undefined is actually a constant.JavaScript has a boolean type, with possible values true and false (bothof which are keywords.) Any value can be converted to a booleanaccording to the following rules:1.false, 0, empty strings (""), NaN, null, and undefined allbecome false.2.All other values become true.You can perform this conversion explicitly using the Boolean() function:Boolean(''); // falseBoolean(234); // trueHowever, this is rarely necessary, as JavaScript will silently perform thisconversion when it expects a boolean, such as in an if statement (seebelow).New variables in JavaScript are declared using one of threekeywords: let, const, or var. var is the most common declarative keyword. It does not have therestrictions that the other two keywords have. This is because it wastraditionally the only way to declare a variable in JavaScript. A variabledeclared with the var keyword is available from the function it is declaredin.var a;var name = 'Simon';An example of scope with a variable declared with var:// myVarVariable *is* visible out here '3' + 4 + 5; // "345" 3 + 4 + '5'; // "75"Adding an empty string to something is a useful way of converting it to astring itself. Comparisons in JavaScript can be made using <, >, <= and >=. These workfor both strings and numbers. Equality is a little less straightforward. Thedouble-equals operator performs type coercion if you give it differenttypes, with sometimes interesting results:123 == '123'; // true1 == true; // trueTo avoid type coercion, use the triple-equals operator: Dictionaries in Python.Hashes in Perl and Ruby.Hash tables in C and C++.HashMaps in Java.Associative arrays in PHP.The fact that this data structure is so widely used is a testament to itsversatility. Since everything (bar core types) in JavaScript is an object, anyJavaScript program naturally involves a great deal of hash table lookups.It's a good thing they're so fast! The "name" part is a JavaScript string, while the value can be anyJavaScript value including more objects. This allows you to build datastructures of arbitrary complexity. var obj = { name: 'Carrot', for: 'Max', // 'for' is a reserved word, use '_for' instead. details: { color: 'orange', size: 12 }};Attribute access can be chained together: obj.details.color; // orangeobj['details']['size']; // 12The following example creates an object prototype, Person and aninstance of that prototype, You.function Person(name, age) { this.name = name; this.age = age;} // Define an objectvar you = new Person('You', 24);// We are creating a new person named "You" aged 24.Once created, an object's properties can again be accessed in one of twoways://dot notationobj.name = 'Simon';var name = obj.name;And... // bracket notationobj['name'] = 'Simon';var name = obj['name'];// can use a variable to define a keyvar user = prompt('what is your key?')obj[user] = prompt('what is its value?')These are also semantically equivalent. The second method has theadvantage that the name of the property is provided as a string, whichmeans it can be calculated at run-time. However, using this methodprevents some JavaScript engine and minifier optimizations beingapplied. It can also be used to set and get properties with names thatare reserved words:obj.for = 'Simon'; // Syntax error, because 'for' is areserved wordobj['for'] = 'Simon'; // works fineStarting in ECMAScript 5, reserved words may be used as object propertynames "in the buff". This means that they don't need to be "clothed" in quoteswhen defining object literals. See the ES5 Spec.For more on objects and prototypes see: Object.prototype. For anexplanation of object prototypes and object prototype chainssee: Inheritance and the prototype chain.Starting in ECMAScript 2015, object keys can be defined by variable usingbracket notation upon being created. {[phoneType]: 12345} is possible insteadof just var userPhone = {}; userPhone[phoneType] = 12345.ArraysArrays in JavaScript are actually a special type of object. They work verymuch like regular objects (numerical properties can naturally be accessedonly using [] syntax) but they have one magic property called 'length'.This is always one more than the highest index in the array.One way of creating arrays is as follows: a.push(item);Arrays come with a number of methods. See also the full documentationfor array methods.Method name Description Returns a string with the toString() of each elementa.toString() separated by commas. Returns a string with the toLocaleString() of eacha.toLocaleString() element separated by commas.a.concat(item1[, item2[, ... Returns a new array with the items added on to it.[, itemN]]]) Converts the array to a string with values delimiteda.join(sep) by the sep parama.pop() Removes and returns the last item.a.push(item1, ..., itemN) Appends items to the end of the array.a.reverse() Reverses the array.a.shift() Removes and returns the first item.a.slice(start[, end]) Returns a sub-array.a.sort([cmpfn]) Takes an optional comparison function.a.splice(start, delcount[, Lets you modify an array by deleting a section anditem1[, ...[, itemN]]]) replacing it with more items.a.unshift(item1[, item2[, ... Prepends items to the start of the array.[, itemN]]]) FunctionsAlong with objects, functions are the core component in understandingJavaScript. The most basic function couldn't be much simpler: function add(x, y) { var total = x + y; return total;}This demonstrates a basic function. A JavaScript function can take 0 ormore named parameters. The function body can contain as manystatements as you like, and can declare its own variables which are localto that function. The return statement can be used to return a value atany time, terminating the function. If no return statement is used (or anempty return with no value), JavaScript returns undefined.The named parameters turn out to be more like guidelines than anythingelse. You can call a function without passing the parameters it expects, inwhich case they will be set to undefined.add(); // NaN// You can't perform addition on undefinedYou can also pass in more arguments than the function is expecting: add(2, 3, 4); // 5// added the first two; 4 was ignoredThat may seem a little silly, but functions have access to an additionalvariable inside their body called arguments, which is an array-like objectholding all of the values passed to the function. Let's re-write the addfunction to take as many values as we want:function add() { var sum = 0; for (var i = 0, j = arguments.length; i < j; i++) { sum += arguments[i]; } return sum;} add(2, 3, 4, 5); // 14That's really not any more useful than writing 2 + 3 + 4 + 5 though.Let's create an averaging function:function avg() { var sum = 0; for (var i = 0, j = arguments.length; i < j; i++) { sum += arguments[i]; } return sum / arguments.length;} (function() { var b = 3; a += b;})(); a; // 4b; // 2JavaScript allows you to call functions recursively. This is particularlyuseful for dealing with tree structures, such as those found in thebrowseryou call them recursively if they don't have a name? JavaScript lets youname function expressions for this. You can use named IIFEs(Immediately Invoked Function Expressions) as shown below: Custom objectsFor a more detailed discussion of object-oriented programming in JavaScript,see Introduction to Object Oriented JavaScript. s = makePerson('Simon', 'Willison');personFullName(s); // "Simon Willison"personFullNameReversed(s); // "Willison, Simon"This works, but it's pretty ugly. You end up with dozens of functions inyour global namespace. What we really need is a way to attach a functionto an object. Since functions are objects, this is easy:assigning references to them inside the constructor. Can we do any betterthanpart of a lookup chain (that has a special name, "prototype chain"): anytime you attempt to access a property of Person that isn't set, JavaScriptwill check Person.prototype to see if that property exists there instead.As a result, anything assigned to Person.prototypebecomes available toall instances of that constructor via the this object.This is an incredibly powerful tool. JavaScript lets you modify something'sprototype at any time in your program, which means you can add extramethods to existing objects at runtime: s.reversed(); // nomiSOur new method even works on string literals! Person.prototype.toString = function() { return '<Person: ' + this.fullName() + '>';} function nestedFunc() { var b = 4; // parentFunc can't use this return a + b; } return nestedFunc(); // 5}This provides a great deal of utility in writing more maintainable code. If afunction relies on one or two other functions that are not useful to anyother part of your code, you can nest those utility functions inside thefunction that will be called from elsewhere. This keeps the number offunctions that are in the global scope down, which is always a good thing. This is also a great counter to the lure of global variables. When writingcomplex code it is often tempting to use global variables to share valuesbetween multiple functions which leads to code that is hard tomaintain. Nested functions can share variables in their parent, so you canuse that mechanism to couple functions together when it makes sensewithout polluting your global namespace "local globals" if you like. Thistechnique should be used with caution, but it's a useful ability to have. ClosuresThis leads us to one of the most powerful abstractions that JavaScript hastoto the argument that it was created with.What's happening here is pretty much the same as was happening withthe inner functions earlier on: a function defined inside another functionhas access to the outer function's variables. The only difference here isthat the outer function has returned, and hence common sense wouldseem to dictate that its local variables no longer exist. But they dostillexist otherwise the adder functions would be unable to work. What'smore, there are two different "copies" of makeAdder()'s local variables one in which a is 5 and one in which a is 20. So the result of thosefunction calls is as follows:x(6); // returns 11y(7); // returns 27Here's what's actually happening. Whenever JavaScript executes afunction, a 'scope' object is created to hold the local variables createdwithin that function. It is initialized with any variables passed in asfunction parameters. This is similar to the global object that all globalvariables and functions live in, but with a couple of important differences:firstly, a brand new scope object is created every time a function startsexecuting, and secondly, unlike the global object (which is accessibleas this and in browsers as window) these scope objects cannot be directlyaccessed from your JavaScript code. There is no mechanism for iteratingover the properties of the current scope object, for example.So when makeAdder() is called, a scope object is created with oneproperty: a, which is the argument passed tothe makeAdder() function. makeAdder() then returns a newly createdfunction. Normally JavaScript's garbage collector would clean up thescope object created for makeAdder() at this point, but the returnedfunction maintains a reference back to that scope object. As a result, thescope object will not be garbage-collected until there are no morereferences to the function object that makeAdder() returned.Scope objects form a chain called the scope chain, similar to theprototype chain used by JavaScript's object.
https://de.scribd.com/document/363703440/Re-Introduction-to-JS
CC-MAIN-2019-47
refinedweb
2,407
56.76
points() Contents points()# Draw a collection of points, each a coordinate in space at the dimension of one pixel. Examples# import numpy as np def setup(): random_points = 100 * np.random.rand(500, 2) py5.points(random_points) Description# Draw a collection of points, each a coordinate in space at the dimension of one pixel. The purpose of this method is to provide an alternative to repeatedly calling point() in a loop. For a large number of points, the performance of points() will be much faster. The coordinates parameter should be a numpy array with one row for each point. There should be two or three columns for 2D or 3D points, respectively. Underlying Processing method: points Signatures# points( coordinates: npt.NDArray[np.floating], # 2D array of point coordinates with 2 or 3 columns for 2D or 3D points, respectively /, ) -> None Updated on September 01, 2022 16:36:02pm UTC
https://py5.ixora.io/reference/sketch_points.html
CC-MAIN-2022-40
refinedweb
148
57.47
Hi, there seems to be little to no consensus on how the device naming of multipathed devices should look like across distributions: The current kpartx udev rule in the upstream source (the patch came from Hannes) sets the device naming to: /dev/mapper/<wwid>-part[0-9]+ so I assume this is what SuSE uses (or intends to use in the future). Redhat/Fedora seems to skip the '-p -part' so partitions look like: /dev/mapper/<path>p?[0-9]+. The 'p' being used if <path> already ends in a number. No distro seems to use user_friendly_names=yes by default (in which case <...> would become mpath[0-9]+). And there's /dev/disk/by-name/ of course which would use the WWID if alias wasn't set. All this gives bootloaders like grub and installers a hard time (every distro seems to have similar patches to grub-install, etc.). I wonder if it wouldn't make sense to adopt a common naming scheme across distributions and probably reserve the whole /dev/mapper/mpath[0-9]+p[0-9]+ and /dev/mapper/mpath[0-9]-part[0-9]+ namespaces for multipath devices? Otherwise it might become increasingly harder for tools like parted to find out the partitions of a device (without the almost very same set of patches that every distro carries around). Any thoughts? Cheers, -- Guido
https://www.redhat.com/archives/dm-devel/2007-September/msg00005.html
CC-MAIN-2015-22
refinedweb
226
58.42
Hello, So recently I wrote a script that would change a user's custom role. Initially it was working, but recently it's broken and I'm not sure how to fix it. The error message I'm getting is a RuntimeError (400) "Unable to update user role. 'user' parameter required or invalid." Has anyone else been having this issue? A rough breakdown of the script is below. from arcgis.gis import GIS gis = GIS("","Admin Username,"Admin Password") Username = "A user from my organization" UserRole = "Custom user role id for role I want to change my user to" testUser = gis.users.get(Username) testUser.update_role(role = UserRole) print("Complete!") According to my script the break always happens at the update_role line and it's always that same RuntimeError. I've checked the documentation for the command and I don't see what I'm doing wrong, especially since it used to work just fine. The Admin account does have permissions to change account roles as well. You can still successfully change roles manually, just not through python. Any suggestions? Thanks much! I have the same problem. Every other function I've used works fine, but update_role fails with the same "'user' parameter required or invalid" error. I am using a slightly older arcgis Python library, 1.5.1, but that is what my company is using with its official ArcGIS Pro install so I'm pretty much stuck with it for now. I'm using the Python api to work with my company's ArcGIS Online domain: <domain>.maps.arcgis.com
https://community.esri.com/thread/233845-updaterole-not-working
CC-MAIN-2020-45
refinedweb
262
65.83
Install mezzanine into a virtualenv The default version of mezzanine available on PythonAnywhere is a little old. To get the latest version, you need to use a virtualenv. Also, take a look at this PythonAnywhere forum thread if you are using Mezzanine 4 or higher and running into problems. Creating a virtualenv Start by creating a virtualenv -- a "virtual environment" which has only the python packages you want, rather than the system default ones. This allows you to use the latest version of mezzanine. source virtualenvwrapper.sh mkvirtualenv mezzanine # or optionally, mkvirtualenv mezzanine --python=python3 to use Python 3 # you can also use --system-site-packages, see below - ASIDE: If you use --system-site-packages, you'll get all of the pythonanywhere batteries included system pacakges like numpy, scipy etc, available in your virtualenv. the difference will be that you will then be able to install your own, upgraded version over the top, which is what we're doing with the mezzanine package. You will now be "in" the virtualenv. You can tell whenever your virtualenv is active, because its name appears in the bash prompt: (mezzanine)15:18 ~ $ From this point on, you can use deactivate # to switch off the virtualenv workon mezzanine # to go back into the virtualenv pip install mezzanine # will show "downloading mezzanine... downloading django.. downloading requests etc # it may take several minutes for the install to complete. # check mezzanine has installed correctly pip freeze | grep -i mezzanine # this should show a recent version. If anything goes wrong, make sure you were "in" the virtualenv when you started. Did the prompt have the little (mezzanine)? If not, use workon mezzanine. Starting a mezzanine project Next is building your actual mezzanine site. Creating the project I'm using project_name as the project name, but you can substitute in whatever you want - as long as you do it everywhere! workon mezzanine mezzanine-project project_name cd project_name Setting a timezone Next you'll need to edit the settings.py for your project. Use the Files menu to navigate to project_name/settings.py, and then find the line that defines TIME_ZONE, and set it to something appropriate, eg: TIME_ZONE = 'Europe/London' Creating the database python manage.py createdb --noinput Creating a web app Go to the Web tab on PythonAnywhere, and click Start a new Web App. Choose Manual Configuration. virtualenv In the virtualenv section of the web tab, enter the path to your virtualenv: /home/yourusername/.virtualenvs/mezzanine in our example. WSGI configuration: sys.path + django settings module Once it's loaded, click on the link to your WSGI file, and edit it so that it looks a little like this: import os import sys # add project folder to path path = '/home/yourusername/project_name' if path not in sys.path: sys.path.append(path) # Remove any references to your home folder (this can break Mezzanine) while "." in sys.path: sys.path.remove(".") while "" in sys.path: sys.path.remove("") # specify django settings os.environ['DJANGO_SETTINGS_MODULE'] = 'project_name.settings' # load default django wsgi app for Django < 1.4 import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() # load default django wsgi app for Django >= 1.4 from django.core.wsgi import get_wsgi_application application = get_wsgi_application() Reload Web App Hit the big Reload Web App button, and click on the link to your site. You should now see a live site saying things like "Home" and "Congratulations", but the layout will look all broken, because the CSS isn't loading yet. Configuring static files Add a Static files entry on the Web tab Back on the Web tab, go to the Static Files section, and enter a static file with - URL: /static/ - path: /home/yourusername/project_name/static Run manage.py collectstatic Open, or re-open your Bash console, and run workon mezzanine python manage.py collectstatic Reload web app Back on the Web tab, hit the big Reload button, Now your site should be live, and looking good! Things to think about next - The default database for Mezzanine is SQLite. It's fine for testing, but you probably want to switch to MySQL for production use. Check out the UsingMySQL page. - You'll want to switch DEBUGto False, and that will need you need to fill in ALLOWED_HOSTSin settings.py Customising templates There's a few notes in this forum thread:
https://help.pythonanywhere.com/pages/HowtouseMezzanineonPythonAnywhere/
CC-MAIN-2018-30
refinedweb
720
56.25
Linux ACPI Custom Control Method How To¶ - Author Zhang Rui <[email protected]> Linux supports customizing ACPI control methods at runtime. Users can use this to: override an existing method which may not work correctly, or just for debugging purposes. insert a completely new method in order to create a missing method such as _OFF, _ON, _STA, _INI, etc. For these cases, it is far simpler to dynamically install a single control method rather than override the entire DSDT, because kernel rebuild/reboot is not needed and test result can be got in minutes. 注釈 Only ACPI METHOD can be overridden, any other object types like "Device", "OperationRegion", are not recognized. Methods declared inside scope operators are also not supported. The same ACPI control method can be overridden for many times, and it's always the latest one that used by Linux/kernel. To get the ACPI debug object output (Store (AAAA, Debug)), please run: echo 1 > /sys/module/acpi/parameters/aml_debug_output 1. override an existing method¶ get the ACPI table via ACPI sysfs I/F. e.g. to get the DSDT, just run "cat /sys/firmware/acpi/tables/DSDT > /tmp/dsdt.dat" disassemble the table by running "iasl -d dsdt.dat". rewrite the ASL code of the method and save it in a new file, package the new file (psr.asl) to an ACPI table format. Here is an example of a customized _SB._AC._PSR method: DefinitionBlock ("", "SSDT", 1, "", "", 0x20080715) { Method (\_SB_.AC._PSR, 0, NotSerialized) { Store ("In AC _PSR", Debug) Return (ACON) } } Note that the full pathname of the method in ACPI namespace should be used. assemble the file to generate the AML code of the method. e.g. "iasl -vw 6084 psr.asl" (psr.aml is generated as a result) If parameter "-vw 6084" is not supported by your iASL compiler, please try a newer version. mount debugfs by "mount -t debugfs none /sys/kernel/debug" override the old method via the debugfs by running "cat /tmp/psr.aml > /sys/kernel/debug/acpi/custom_method" 2. insert a new method¶ This is easier than overriding an existing method. We just need to create the ASL code of the method we want to insert and then follow the step c) ~ g) in section 1. 3. undo your changes¶ The "undo" operation is not supported for a new inserted method right now, i.e. we can not remove a method currently. For an overridden method, in order to undo your changes, please save a copy of the method original ASL code in step c) section 1, and redo step c) ~ g) to override the method with the original one. 注釈 We can use a kernel with multiple custom ACPI method running, But each individual write to debugfs can implement a SINGLE method override. i.e. if we want to insert/override multiple ACPI methods, we need to redo step c) ~ g) for multiple times. 注釈 Be aware that root can mis-use this driver to modify arbitrary memory and gain additional rights, if root's privileges got restricted (for example if root is not allowed to load additional modules after boot).
https://doc.kusakata.com/firmware-guide/acpi/method-customizing.html
CC-MAIN-2022-33
refinedweb
523
63.59
iCal, short for iCalendar, is an internet standard file format used to store calendar information. Being a standard format, it is compatible with most online calendars, giving you access to important dates regardless of your preferred client (Google Calendar, Outlook Calendar, Apple Calendar, etc.). Even popular online services use the iCal format to help their users remember important dates. Airbnb, for example, uses the iCal format to store room availability giving users the ability to export their Airbnb calendar and view it on an external calendar. In this tutorial, you’ll learn about the iCal format and how to create an iCal calendar feed using Lumen, a PHP micro-framework by Laravel that allows you to quickly build elegant APIs. Tutorial Requirements For this tutorial, you will need: - A PHP development environment - A global installation of Composer - A global installation of ngrok - A PostgreSQL Database - Postman The iCal Object Below is a sample iCal object: BEGIN:VCALENDAR VERSION:2.0 PRODID:-//hacksw/handcal//NONSGML v1.0//EN BEGIN:VEVENT UID:[email protected] DTSTAMP:19970610T172345Z DTSTART:19970714T170000Z DTEND:19970715T040000Z SUMMARY:Bastille Day Party END:VEVENT END:VCALENDAR The iCal object has three parts: begin, body and end. The iCal object must start with BEGIN:VCALENDAR and end with END:VCALENDAR. The body consists of a sequence of properties and one or more calendar components. In our example above, we have two properties, VERSION and PRODID. While there are many other properties, these two must be present in an iCal object in order for it to parse correctly. PRODID is your company details in the format: Business Name//Product Name//Language. VERSION is the current version of iCal. A calendar can have multiple components, each grouped inside a begin and end delimiter. A component is a collection of properties that express a particular calendar semantic. For example, the calendar component can specify an event, to-do, journal entry, time zone information, free/busy time information or an alarm. In our example above, the event properties are grouped inside the BEGIN:VEVENT and END:VEVENT delimiters. Here is some information on a few properties of an event component: - UID: A unique ID for the event (required). - DTSTAMP: The date the event was created (required). - DTSTART/DTEND: The start and end timestamps of an event in UTC format. - SUMMARY: The event title. For more information on the iCal object and its properties, please check out the official documentation. Setup Starting a Lumen Project Lumen is a fast PHP micro-framework by Laravel. This micro-framework makes it very easy to bootstrap a new project with the ability to handle up to 1900 requests per second. To start a new lumen project via Composer, run the following command: $ composer create-project --prefer-dist laravel/lumen iCal The last part of the command is the name of our project. I have named mine "iCal". Once we run the command, Composer downloads all required dependencies and prepares our application. Let’s check if everything worked as expected. We will start our application by running the command: $ php -S localhost:8000 -t public Our application is now served on localhost port 8000. On your browser, navigate to. You should see the version of lumen you’re using. At the time of writing this tutorial, it is version 5.7.4. Database and Environment Configuration The next step is to create a database and configure our connection to the database. Lumen supports four database systems: MySQL, PostgreSQL, SQLite and SQL Server. I am using PostgreSQL for this tutorial. I have created a database called events. When we created our Lumen application, a .env file was created for us. Once you have created your database, go ahead and update the database credentials in the .env file. Here is what your updated .env should look like: APP_ENV=local APP_DEBUG=true APP_KEY= APP_TIMEZONE=UTC LOG_CHANNEL=stack LOG_SLACK_WEBHOOK_URL= DB_CONNECTION=pgsql DB_HOST=127.0.0.1 DB_PORT=5432 DB_DATABASE=events DB_USERNAME=charlieoduk DB_PASSWORD= CACHE_DRIVER=file QUEUE_DRIVER=sync That is everything we need to do to connect our database. We’re making good progress, however, we have no way of testing our database connection just yet. Let’s go ahead and make a migration. Create a Migration Migrations allow you to build, modify and share the applications database schema with ease. For our events application, we need one table called tech_events. It should have the following columns: - id - name - starts - ends - status - summary - location - uid - created_at - Updated_at To create a migration we will use the make:migration Artisan command: $ php artisan make:migration create_tech_events_table The new migration is placed in the database/migrations folder. If you open up the newly created migration, this is what you get: <?php use Illuminate\Support\Facades\Schema; use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateTechEventsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('tech_events', function (Blueprint $table) { $table->increments('id'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::dropIfExists('tech_events'); } } Inside the up() method, we create a table and define its columns. The down() method reverses the migrations if we need to. Let’s go ahead and update the up() method with the columns we require. This is what we have: <?php use Illuminate\Support\Facades\Schema; use Illuminate\Database\Schema\Blueprint; use Illuminate\Database\Migrations\Migration; class CreateTechEventsTable extends Migration { /** * Run the migrations. * * @return void */ public function up() { Schema::create('tech_events', function (Blueprint $table) { $table->increments('id'); $table->string('name'); $table->timestamp('starts')->default(\DB::raw('CURRENT_TIMESTAMP')); $table->timestamp('ends')->default(\DB::raw('CURRENT_TIMESTAMP')); $table->string('status'); $table->text('summary'); $table->string('location'); $table->string('uid'); $table->timestamps(); }); } /** * Reverse the migrations. * * @return void */ public function down() { Schema::dropIfExists('tech_events'); } } In the above code we have added columns using this format $table->string('name'). This means we have a column called name and its datatype is a string. Now that we have defined our schema, let’s do a migration to check if it translates to our database. Run the artisan command: $ php artisan migrate Great! We have now set up our database, connected it and defined the schema for our tech_events table. Create the Tech Events Model We will use Laravel’s Eloquent ORM to interact with our database. To do so we will need to enable the Eloquent ORM by opening the file bootstrap/app.php and uncommenting these lines: <?php $app->withFacades(); $app->withEloquent(); Models are typically inside the app folder. In our app folder, we have a file called User.php. By default, Lumen comes with the User model when we create our application. If we had many models, a good practice would be to create a folder and store our models in it. Since we are only adding one more model, let us leave it in the app folder. Create the file app/TechEvents.php and add the following code in it: <?php namespace App; use Illuminate\Database\Eloquent\Model; class TechEvents extends Model { protected $table = 'tech_events'; /** * The attributes that are mass assignable. * * @var array */ protected $fillable = [ 'name', 'starts', 'ends', 'status', 'summary', 'location', 'uid' ]; /** * The rules for data entry * * @var array */ public static $rules = [ 'name' => 'required', 'starts' => 'required', 'ends' => 'required', 'status' => 'required', 'summary' => 'required', 'location' => 'required', 'uid' => 'required' ]; } We have defined our model. We can now interact with our database using Eloquent. In order to build our calendar feed, we will need some events in our database. Let’s go ahead and seed some data in our database. More on models can be found in the documentation. Seed Tech Events Data Lumen provides an easy way to add fake data using factories. We define our model factories in the database/factories/ModelFactory.php file. Let’s have a look at what our file looks like by default: <, ]; }); We can see a model factory is defined for the user model. It has the name and email fields. The Faker library is used to generate fake data for the fields. It is one of the dependencies that was installed when we created the application. You can confirm this by checking the composer.json file under "require-dev". Let us update the file by adding a model factory for the tech events model: <, ]; }); $factory->define(App\TechEvents::class, function (Faker\Generator $faker) { $startTime = Carbon::createFromTimeStamp($faker->dateTimeBetween('now', '+1 month')->getTimestamp()); return [ 'name' => $faker->word, 'starts' => $startTime, 'ends' => Carbon::createFromFormat('Y-m-d H:i:s', $startTime)->addHours(2), 'status' => 'CONFIRMED', 'summary' => $faker->sentence($nbWords = 6, $variableNbWords = true), 'location' => $faker->word, 'uid' => $faker->domainName, 'created_at' => Carbon::now()->toDateTimeString(), 'updated_at' => Carbon::now()->toDateTimeString() ]; }); We have imported Carbon, a PHP extension for DateTime. We have also declared a variable called $startTime. This generates a random date within a month from now. For this tutorial, our events will only last two hours. In the 'ends' field we add two hours to the $startTime. In order to seed our database with fake data, we need one more step. In the file database/seeds/DatabaseSeeder.php, we have one method called run(). When we run seeders, this is the method that is called. This method uses model factories to generate and insert data into the database. Let’s add our newly created model factory: <?php use Illuminate\Database\Seeder; use App\TechEvents; class DatabaseSeeder extends Seeder { /** * Run the database seeds. * * @return void */ public function run() { factory(TechEvents::class, 10)->create(); } } By calling factory(TechEvents::class, 10)->create() inside the run method, we are indicating that we would like to generate and insert ten records into our database. It is time to check if we have done everything correctly. Run the command: $ php artisan db:seed Your database should look like this: We have now successfully added data to our tech_events table. Building our iCal object The iCal Controller We need one controller for this tutorial. In the directory app/Http/Controllers. Create a controller called ICalController.php and add the following code: <?php namespace App\Http\Controllers; use App\TechEvents; class ICalController extends Controller { /** * Gets the events data from the database * and populates the iCal object. * * @return void */ public function getEventsICalObject() { $events = TechEvents::all(); define('ICAL_FORMAT', 'Ymd\THis\Z'); $icalObject = "BEGIN:VCALENDAR VERSION:2.0 METHOD:PUBLISH PRODID:-//Charles Oduk//Tech Events//EN\n"; // loop over events foreach ($events as $event) { $icalObject .= "BEGIN:VEVENT DTSTART:" . date(ICAL_FORMAT, strtotime($event->starts)) . " DTEND:" . date(ICAL_FORMAT, strtotime($event->ends)) . " DTSTAMP:" . date(ICAL_FORMAT, strtotime($event->created_at)) . " SUMMARY:$event->summary UID:$event->uid STATUS:" . strtoupper($event->status) . " LAST-MODIFIED:" . date(ICAL_FORMAT, strtotime($event->updated_at)) . " LOCATION:$event->location END:VEVENT\n"; } // close calendar $icalObject .= "END:VCALENDAR"; // Set the headers header('Content-type: text/calendar; charset=utf-8'); header('Content-Disposition: attachment; filename="cal.ics"'); $icalObject = str_replace(' ', '', $icalObject); echo $icalObject; } } In the getEventsICalObject() method, we start by using Eloquent’s all() method to fetch all the events from the database and store them in the $events variable. Secondly, we define the iCal format. This is used to transform our timestamps into UTC format. Then we begin building our iCal object by looping through each event and adding them to the iCal object. After closing the object we add the required headers. That includes the file name in .ics format. I have named mine "cal.ics". Lastly, we get rid of unwanted spaces which would otherwise make the .ics file invalid. Creating an Endpoint So far we have worked on the code that fetches the data and creates the iCal object. Since we still have no way of testing this code, we need to create an endpoint. Inside routes/web.php add this line of code: <?php $router->get("ical-events", "ICalController@getEventsICalObject"); We have just declared a route. The method to access this route is GET and the URL to hit the endpoint is. Let’s go ahead and test this out on Postman. Make sure your application is running on port 8000. If it isn’t, you can run the command: $ php -S localhost:8000 -t public Success! We are able to get our events into the iCal format. We need to test this out on different calendars to see if it works. In order to test it on Google and Outlook, we need our application to be accessible through a public URL. This is where ngrok comes in! While still running the application on localhost, open up a new terminal and run the command: $ ngrok http 8000 Test on Google and Outlook Google Calendar To test it out on google: On the left, click the + icon, next to "Add a friend’s calendar" input box. Select "From URL" then enter the URL you got from ngrok. Mine is. Your calendar should now have the events from your database. Outlook Calendar To test it on Outlook, on the top menu bar, click “Add Calendar”. Then click “From Internet”, to enter the URL and name your calendar. Click save. That’s it! Conclusion All done! We have successfully created an iCal feed and tested it out on Google and Outlook. You can now test it out on different calendars, including your local calendaring application such as Apple Calendar. Next Steps? You can create a scheduling application and add the option of downloading an iCal file or giving users a URL to access the iCal feed. Also, you can request users to opt into SMS reminders to their calendar using the Twilio SMS API. This tutorial on sending SMS reminders from PHP is a great place to start. I look forward to hearing about the amazing application you build. You can find the complete code on Github. You can reach me on: Github: charlieoduk Twitter: @charlieoduk Happy Coding!
https://www.twilio.com/blog/how-create-ical-calendar-feed-php-laravel-lumen
CC-MAIN-2020-50
refinedweb
2,271
57.47
I have a RecyclerView row layout like this <Layout> <BackgroundView> <ForegroundView> </Layout> I am using ItemTouchHelper to handle swipes (partial) on the foreground view like @Override public void onSwiped(RecyclerView.ViewHolder viewHolder, int direction) { adapter.onItemSwiped(viewHolder); } @Override public void onChildDraw(Canvas c, RecyclerView recyclerView, RecyclerView.ViewHolder viewHolder, float dX, float dY, int actionState, boolean isCurrentlyActive) { View foregroundView […]’m trying to implement a something like coverflow with recyclerview on android but i can’t find the way to do it(already searched on google). I know there is this library but is no longer maintained and the publisher said that we could implement this behavior with recyclerview, can someone point me to the right […] I implemented ItemDecoration into my RecyclerView along with an Animation that plays whenever the RV is loaded. However, I noticed that the decoration appears already at the bounds before the animation completes, and I want to have the decoration move in with the animation at the same time. How would I do this? So far, […] I don’t want my custom scrollbar thumb to grow and shrink with the list size. While it worked with ListView and GridView, I can’t find any way of doing it with RecyclerView. I could find fast scrollers for linear layout but they are not working with gridlayout. As for example, I am using RecyclerViewFastScroller library. […] I have been seeing very long load times for images in our app using Glide 3.6 + a RecyclerView using a GridLayoutManager. The images are all around 20kb-40kb. It seems that glide queues up all the requests which causes the images to take quite awhile to load once you start scrolling further down the list. […] just implemented Recyclerview in my code, replacing Listview. everything works fine. The objects are displayed. but logcat says 15:25:53.476 E/RecyclerView﹕ No adapter attached; skipping layout 15:25:53.655 E/RecyclerView﹕ No adapter attached; skipping layout for the code ArtistArrayAdapter adapter = new ArtistArrayAdapter(this, artists); recyclerView = (RecyclerView) findViewById(R.id.cardList); recyclerView.setHasFixedSize(true); recyclerView.setAdapter(adapter);“ recyclerView.setLayoutManager(new LinearLayoutManager(this)); as you can see I have […] I have a RecyclerView with a GridLayoutManager. I set a custom ItemDecoration: public class ListDetailsItemDecoration extends RecyclerView.ItemDecoration { private int space; public ListDetailsItemDecoration(int space) { this.space = space; } @Override public void getItemOffsets(Rect outRect, View view, RecyclerView parent, RecyclerView.State state) { int itemPosition = parent.getChildPosition(view); outRect.left = space; outRect.right = space; outRect.bottom = space; if(itemPosition […] need to somehow notify RecyclerView, when I drag and drop item from another RecyclerView onto it. Is it possible? Or should I use classic Drag and drop framework? RecyclerView with blue items is in one fragment and RecyclerView with red items is in other fragment. I also try to use ItemTouchHelper but it’s onMove() […]
http://babe.ilandroid.com/android/recyclerview/page/2
CC-MAIN-2018-26
refinedweb
466
50.02
Customize Chapter Page Headers This example shows how to customize chapter page headers that are generated by the Report API chapter reporter. You can customize chapter page headers for PDF and Microsoft® Word reports. The example generates a report for a fictitious company, ABC Services. The custom header contains the company logo, project name, and report date. The company logo is fixed in the header. The project name and report date are dynamic. They are created when the report is generated. The workflow is: Create a custom chapter reporter class. Modify the headers in the PDF or Word chapter template. Add properties to the custom reporter class for the dynamic content in the custom headers. Write a report program that uses the custom reporter and specifies values for the properties. Create a Custom Chapter Reporter Class Create a skeleton chapter reporter class by calling the mlreportgen.report.Chapter.customizeReporter method. Name the class Chapter and save it in a class folder inside of a package folder. mlreportgen.report.Chapter.customizeReporter('+abc/@Chapter'); This method call also copies the chapter template for each report type to the +abc/@Chapter/resources/templates folder. Modify the Headers in the Chapter Template for a PDF Report To modify the header definitions in the chapter template for a PDF report: Unzip the template file. Edit the header definitions in the docpart_templates.htmlfile using a text editor. Package the extracted files into the template file. Unzip the Template File Change the current folder to the folder that contains the PDF template file, default.pdftx and then unzip the file. cd('+abc/@Chapter/resources/templates/pdf'); unzipTemplate('default.pdftx'); The extracted files are in a folder named default. Copy the Image File Copy the image file for the company logo to the default/images folder. Use the image file, abc_logo.png, attached to this example, or your own image file. Edit the Header Definitions In the MATLAB® editor or a text editor, open docpart_templates.html, which is in the folder named default. To make the headers the same on all pages, in the Section1 dptemplate element, delete SectionFirstPageHeader and SectionEvenPageHeader from the list of header and footer templates. SectionDefaultPageHeader is now the template for all chapter page headers. Find the SectionDefaultPageHeader template. <dptemplate name = "SectionDefaultPageHeader"> <p class="SectionTitleHeader"><StyleRef style-</p> <hr/> </dptemplate> Replace it with this markup: <dptemplate name = "SectionDefaultPageHeader"> <table width="100%"> <tr style="border-bottom: 1pt solid #ccc;"> <td style="text-align:left; padding-bottom:2pt" valign="bottom"> <img src="images/abc_logo.png"></img></td> <td style="text-align:center; padding-bottom:2pt; font-family:arial; font-size: 10pt; font-weight: bold" valign="bottom"> <hole id="Project">PROJECT</hole></td> <td style="text-align:right; padding-bottom:2pt; font-family:arial; font-size: 10pt" valign="bottom"><hole id="Date">DATE</hole></td> </tr> </table> </dptemplate> The markup defines a one-row, three-column table with these characteristics: The table cells contain the company logo, a hole for the project name, and a hole for the report date. The contents of the first, second, and third cells are left-aligned, center-aligned, and right-aligned, respectively. The padding between the bottom of the cell and the cell contents is 2pt. The cell contents are aligned with the bottom of the cell. The font for the text in the second cell is Arial, 10pt, and bold. The font for the text in the third cell is Arial and 10pt. The bottom border of the row is visible. The table does not have borders because the HTML does not specify the borderproperty. Package the Extracted Files If you edited docpart_templates.html in the MATLAB Editor, close the file before you package the files. Make sure that you are in the + abc/@Chapter/resources/templates/pdf folder, which contains the folder named default. Package the files in default back into the original PDF template file. zipTemplate('default.pdftx','default'); Modify the Headers in the Chapter Template for a Word Report In Word, the Section1 template is used for chapters and top-level sections. The template is in the Quick Parts gallery. To modify the headers in the Section1 template: Open the template file in Word. Create a temporary copy of the Section1 template in the body of the template document. Specify whether all pages have the same header. Edit the headers. Save the modified Section1 template to the Quick Parts gallery. Delete the content from the body of the template document and save the template file. Open the Template File Navigate to +abc/@Chapter/resources/templates/docx. Open the template file by using one of these methods: In MATLAB, in the Current Folder pane, right-click the template file and click Open Outside MATLAB. Outside of MATLAB, right-click the template file and click Open. Do not double-click a Word template file to open it. Double-clicking the file opens a Word document file that uses the template. The template document opens to an empty page. Display Formatting Symbols To make paragraph and formatting symbols visible, on the Home tab, click the Show/Hide button . Copy the Section1 Template into the Template Document On the Insert tab, in the Text group, click Quick Parts, and then click the Section1 building block. In the template document, Word inserts a copy of the Section1 template and a dummy Section2 section. The dummy section is ignored when you generate a report. The cursor is in the dummy Section2. Scroll up to the Section1 template. To open the header, double-click it. Specify That All Pages Have the Same Header Under Header and Footer Tools, on the Design tab, clear the Different First Page check box. The header for pages that follow the first page is copied to the first page header. Add a Table to the Header On the Insert tab, Click Table. Move the pointer over the grid until you highlight three columns and one row. Delete the message Error! No text of specified style in document and the paragraph that contains it. The horizontal rule from the original header is the bottom border of a paragraph. To make it invisible, click the paragraph and then, on the Home tab, in the Paragraph group, click the arrow to the right of Borders and then click No Border. Add the Logo to the First Table Cell Copy and paste the company logo into the first table cell. Use the attached abc_logo.png file or your own image file. Add Holes for the Project and Date If the Developer ribbon is not available, click File > Options, and then click Customize Ribbon. Under Customize the Ribbon, select Developer. On the Developer tab, click Design Mode so that you can see hole marks with the title tag when you create a hole. In the second cell, add a few spaces before the entry mark so that the hole that you add is an inline hole. Click in front of the spaces. Click the Rich Text Content Control button . A rich text content control displays. Delete the spaces that you added. Replace the text in the control with text that identifies the hole, for example, Project Name. On the Developer tab, click Properties. In the Content Control Properties dialog box: in the Title field, enter Project. In the Tag field, enter Hole. Select the Use a style to format text typed into the empty control check box. Then, click New Style. In the Create New Style from Formatting dialog box, enter Project in the Name field. For the formatting, specify Arial, 10pt. Click the Bold button. Click OK. In the Content Control Properties dialog box, click OK. Add an inline hole for the date in the third cell. Use the steps that you used to add a hole to the second cell. Add a few spaces before the entry mark. Click in front of the spaces. Click the Rich Text Content Control button . Delete the spaces that you added. Replace the text in the control with text that identifies the hole, for example, Report Date. On the Developer tab, click Properties. In the Content Control Properties dialog box, enter Datein the Title field and Holein the Tag field. Select the Use a style to format text typed into the empty control check box. Then, click New Style. In the Create New Style from Formatting dialog box, enter ReportDate in the Name field. For the formatting, specify Arial and10pt. The table looks like: Specify the Table Cell Bottom Margin To specify the cell bottom margin: Select and right-click the table. Click Table Properties, then on the Table tab, click Options. In the Table Options dialog box, under Default cell margins, in the Bottom field, enter 2pt. Align the Cell Contents Left-align the logo in the first cell. Center the hole in the third cell. Right-align the hole in the third cell. To align cell contents: Select the image or hole in the table. On the Home tab, in the Paragraph group, click the Align Left, Center, or Align Right button. Make the Top and Side Borders of the Table Invisible To display only the bottom border of the table: Select the table. Under Table Tools, on the Design tab, click Borders > No Border. Click Borders > Bottom Border. Set the Line Weight to 1pt. Save the Custom Template Close the header by double-clicking the page outside of the header. To select all of the content in the template, press Ctrl+A. On the Insert tab, click Quick Parts, and then click Save Selection to Quick Parts Gallery. In the Create New Building Block dialog box, in the Name field, enter Section1. Set Gallery to Quick Parts, Category to mlreportgen, and Save in to default. Click OK. When you see the message Do you want to redefine the building block entry?, click Yes. Save the Template File It is a best practice to delete the content from the body of the template document before you save the template file. With the content selected, press Delete. To hide the formatting symbols, on the Home tab, in the Paragraph group, click the click the Show/Hide button . Save the template file. Add Properties to the Custom Reporter Class Because the custom header has holes for dynamic content, you must define properties for the holes. Navigate to the +abc/@Chapter folder. In the class definition file, Chapter.m, add the properties that correspond to the Project and Date holes in the header. Replace the empty properties section with: properties Project = '' Date = '' end Save the class file. Generate the Report Using the Custom Chapter Reporter When you generate the report, create the chapter reporter from the abc.Chapter class. Assign values to the properties that correspond to the holes in the header. Before running the following code, navigate to the folder that contains the +abc folder. Alternatively, to add the folder that contains the +abc folder to the MATLAB path: In the MATLAB Toolstrip, on the Home tab, in the Environment group, click Set Path. In the Set Path dialog box, click Add Folder. In the Add Folder to Path dialog box,click the folder and then click Select Folder. You cannot add packages to the MATLAB path. Add the folder that contains the package folder. Generate a PDF Report import mlreportgen.dom.* import mlreportgen.report.* report = Report("Consulting Report","pdf"); chapter = abc.Chapter(); chapter.Project = "Control Systems Consulting"; chapter.Date = date; chapter.Title="Overview"; add(chapter,"Chapter content goes here."); add(report,chapter); close(report); rptview(report); Generate a Word Report import mlreportgen.dom.* import mlreportgen.report.* report = Report("Consulting Report","docx"); chapter = abc.Chapter(); chapter.Project = "Control Systems Consulting"; chapter.Date = date; chapter.Title="Overview"; add(chapter,"Chapter content goes here."); add(report,chapter); close(report); rptview(report); See Also mlreportgen.report.Chapter | unzipTemplate | zipTemplate
https://la.mathworks.com/help/rptgen/ug/customize-chapter-page-headers-in-pdf-and-word-reports.html
CC-MAIN-2022-27
refinedweb
1,970
59.19
Hello everyone, I get a "Segmentation Fault" when I try to run this program. I think my problem is with the structs and enum. Specifially the enum but I could be wrong. I have enclosed my classes.h file and loadFile.c file. Code:#ifndef CLASSES_H_INCLUDED #define CLASSES_H_INCLUDED #include <stdio.h> #include <stdlib.h> typedef enum {MW, TR} days; typedef struct { int hour, min; } Time; typedef struct { char Dept[5]; int course, sect; days meet_days; Time start, end; char instr[20]; } sched_record; #endif // CLASSES_H_INCLUDEDThe file we are to use is a binary file and it is called classes.db. Now we have a text representation and it is as follows:The file we are to use is a binary file and it is called classes.db. Now we have a text representation and it is as follows:Code:#include "classes.h" void main() { FILE *filePointer; sched_record data; filePointer = fopen ("classes.db", "rb"); if (filePointer == NULL) { puts("Cant open file"); exit(1); } while (!feof(filePointer)) { if(fread(&data, sizeof(sched_record), 1, filePointer)!=0) { printf("\n%s %d %d %s %d %d %s", data.Dept, data.course, data.sect, data.meet_days, data.start, data.end, data.instr); fclose(filePointer); } } } Does anyone know how I might correct this? I appreciate any help.Does anyone know how I might correct this? I appreciate any help.Code:Math 102 10 M 0800 0850 Schulte Eng 033 1 T 0930 1050 Shakespeare Art 308 2 M 0800 1150 VanGogh Anth 055 13 T 1200 1325 Kroeber CS 125 2 T 0800 0850 Hoare Eng 202 10 M 1000 1050 Chaucer Chem 100 5 T 1100 1250 Pauling Phys 395 2 T 1200 1250 Einstein CS 125 4 M 1030 1120 Knuth Math 420 2 T 0800 0950 al-Khowarizmi
https://cboard.cprogramming.com/c-programming/147321-binary-file-work-using-structs-enum.html
CC-MAIN-2017-43
refinedweb
293
74.59
This notebook will play with some ideas of using Twitter data to model information about bat emergence. I went to the "bat bridge" twice while attending #scipy2014 in Austin, TX. There are over a million bats under that bridge, but I didn't get to see them come out en masse on either of our visits. Next time I'm in Austin, I'm hoping to go on nights where there is a high probability of bats swarming. My initial hypothesis is that wind and temperature have something to do with whether we get a swarm. If it's too windy, they don't want to fly up high and get blown about. If it's too hot (it's never too cold in Austin), then they are probably languid and will just order take-out that night. If the bats swarm, then people will probably tweet about the bats swarming, but that's not a given. No tweets could mean either no swarm, or that nobody felt like tweeting (must not be the SXSW week). So let's fake some data and then see what sort of parameters we can recover. from pymc import Bernoulli, Uniform, MCMC, Normal, Matplot, TruncatedNormal, deterministic, observed, poisson_like from sklearn.metrics import f1_score from sklearn.linear_model import LogisticRegression import numpy as np np.random.seed(45) from matplotlib import pyplot as plt %pylab inline Populating the interactive namespace from numpy and matplotlib NUM_DAYS = 300 temperatures = np.random.normal(85, 7, NUM_DAYS) temperatures /= temperatures.max() wind_speeds = np.random.normal(0, 15, NUM_DAYS).clip(.2) wind_speeds /= wind_speeds.max() plt.subplot(2,1,1) plt.hist(temperatures); plt.title('Temperatures') plt.subplot(2,1,2) plt.hist(wind_speeds) plt.title('Wind speeds') plt.tight_layout() # In generating our fake data, we'll just make some random score involving wind speed and temp. # The higher the grumpier! grumpy_bat_score = temperatures * 3 + wind_speeds * 2 plt.hist(grumpy_bat_score); Looking at that curve, I want to make the data so that if the score is below 2.4, you have a high chance of coming out and if it's above 3.2, you have a low chance. So lets make a sigmoid that does something like that. x = np.linspace(1.5, 4.5, 1000) widget_a, widget_b = -5, 2.8 random_numbers = np.random.rand(NUM_DAYS) bat_exit_days = None def plot_sigmoid(a=-5, center=2.8): global widget_a, widget_b, bat_exit_days b = -center * a widget_a, widget_b = a, b values = 1 / (1 + np.exp(-(x*a + b))) plt.plot(x, values) plt.ylim([0,1]) plt.xlim([1.5,4.5]) bat_exit_probability = 1 / (1 + np.exp(-(grumpy_bat_score * a + b))) bat_exit_days = random_numbers < bat_exit_probability print("Bats swarmed out of the bridge {} days out of {} (a = {} b = {})".format(bat_exit_days.sum(), NUM_DAYS, a, b)) Play with the widget and make up a probability distribution that looks good to you. Higher magnitude values of a controll the sharpness of the sigmoid function. The center is the midpoint of the distribution. from IPython.html.widgets import interact interact(plot_sigmoid, a=(-6, -1,.1), center=(1, 4,.1)) Bats swarmed out of the bridge 194 days out of 300 (a = -5.0 b = 14.0) <function __main__.plot_sigmoid> From playing with that widget you can find a value that you're happy with. I chose a distribution that had the bats leaving about a quarter of the days. Remember, this is still in fake land, we are just generating some data to see how well pymc can infer stuff about our real model. Let's actually do a logistic regression using the bat score and the simulated exits and see an upper bound on how good we can do with this modeling (for when we later throw in the twitter noise). wind_and_temp = np.vstack([wind_speeds, temperatures]).T wind_temp_model = LogisticRegression() wind_temp_model.fit(wind_and_temp, bat_exit_days) LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, penalty='l2', random_state=None, tol=0.0001) f1_score(wind_temp_model.predict(wind_and_temp), bat_exit_days) 0.85450346420323331 grumpy_model = LogisticRegression() grumpy_col = grumpy_bat_score.reshape((grumpy_bat_score.size,1)) grumpy_model.fit(grumpy_col, bat_exit_days) f1_score(grumpy_model.predict(grumpy_col), bat_exit_days) 0.86374133949191689 # Showing how the curves differ between the grumpy model and the generating model x = np.linspace(0, 10, 100) (grumpy_a, grumpy_b) = grumpy_model.raw_coef_.ravel() y_1 = 1 / (1 + np.exp(-(x * grumpy_a + grumpy_b))) plt.plot(x,y_1, label="modeled logistic curve") y_2 = 1 / (1 + np.exp(-(x * widget_a + widget_b))) plt.plot(x, y_2, label="actual logistic curve") plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.); The logistic curves look pretty good. When you increase the number of days that you sample, the modeled logistic curve sharpens (gets steeper in the middle) to match the actual logistic curve. Play with the notebook to see. It's cool. Now let's say we didn't fabricate all this data, instead we have temperatures, wind speeds, and bat sightings. Logistic regression will give a model that fits the data well, but it will just be one set of parameters to the sigmoid function. In the first step in our analysis (we'll get to Twitter), we'll instead model the logistic regression parameters as unknowns and see what sort of distributions the data impose on them. def first_bat_model(): wind_effect = Normal('wind_effect', -5, 2) temp_effect = Normal('temp_effect', -5, 2) offset = Normal('offset', -5, 2) @deterministic def swarm_prob(t=temp_effect, w=wind_effect, c=offset): return 1 / (1 + np.exp(-(temperatures*t + wind_speeds*w + c))) bat_swarm = Bernoulli('bat_swarm', swarm_prob, value=bat_exit_days, observed=True) return locals() M_1 = MCMC(first_bat_model()) M_1.sample(iter=1000000, burn=10000, thin=100) [-----------------100%-----------------] 1000000 of 1000000 complete in 142.4 sec Matplot.plot(M_1) Plotting temp_effect Plotting offset Plotting wind_effect predictions = (M_1.swarm_prob.value > .5) f1_score(predictions, bat_exit_days) 0.8571428571428571 So this model does almost as well logistic regression above, even just taking the last sample from the sampling, and we get our snazzy distributions for the parameters. Should really be cross validating this stuff to see if we're actually being predictive and all though. Now we are going to assume that we no longer have real knowledge of whether the bats leave or not. Instead we look at tweets that occur when the bats leave the bridge. If there are zero tweets it could mean that there were no bats. It could also mean that the crowd just doesn't have many people addicted to social media. Looking over the last few days, it seems like when there are tweets about the bats showing up, there are usually 2,3 or 4. So given that bats occur, I think that a Poisson distribution with $\lambda = 3$ sounds fine. tweets = np.random.poisson(lam=3, size=NUM_DAYS) * bat_exit_days def second_bat_model(): wind_effect = Normal('wind_effect', -5, 1) temp_effect = Normal('temp_effect', -5, 1) offset = Normal('offset', 0, 1) @deterministic def swarm_prob(t=temp_effect, w=wind_effect, c=offset): return 1 / (1 + np.exp(-(temperatures*t + wind_speeds*w + c))) avg_tweets_when_sighted = TruncatedNormal('avg_tweets_when_sighted', 2, 1, a=.2, b=20) # For some reason, just doing .asdtype(np.bool) was giving me things that didn't # work with np.select down below (tweets_bool, no_tweets_bool) = (tweets > 0, tweets == 0) # Zero inflated poisson, as sometimes we get no tweets because of no bats, # other times because people just didn't want to tweet # Lifted from Fonnesbeck: @observed(dtype=int, plot=False) def zippo(value=tweets, s=swarm_prob, l=avg_tweets_when_sighted): prob_if_no_tweets = np.log((1-s) + s * np.exp(-l)) prob_if_tweets = np.log(s) + poisson_like(value, l) v = np.select([no_tweets_bool, tweets_bool], [prob_if_no_tweets, prob_if_tweets]) return v.sum() return locals() M_2 = MCMC(second_bat_model()) M_2.sample(iter=2000000, burn=20000, thin=100) [-----------------100%-----------------] 2000000 of 2000000 complete in 1103.9 sec Matplot.plot(M_2) Plotting temp_effect Plotting wind_effect Plotting avg_tweets_when_sighted Plotting offset predictions = (M_2.swarm_prob.value > .5) (predictions == bat_exit_days).sum() 237 f1_score(predictions, bat_exit_days) 0.85517241379310349 The f1 score is about as good as training the model on the actual bat exits as opposed to the data with the noise of Twitter included. The next step here is to start scraping bat tweets until next years conference so that I have a good chance of seeing them! If you want to talk about any of this, e-mail me at [email protected] or tweet to @justinvf. I will probably be creating a text classifier for the tweets so that adds another layer of complexity. Not only will the tweets be randomly dependent on the bats, but the classifier will be outputting a probability that the tweet is even about bats. It seems like it should fit naturally within the whole framework though. A big huge thanks to @fonnesbeck for giving the pymc tutorial at this years SciPy and finding the bug in the zippo function (had commented out the decorator accidentally). Also, thanks to @Cmrn_DP for writing Probabilistic Programming and Bayesian Methods for Hackers
https://nbviewer.ipython.org/github/justinvf-zz/algorithmshop/blob/master/20140716-mcmc-bats/mcmc_bat_moddeling.ipynb
CC-MAIN-2022-27
refinedweb
1,455
59.5
I found this code on IBM's website, it was a training session on servers and clients using java. The code compiles fine and the server seems to start up properly when I use java Server 5000. I think whats happening is the server is running and listening for a connection on port 5000. When I try to run the client I get the following error. Exception in thread "main" java.lang.NoSuchMethodError: main I see a start() method but no main. As far as I know, applications should all have main, it seems as if the person who wrote this kinda confused applets with application. Not that I would really know what happened. If you have time, could you tell me if there's an easy fix for this? I would love to have this client/server working if it isn't too much trouble. As I have looked all over the net for a free client/server applet that will actually let me see the java code and none of the free ones do allow getting to their source. Most of them allow you to customize them somewhat but also have built in advertising that can't be removed. This is the closest I have come to finding one that lets me look under the hood. But alas it doesn't work out of the box and I don't know what to do to fix it. Heres the code: Server: Code: import java.io.*; import java.net.*; import java.util.*; public class Server { // The ServerSocket we'll use for accepting new connections private ServerSocket ss; // A mapping from sockets to DataOutputStreams. This will // help us avoid having to create a DataOutputStream each time // we want to write to a stream. private Hashtable outputStreams = new Hashtable(); // Constructor and while-accept loop all in one. DataOutputStream for writing data to the // other side DataOutputStream dout = new DataOutputStream( s.getOutputStream() ); // Save this stream so we don't need to make it again outputStreams.put( s, dout ); // Create a new thread for this connection, and then forget // about it new ServerThread( this, s ); } } // Get an enumeration of all the OutputStreams, one for each client // connected to us Enumeration getOutputStreams() { return outputStreams.elements(); } // Send a message to all clients (utility routine) void sendToAll( String message ) { // We synchronize on this because another thread might be // calling removeConnection() and this would screw us up // as we tried to walk through the list synchronized( outputStreams ) { // For each client ... for (Enumeration e = getOutputStreams(); e.hasMoreElements(); ) { // ... get the output stream ... DataOutputStream dout = (DataOutputStream)e.nextElement(); // ... and send the message try { dout.writeUTF( message ); } catch( IOException ie ) { System.out.println( ie ); } } } } // Remove a socket, and it's corresponding output stream, from our // list. This is usually called by a connection thread that has // discovered that the connectin to the client is dead. void removeConnection( Socket s ) { // Synchronize so we don't mess up sendToAll() while it walks // down the list of all output streamsa synchronized( outputStreams ) { // Tell the world System.out.println( "Removing connection to "+s ); // Remove it from our hashtable/list outputStreams.remove( s ); // Make sure it's closed try { s.close(); } catch( IOException ie ) { System.out.println( "Error closing "+s ); ie.printStackTrace(); } } } // Main routine // Usage: java Server <port> static public void main( String args[] ) throws Exception { // Get the port # from the command line int port = Integer.parseInt( args[0] ); // Create a Server object, which will automatically begin // accepting connections. new Server( port ); } } Thanks for your time.Thanks for your time() ); // Over and over, forever ... while (true) { // ... read the next message ... String message = din.readUTF(); // ... tell the world ... System.out.println( "Sending "+message ); // ... and have the server send it to all clients server.sendToAll( message ); } } catch( EOFException ie ) { // This doesn't need an error message } catch( IOException ie ) { // This does; tell the world! ie.printStackTrace(); } finally { // The connection is closed for one reason or another, // so have the server dealing with it server.removeConnection( socket ); } } }
http://forums.devx.com/printthread.php?t=153571&pp=15&page=1
CC-MAIN-2015-48
refinedweb
658
66.13
GETSID(2) BSD Programmer's Manual GETSID(2) getsid - get process session #include <unistd.h> pid_t getsid(pid_t pid); The session ID of the process identified by pid is returned by getsid(). If pid is zero, getsid() returns the session ID of the current process. Upon successful completion, the function getsid() returns the session ID of the specified process; otherwise, it returns a value of -1 and sets errno to indicate an error. getsid() will succeed unless: [EPERM] The current process and the process pid are not in the same session. [ESRCH] There is no process with a process ID equal to pid. getpgid(2), getpgrp(2), setpgid(2), setsid(2), termios(4) The getsid() function call is derived from its usage in AT&TNon-Null Sys- tem V UNIX, and is mandated by X/Open Portability Guide Issue 4 ("XPG4")..
https://www.mirbsd.org/htman/sparc/man2/getsid.htm
CC-MAIN-2015-40
refinedweb
141
61.36
fredrikj.net / blog / Hypergeometric 2F1, incomplete beta, exponential integrals June 11, 2009 One of the classes of functions I’m currently looking to improve in mpmath is the hypergeometric functions; particularly 1F1 (equivalently the incomplete gamma function) and the Gauss hypergeometric function 2F1. For example, the classical orthogonal polynomials (Legendre, Chebyshev, Jacobi) are instances of 2F1 with certain integer parameters, and 2F1 with noninteger parameters allows for generalization of these functions to noninteger orders. Other functions that can be reduced to 2F1 include elliptic integrals (though mpmath uses AGM for these). With a good implementation of 2F1, these functions can be implemented very straightforwardly without a lot of special-purpose code to handle all their corner cases. Numerical evaluation of 2F1 is far from straightforward, and the hyp2f1 function in mpmath used to be quite fragile. The hypergeometric series only converges for |z| < 1, and rapidly only for |z| << 1. There is a transformation that replaces z with 1/z, but this leaves arguments close to the unit circle which must be handled using further transformations. As if things weren't complicated enough, the transformations involve gamma function factors that often become singular even when the value of 2F1 is actually finite, and obtaining the correct finite value involves appropriately cancelling the singularities against each other. After about two days of work, I’ve patched the 2F1 function in mpmath to the point where it should finally work for all complex values of a, b, c, z (see commits here). I’m not going to bet money that there isn’t some problematic case left unhandled, but I’ve done tests for many of the special cases now. The following is a very simple example that previously triggered a division by zero but now works: >>> print hyp2f1(3,-1,-1,0.5) 2.5 The following previously returned something like -inf + nan*j, due to incorrect handling of gamma function poles, but now works: >>> print hyp2f1(1,1,4,3+4j) (0.492343840009635 + 0.60513406166124j) >>> print (717./1250-378j/625)-(6324./15625-4032j/15625)*log(-2-4j) # Exact (0.492343840009635 + 0.60513406166124j) Evaluation close to the unit circle used to be completely broken, but should be fine now. A simple test is to integrate along the unit circle: >>> mp.dps = 25 >>> a, b, c = 1.5, 2, -4.25 >>> print quad(lambda z: hyp2f1(a,b,c,exp(j*z)), [pi/2, 3*pi/2]) (14.97223917917104676241015 + 1.70735170126956043188265e-24j) Mathematica gives the same value: In[17]:= NIntegrate[Hypergeometric2F1[3/2,2,-17/4,Exp[I z]], {z, Pi/2, 3Pi/2}, WorkingPrecision->25] -26 Out[17]= 14.97223917917104676241014 - 3.514976640925973851950882 10 I Finally, evaluation at the singular point z = 1 now works and knows whether the result is finite or infinite: >>> print hyp2f1(1, 0.5, 3, 1) 1.333333333333333333333333 >>> print hyp2f1(1, 4.5, 3, 1) +inf As a consequence of these improvements, several mpmath functions (such as the orthogonal polynomials) should now work for almost all complex parameters as well. The improvements to 2F1 also pave the way for some new functions. One of the many functions that can be reduced to 2F1 is the generalized incomplete beta function: An implementation of this function (betainc(a,b,x1,x2)) is now available in mpmath trunk. I wrote the basics of this implementation a while back, but it was nearly useless without the recent upgrades to 2F1. Evaluating the incomplete beta function with various choices of parameters proved useful to identify and fix some corner cases in 2F1. One important application of the incomplete beta integral is that, when regularized, it is the cumulative distribution function of the beta distribution. As a sanity check, the following code successfully reproduces the plot of several beta CDF:s on the Wikipedia page for the beta distribution (I even got the same colors!): def B(a,b): return lambda t: betainc(a,b,0,t,regularized=True) plot([B(1,3),B(0.5,0.5),B(5,1),B(2,2),B(2,5)], [0,1]) The betainc function is superior to manual numerical integration because of the numerically hairy singularities that occur at x = 0 and x = 1 for some choices of parameters. Thanks to having a good 2F1 implementation, betainc gives accurate results even in those cases. The betainc function also provides an appropriate analytic continuation of the beta integral, internally via the analytic continuation of 2F1. Thus the beta integral can be evaluated outside of the standard interval [0,1]; for parameters where the integrand is singular at 0 or 1, this is in the sense of a contour that avoids the singularity. It is interesting to observe how the integration introduces branch cuts; for example, in the following plot, you can see that 0 is a branch point when the first parameter is fractional and 1 is a branch point when the second parameter is fractional (when both are positive integers, the beta integral is just a polynomial, so it then behaves nicely): # blue, red, green plot([B(2.5,2), B(3,1.5), B(3,2)], [-0.5,1.5], [-0.5,1.5]) To check which integration path betainc “uses”, we can compare with numerical integration. For example, to integrate from 0 to 1.5, we can choose a contour that passes through +i (in the upper half plane) or -i (in the lower half plane): >>> mp.dps = 25 >>> print betainc(3, 1.5, 0,) The sign of the imaginary part shows that betainc gives the equivalent of a contour through the lower half plane. The convention turns out to agree with that used by Mathematica: In[10]:= Beta[0, 1.5, 3, 1.5] Out[10]= 0.152381 + 0.402377 I I’ll round things up by noting that I’ve also implemented the generalized exponential integral (the En-function) in mpmath as expint(n,z). A sample: >>> print expint(2, 3.5) 0.005801893920899125522331056 >>> print quad(lambda t: exp(-3.5*t)/t**2, [1,inf]) 0.005801893920899125522331056 The En-function is based on the incomplete gamma function, which is based on the hypergeometric series 1F1. These functions are still slow and/or inaccurate for certain arguments (in particular, for large ones), so they will require improvements along the lines of those for 2F1. Stay tuned for progress. In other news, mpmath 0.12 should be in both SymPy and Sage soon. With this announcement I’m just looking for an excuse to tag this post with both ‘sympy’ and ‘sage’ so it will show up on both Planet SymPy and Planet Sage :-) Posts purely about mpmath development should be relevant to both audiences though, I hope.
http://fredrikj.net/blog/2009/06/hypergeometric-2f1-incomplete-beta-exponential-integrals/
CC-MAIN-2017-13
refinedweb
1,116
53.61
#include <sys/types.h> #include <sys/time.h> #include <sys/resource.h> #include <sys/wait.h> pid_t wait3(int *status, int options, struct rusage *rusage); pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage); pid_t wait3(int *status, int options, struct rusage *rusage); pid_t wait4(pid_t pid, int *status, int options, struct rusage *rusage);); wait3(status, options, rusage); is equivalent to: waitpid(-1, status, options); waitpid(-1, status, options); Similarly, the following wait4() call: wait4(pid, status, options, rusage); wait4(pid, status, options, rusage); waitpid(pid, status, options);. As for waitpid(2). Including <sys/time.h> is not required these days, but increases portability. (Indeed, <sys/resource.h> defines the rusage structure with fields of type struct timeval defined in <sys/time.h>.) The prototype for these functions is only available if _BSD_SOURCE is defined. 4.3BSD fork (2) getrusage (2) sigaction (2) signal (2) wait (2) Advertisements
http://www.tutorialspoint.com/unix_system_calls/wait3.htm
CC-MAIN-2016-50
refinedweb
150
57.47
Red Hat Bugzilla – Bug 74485 Existing semaphore (0) is modified at Apache 1.3.23-14 startup Last modified: 2007-04-18 12:46:52 EDT From Bugzilla Helper: User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows 98) Description of problem: When my Linux server boots up, my application (Samsung Contact) is started first. This creates 2 semaphores with id=0 and id=32769. When Apache starts up, it changes the permissions on semaphore 0 from 666 to 600 and ownership from openmail to apache. This causes errors to be generated within Samsung Contact because a daemon can no longer operate on the semaphore it created. If Samsung Contact is stopped and restarted, it's semaphores are created with non-zero ids and subsequent restarts of Apache do not cause a problem. A strace on /usr/sbin/httpd shows that there are 3 calls to semctl to change the ownership. However, within the strace, there are only ever 2 semgets. Somehow, the daemon is also calling semctl on semaphore 0. Trying to track this down with the 1.3.23-14 source RPM, there is no code that matches the behaviour seen in the strace. There are 2 calls to semctl for each semid: 1 to with IPC_STAT and 1 with IP_SET where the permissions and ownership are changed. There are calls to semctl in http_main.c but they do not match up with the strace output. Attempting to compile the source with debugging symbols turned on does not seem to reproduce the problem. (Help would be appreciated on enabling debugging successfully :-) ) Compiling the source from the SRPM, will reproduce the problem. We are also seeing that after Apache has been shut down, the semaphores it created are being left. Version-Release number of selected component (if applicable): How reproducible: Always Steps to Reproduce: 1) Stop Apache or boot up without Apache running. 2) Create a semaphore using the attached C program. If this is run just after boot up, it should have id 0. If not, modify the program so that semid=0 when the call to semctl with IPC_SET is made. 3) Run ipcs -s to confirm the permissions and ownership. 4) Start up Apache. 5) Run ipcs -s to see that the permissions and ownership of semaphore 0 have changed. Expected Results: Apache should not be "stomping" over an existing semaphore. Additional info: #include <stdio.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/sem.h> #if defined(__GNU_LIBRARY__) && !defined(_SEM_SEMUN_UNDEFINED) /* union semun is defined by including <sys/sem.h> */ #else /* according to X/OPEN we have to define it ourselves */ union semun { int val; /* value for SETVAL */ struct semid_ds *buf; /* buffer for IPC_STAT, IPC_SET */ unsigned short *array; /* array for GETALL, SETALL */ /* Linux specific part: */ struct seminfo *__buf; /* buffer for IPC_INFO */ }; #endif int main(int argc, char *argv[]) { int semid; char c; union semun arg; struct semid_ds buf; semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666); buf.sem_perm.uid = 100; buf.sem_perm.gid = 11; arg.buf = &buf; semctl(semid,1, IPC_SET, arg); // semctl(semid, 1, IPC_RMID); exit(0); } *** This bug has been marked as a duplicate of 74496 ***
https://bugzilla.redhat.com/show_bug.cgi?id=74485
CC-MAIN-2018-09
refinedweb
523
66.84
Dodo, a message bar for iOS / SwiftDodo, a message bar for iOS / Swift This is a UI widget for showing text messages in iOS apps. It is useful for showing short messages to the user, something like: "Message sent", "Note saved", "No Internet connection". - Dodo includes styles for success, info, warning and error type messages. - The bar can have buttons with custom tap handlers. - Bar styles can be customized. - You can provide custom animations for showing and hiding the bar or use one of the default animation presets. - Supports iOS 9+. At last the Dodo said, `EVERYBODY has won, and all must have prizes.' From Alice's Adventures in Wonderland. Original illustration by John Tenniel, 1865. Source: Wikimedia Commons. SetupSetup There are three ways you can add Dodo to your project. Add source Simply add DodoDistrib.swift file into your Xcode project. Setup with Carthage Add github "evgenyneu/Dodo" ~> 13.0 to your Cartfile and run carthage update. Setup with CocoaPods If you are using CocoaPods add this text to your Podfile and run pod install. use_frameworks! target 'Your target name' pod 'Dodo', '~> 13.0' Legacy Swift versionsLegacy Swift versions Setup a previous version of the library if you use an older version of Swift. UsageUsage Add import Dodo to your source code if you used Carthage or CocoaPods setup methods. Dodo is an extension of UIView class. You can reach it by using using the dodo property in any instance of UIView or its subclass. It can be, for example, the view property of your view controller. Show and hide message barShow and hide message bar view.dodo.success("Everybody has won and all must have prizes.") view.dodo.info("Extinction is the rule. Survival is the exception.") view.dodo.warning("This world is but a canvas to our imagination.") view.dodo.error("The perception of beauty is a moral test.") view.dodo.hide() If you are showing the bar in the root view you may need to provide top or bottom anchors. This will prevent the message bar from overlapping with the status or the tab bar. view.dodo.topAnchor = view.safeAreaLayoutGuide.topAnchor view.dodo.bottomAnchor = view.safeAreaLayoutGuide.bottomAnchor view.dodo.success("I solemnly swear to avoid the notch.") Alternatively, you can specify the anchors from the layout guides: view.dodo.topAnchor = topLayoutGuide.bottomAnchor view.dodo.bottomAnchor = bottomLayoutGuide.topAnchor view.dodo.success("I solemnly swear to avoid the notch.") StylingStyling Set dodo.style property to style the message bar before it is shown. See the styling manual for the complete list of configuration options. // Set the text color view.dodo.style.label.color = UIColor.white // Set background color view.dodo.style.bar.backgroundColor = DodoColor.fromHexString("#00000090") // Close the bar after 3 seconds view.dodo.style.bar.hideAfterDelaySeconds = 3 // Close the bar when it is tapped view.dodo.style.bar.hideOnTap = true // Show the bar at the bottom of the screen view.dodo.style.bar.locationTop = false // Do something on tap view.dodo.style.bar.onTap = { /* Tapped on the bar */ } Add buttons or iconsAdd buttons or icons Set style.leftButton and style.rightButton properties to show buttons or icons. As with other style properties please style the buttons before the message is shown. // Use a built-in icon view.dodo.style.leftButton.icon = .close // Supply your image view.dodo.style.leftButton.image = UIImage(named: "CloseIcon") // Change button's image color view.dodo.style.leftButton.tintColor = DodoColor.fromHexString("#FFFFFF90") // Do something on tap view.dodo.style.leftButton.onTap = { /* Button tapped */ } // Close the bar when the button is tapped view.dodo.style.leftButton.hideOnTap = true Customize animationCustomize animation Configure the animation effect of the bar before it is shown. See the animation wiki page for more information. // Use existing animations view.dodo.style.bar.animationShow = DodoAnimations.rotate.show view.dodo.style.bar.animationHide = DodoAnimations.slideRight.hide // Turn off animation view.dodo.style.bar.animationShow = DodoAnimations.noAnimation.show Unit testingUnit testing Sometimes it is useful to verify which messages were shown by your app in unit tests. It can be done by setting an instance of DodoMock class to view.dodo property. See the unit testing manual for more details. Known limitationsKnown limitations - Dodo messages can not be shown in a UITableViewController. Using Dodo from Objective-CUsing Dodo from Objective-C This manual describes how to show Dodo messages in Objective-C apps. Demo iOS appDemo iOS app This project includes a demo app. Thanks 👍 - sai-prasanna for Swift 2.2 update. Quotes creditsQuotes credits Albert EinsteinAlbert Einstein Information is not knowledge. Carl SaganCarl Sagan Extinction is the rule. Survival is the exception. George S. PattonGeorge S. Patton Success is how high you bounce when you hit bottom. Henry David ThoreauHenry David Thoreau This world is but a canvas to our imagination. The perception of beauty is a moral test. Joe NamathJoe Namath When you win, nothing hurts. Lewis CarrollLewis Carroll Everybody has won and all must have prizes. Malcolm ForbesMalcolm Forbes Failure is success if we learn from it. William BlakeWilliam Blake If the doors of perception were cleansed everything would appear to man as it is, Infinite. Alternative solutionsAlternative solutions Here are some other message bar libraries for iOS: - cezarywojcik/CWStatusBarNotification - frankdilo/FDStatusBarNotifierView - jaydee3/JDStatusBarNotification - KrauseFx/TSMessages - peterprokop/SwiftOverlays - terryworona/TWMessageBarManager LicenseLicense Dodo is released under the MIT License. •ᴥ••ᴥ• This project is dedicated to the dodo, species of flightless birds that lived on the island of Mauritius and became extinct in the 17th century.
https://cocoapods.org/pods/Dodo
CC-MAIN-2019-35
refinedweb
906
52.46
Python 1.5 Reference Manual string argument passed to the built-in function eval and to the exec statement are code blocks. The file read by the built-in function execfile name spaces, the local and the global name space, that affect execution of the code block. A name space is a mapping from names (identifiers) to objects. A particular name space may be referenced by more than one execution frame, and from other places as well. Adding a name to a name space is called binding a name (to an object); changing the mapping of a name is called rebinding; removing a name is unbinding. Name spaces are functionally equivalent to dictionaries (and often implemented as dictionaries). The local name space of an execution frame determines the default place where names are defined and searched. The global name space specified name space; global names are searched only in the global and built-in namespace.[1] A target occurring in a del statement is also considered bound for this purpose (though the actual semantics are to "unbind" the name). When a global name is not found in the global name space, it is searched in the built-in namespace. The built-in namespace associated with the execution of a code block is actually found by looking up the name __builtins__ is its global name space; this should be a dictionary or a module (in the latter case its dictionary is used). Normally, the __builtins__ namespace is the dictionary of the built-in module __builtin__ (note: no 's'); if it isn't, restricted execution mode is in effect, see [Ref:XXX]. When a name is not found at all, a NameError exception is raised. The following table lists the local and global name space used for all types of code blocks. The name space for a particular module is automatically created when the module is first imported. Note that in almost all cases, the global name space is the name space of the containing module -- scopes in Python do not nest!Notes: n.s. means name space (1) The main module for a script is always called __main__; ''the filename don't enter into it.'' (2) The global and local name space for these can be overridden with optional extra arguments. (3) The exec statement and the eval() and execfile() functions have optional arguments to override the global and local namespace. If only one namespace is specified, it is used for both. The built-in functions globals() and locals() returns a dictionary representing the current global and local name space, respectively. The effect of modifications to this dictionary on the name space are undefined.[2] the offending piece of code from the top). When an exception is not handled at all, the interpreter terminates execution of the program, or returns to its interactive main loop. In this case, the interpreter normally prints a stack backtrace. Exceptions are identified by string and raise statements in "Compound statements" on page 47. Generated with Harlequin WebMaker
http://docs.python.org/release/1.5/ref/ref-6.html
crawl-003
refinedweb
502
61.67
1. Main welcome to the shop menu to select which type of shopping you'd like to do 2. Attack skills menu 3. Other skills menu 4. Upgrade menu Once the player has completed the shopping they wished to do I use a Break command to back out into the previous menu. Before I started adding submenus the process worked out well enough. Now, however, the break commands generate errors. Specifically: SyntaxError: 'break' outside loop Here is the relevant coding, although I suspect the issue is a misunderstanding on my part and not the code itself: - Code: Select all # The main menu of the shop def shop(exp): shopping = 'yes' print("Welcome to the Shop.") shopping = input("1 for attacks, 2 for skills> ") while shopping == '1': ATTACKS_menu(exp) while shopping == '2': SKILLZ_menu(exp) # The Attacks purchasing menu def ATTACKS_menu(exp): num = 1 for i in ATTACKZ: # format: "1 ) Name, Cost" print(num, ")", str([i[0]])[2:-2], [i[4]], "exp\n") num += 1 print("What would you like to buy?") print("When you've finished shopping type '0'.") choice = int(input("> ")) - 1 # -1 to compensate for numbering if choice == -1: break # this is where the error seems to come from else: purchase = choice buying(ATTACKZ[purchase][4], exp, purchase) I've left out "buying()" and "ATTACKZ" because I don't believe they're related to the issue. Oh, and of course "shop()" is referenced further on in the coding. I believe that is everything. Questions, explanations, and suggestions are all welcome. Thanks in advance, -Wommbatt
http://www.python-forum.org/viewtopic.php?p=11289
CC-MAIN-2017-22
refinedweb
255
63.19
Created on 2015-07-26 10:20 by Mark.Shannon, last changed 2015-12-23 14:16 by serhiy.storchaka. Setting an item in an ordered dict via dict.__setitem__, or by using it as an object dictionary and setting an attribute on that object, creates a dictionary whose repr is: OrderedDict([<NULL>]) Test case attached. Linking related issues and Attached revised file that runs to completion on 2.7 and 3.x. Marco, #-prefixed issue numbers like this, #24721, #24667, and #24685, are easier to read. There is a bug in _PyObject_GenericSetAttrWithDict() Objects/object.c where a calls are made to PyDict_SetItem() and PyDict_DelItem() without checking first checking for PyDict_CheckExact(). * In PEP 372, OrderedDict was consciously specified to be a subclass of regular dicts in order to improve substitutability for dicts in most existing code. That decision had some negative consequences as well. It is unavoidable the someone can call the parent class directly and undermine the invariants of the subclass (that is a fact of life for all subclasses that maintain their own state while trying to stay in-sync with state in the parent class -- see for an example). With pure python code for the subclass, we say, "don't do that". I'll add a note to that effect in the docs for the OD (that said, it is a general rule that applies to all subclasses that have to stay synchronized to state in the parent). In C version of the OD subclass, we still can't avoid being bypassed (see) and having our subclass invariants violated. Though the C code can't prevent the invariants from being scrambled it does have an obligation to not segfault and to not leak something like "OrderedDict([<NULL>])". Ideally, if is possible to detect an invalid state (i.e. the linked link being out of sync with the inherited dict), then a RuntimeError or somesuch should be raised. FTR, this will likely involve more than just fixing odict_repr(). __repr__() allocates a list with the size len(od) and fills it iterating linked list. If the size of linked list is less then the size of the dict, the rest of the list is not initialized. Even worse things happened when the size of linked list is greater then the size of the dict. Following example causes a crash: from collections import OrderedDict od = OrderedDict() class K(str): def __hash__(self): return 1 od[K('a')] = 1 od[K('b')] = 2 print(len(od), len(list(od))) K.__eq__ = lambda self, other: True dict.__delitem__(od, K('a')) print(len(od), len(list(od))) print(repr(od)) Proposed patch fixes both issues. Ping. Review posted. Aside from a couple minor comments, LGTM. Thanks for doing this. Incidentally, it should be possible to auto-detect independent changes to the underlying dict and sync the odict with those changes. However, doing so likely isn't worth it. New changeset 88d97cd99d16 by Serhiy Storchaka in branch '3.5': Issue #24726: Fixed issue number for previous changeset 59c7615ea921. New changeset 965109e81ffa by Serhiy Storchaka in branch 'default': Issue #24726: Fixed issue number for previous changeset 76e848554b5d. Thanks for your review Eric. test_delitem_2 was not added because it fails in just added TestCase for COrderedDict subclass. Added tests for direct calls of other dict methods as Eric suggested. During writing new tests for direct calls of other dict methods I found yet one bug. Following code makes Python to hang and eat memory. from collections import OrderedDict od = OrderedDict() for i in range(10): od[str(i)] = i for i in range(9): dict.__delitem__(od, str(i)) list(od) New changeset 1594c23d8c2f by Serhiy Storchaka in branch '3.5': Issue #24726: Revert setting the value on the dict if New changeset b391e97ccfe5 by Serhiy Storchaka in branch 'default': Issue #24726: Revert setting the value on the dict if Wrong issue. The correct one is issue25410. Here is a patch that fixes an infinite loop reported in msg254071. May be this is not the best solution. It makes the behavior of Python and C implementation differ (the former just iterates a linked list, the latter raises an error). But to reproduce Python implementation behavior we need to add refcounters to linked list nodes.
https://bugs.python.org/issue24726
CC-MAIN-2019-22
refinedweb
709
65.12
The choreography 2 dataset The data presented in this page can be downloaded from the public repository zenodo.org/record/29551. - If you use this database in your experiments, please cite it (DOI:10.5281/zenodo.29551) or the following paper: Mangin, P.Y. Oudeyer, Learning semantic components from sub symbolic multi modal perception to appear in the Joint IEEE International Conference on Development and Learning an on Epigenetic Robotics (ICDL EpiRob), Osaka (Japan) (2013) (More information, bibtex) Presentation This database contains choreography motions recorded through a kinect device. It contains a total of 1100 examples of 10 different gestures that are spanned over one or two limbs, either the legs (e.g. walk, squat), left or right arm (e.g. wave hand, punch) or both arms (e.g. clap in hands, paddle). Each example (or record) contained in the dataset consists in two elements: - the motion data, - labels identifying which gesture is demonstrated. Description of the data The data has been acquired through a kinect camera and the OpenNI drivers through its ROS <> interface, which yields a stream of values of markers on the body. Each example from the dataset is associated to a sequence of 3D positions of each of the 15 markers. Thus for a sequence of length T, the example would corresponds to T*15*7 values. - The position of the following list of markers was recorded: head, neck, left_hip, left_hip, left_shoulder, left_elbow, left_hand, left_knee, left_foot, right_hip, right_shoulder, right_elbow, right_hand, right_knee, right_foot, right_hand A list of gestures and their descriptions can be found at the end of this document. Format This data is accessible in three data formats: - text - numpy - Matlab ###The text format The set of examples consists in: - a json file describing metadata and labels, - a directory containing one text file for each example. These are distributed in a compressed archive (tar.gz). An example of a json file is given below. They all have a similar structure. { "marker_names": [ "head", "neck", ... ], "data_dir": "mixed_partial_data", "name": "mixed_partial", "records": [ { "data_id": 0, "labels": [ 20, 26 ] }, { "data_id": 1, "labels": [ 19, 28 ] }, ... ] } It contains the following data: - name: name of the set of examples, - marker_names: list of name of the markers in the same order as they appear in data, - data_dir: path to the data directory, - records: list of records. Each record contains: - a data_id fields, - a labels field containing a list of label as integers. For each record listed in th json file there exists a text file in the ‘data_dir’ directory, which name is the ‘data_id’ plus a ‘.txt’ extension. The text files contains the sequence of positions of the marker. Each set of values at a given time is given as a line of space separated floating numbers (formated as ‘5.948645401000976562e+01’). Each line contains 7 successive values for each marker which are there 3D coordinates together with a representation of the rotation of the frame between previous and next segment. The rotation is encoded in quaternion representation as described on the ROS time frame page . Thus each line contains 7xM values with M the number of markers. The numpy format In this format each set of examples is described by two files: a json file and a compressed numpy data file (.npz). The json file is very similar to the one from the text format, the only difference is that the ‘data_dir’ element is replaced by a ‘data_file’ element containing the path to the data file. The data file is a numpy compressed data file storing one array for each example. The name of the array is given by the ‘data_id’ element. Each data array (one for each record) is of shape (T, M, 7) where T is the length of the example and M the number of markers. The following code can be used to load a set of example in python. import os import json import numpy as np FILE = 'path/to/mixed_full.json' with open(FILE, 'r') as meta_file: meta = json.load(meta_file) # meta is a dictionary containing data from the json file path_to_data = os.path.join(os.path.dirname(FILE), meta['data_file']) loaded_data = np.load(path_to_data) data = [] labels = [] for r in meta['records']: data.append(loaded_data[str(r['data_id'])]) # numpy array labels.append(r['labels']) # list of labels as integers print "Loaded %d examples for ``%s`` set." % (len(data), meta['name']) print "Each data example is a (T, %d, 3) array." % len(meta['marker_names']) print "The second dimension corresponds to markers:" print "\t- %s" % '\n\t- '.join(meta['marker_names']) return (data, labels, meta['marker_names']) The Matlab format In the Matlab format, a set of examples is described by a single ‘.mat’ file containing the following elements: - a ‘name’ variable (string) containing the name of the set of examples, - a ‘marker_names’ variable containing a list of marker names (strings), - a ‘data’ variable containing a list of data arrays (one for each record) of size (T, M, 7) where T is the length of the example and M the number of markers, - a ‘labels’ variable which is a list of list of labels (one list of labels for each example). For more information, feel free to contact me at olivier.mangin at inria dot fr. Appendix ###List of gestures A table with illustrations of gestures is presented in the gesture illustration page. .
http://olivier.mangin.com/data/choreo2/
CC-MAIN-2018-22
refinedweb
879
61.97
You can hardly name an area in modern computer science where XML is not used. With such a wide applicability domain, the requirements of XML authors are naturally quite diverse. No software, "industry standard" or no "industry standard," can reasonably claim to satisfy all such requirements. Don't feel envious if your authoring tool isn't the latest buzz; chances are you don't need most of the features you are reading about in press releases, but instead need something completely different. In our own area, web development, we can distinguish at least three very different usage patterns and corresponding sets of requirements. Most users of XML authoring tools will likely belong to one of the following three classes: site developersthose who create the site's source definition, schemas, and stylesheets; content authorsthose who write original content for the web site; and site editorsthose who maintain the site, update pages, post stories submitted by authors, and so on. The features that these categories of users want in an XML authoring tool are not only different but in some aspects even contradictory. Developer's workbench. The ideal XML editor for a site developer is, above all, an XML editor. It must be a powerful tool fluent in both XML in general and many XML vocabularies and formalisms in particular. It must not be restrictive in any way; if the tool cannot perform a task automatically, it must at least not prevent the developer from doing it manually. Smart tools are good, but they should not try to be smarter than their user . Usually, source-oriented editors ( 6.1.1 ), such as Topologi's CME, [1] best suit this category of users. [1] [1] In short, a developer's XML editor must be a versatile workbench, with all sorts of devices and appliances for any imaginable taskfrom sophisticated and almost intelligent power tools to a mere screwdriver. Author's writing desk. An XML editing application for a content author is a different story altogether. Since the job of authors is creating content for web sites, what they need is, above all, a content editor aware of XML. Note the word "aware"; such a tool must not require (or demonstrate ) more XML knowledge than absolutely necessary. Authoritative and therefore restrictive, yet friendly and forgiving these are the qualities that will help such a tool to fulfill its primary purpose: help the author concentrate on content while producing valid and sensibly structured XML documents. For most authors, word processor XML editors ( 6.1.4 ) such as Morphon [2] work best, although for database-like XML, form-based editors ( 6.1.3 ) are preferable. [2] [2] So, if the developer's editor is a workbench, then for the author, a good metaphor would be an austere writing desk with nothing but an ink pot and a sheet of white paper (or an empty form to fill). A dictionary would be handy too. Editor's assembly line. A small site with occasional updates does not need any specific maintenance software, and this is especially true for an XML-based site whose source is so transparent. If, however, you frequently update a huge site, what you need is not a standalone editing tool but a complete content management system (CMS). CMS software is not specific to XML; in fact, much of it still does not support XML too well. Those systems that are XML-aware (e.g., Lenya [3] ) implement one of the traditional XML editing approaches (such as form-based editing, 6.1.3 ). [3] cocoon.apache.org/lenya; formerly known as Wyona. [3] cocoon.apache.org/lenya; formerly known as Wyona. What differentiates CMS tools from regular editors is their ability to work with many documents at once. Ability to combine documents into projects, storage and retrieval automation, versioning, scheduled updatesthese features make a CMS similar to an assembly line where long queues of documents are being worked on in a semiautomatic fashion. What lies ahead. CMS software is in a world of its own, but it is not directly related to XML and therefore is not analyzed in this book. Instead, we'll start this chapter with an overview of the main categories of XML authoring tools. Several approaches to XML editing exist today. Some tools implement more than one approach and let you switch between them on the fly; others focus on one approach only. Below we'll examine these approaches, discuss their advantages and disadvantages, and look at some example implementations . It's as simple as that: What you have in front of you is the full and complete source document, the way W3C intended it to be. Nothing but straightforward, uncompromising , truly open source XMLas in, for instance, this book's markup examples. You can do it too. There's no arguing that XML source editing is more suitable for developers than for authorsbut not by much. After all, one of the main goals of XML was to make documents as human-readable and human-editableas possible. By properly designing your source definition (choosing logical names , separating site-wide metadata, abbreviating addresses, etc.), you can push this editability even higher. In fact, the entire book you are reading is devoted to the ways of making XML transparent and accessible to anyone , not only developers. Source editing of XML is not the same as plain text editing. Most of the convenience of source-oriented editors is in their XML-specific features. Below is an attempt at classifying these goodies . Generic XML features are those capabilities that can be useful for any XML document, no matter what schema it conforms to (if any). These are the most basic and frequently used commands, such as closing the currently open element, navigating to the start tag or end tag, commenting or uncommenting a fragment, highlighting well- formedness errors, indentation, and manipulating character data (e.g., replacing all special characters with their numeric character references and vice versa). Some of these features require that the entire document be well- formed , but many will work regardless. Schema-specific features , often called guided editing , take advantage of knowing the schema of the document you're working with. Only a grammar-based schema ( 2.2.1 ) can be used in this waySchematron is useless for guided editing (except for simple validation). Guided editing features usually include listing element types or attributes that are valid at a specific point, automatic insertion of required constructs and fixed values, and validation of the edited document (highlighting errors and providing suggestions on how to fix them). Syntax coloring is a simple but extremely handy feature that can make a world of difference in terms of usability. Unfortunately, many editors treat syntax coloring simplistically, offering separate colors only for generic classes of constructs such as comments, elements, entities, and character data. Such generic syntax coloring is the absolute minimum you might require from an XML editor. Much more useful is specific syntax coloring, which allows you to separately color distinct namespaces, elements, and attributes. Coloring of some element might determine or affect the color of its children or character data. In some situations, even monolithic character data can be usefully parsed and coloredfor example, the expressions inside { curly braces } in attribute values in XSLT. Traditionally, source editors use a single monospaced font for displays. However, more sophisticated editors can assign not only different colors but also different font sizes and faces to source constructs. This is the approach used for the source code examples in this book (you cannot use color in a black-and-white book anyway); of the editors mentioned in this chapter, XEmacs (Figure 6.1) demonstrates this capability. Such advanced syntax coloring is reminiscent of the word processor XML editing mode ( 6.1.4 )except that it does not attempt to hide anything. XPath tools are not confined to XSLT stylesheets; they come in handy for various editing operations. Running an XPath expression against the document you are editing to see the result(s) highlighted is a good complement to (or even a substitute for) the traditional plain-text or regexp search. It is especially useful, of course, for writing and debugging XSLT stylesheets; yet after a while, you'll probably learn to "think in XPath" and start using XPath expressions in other XML editing tasks (see also 6.3.2 ). External processing may include XSLT transformation, validation with external tools, and running various previewing or visualization applications. Basically it is just a way to save you switching to a command line to run a command on the document you are currently editing. Project management is an optional but very useful addition to the feature set of a source-oriented XML editor. It allows you to define a group of documents as a project , after which some commands may act on the project as a whole. For example, you might be able to transform or validate all *.xml files of the project (similar to the batch processing mode of the stylesheet, 5.6 ), or run a text search or an XPath expression against the entire project. Moreover, sometimes you can define various relationships between members of a project, such as dependence (if one document is changed, the one depending on it is also considered changed and should be transformed again; compare 6.5 ). Most of these features are a boon for all three categories of XML users (developers, authors, editors). Developers will especially enjoy XPath tools and external processing. Authors may benefit from syntax coloring (so that the document markup is appropriately subdued in appearance and does not interfere with the text) and, of course, guided editing (so that there's no need to remember the exact names of element types or their usage patterns). Finally, editors will appreciate good project management tools that may make an XML editor comparable to a CMS. Source XML editors come in two main flavors: generic text editors enhanced for XML and specialized XML editors. It's time for some nice screenshots! Emacs [4] is a venerable tool. One of the oldest text editors in existence, it is immensely powerful, customizable, and extensibleand as widely used nowadays as ever. If I am to provide only one example of a text editor, it's got to be Emacs. [4] [4] Along with GNU Emacs, there is a popular variant called XEmacs; [5] it has some benefits compared to GNU Emacs, but overall, the user-visible differences between the two editors are minimal. All information in this section applies to both GNU Emacs and XEmacs, but the screenshot (Figure 6.1) features XEmacs. [5] [5] The most popular extension for editing XML (and SGML) in Emacs is called PSGML. [6] It implements a validating XML parser that can use a document's DTD (but not a schema in XSDL or any other schema language) for guided editing. Most generic XML editing commands are also available. [6] psgml.sf.net [6] psgml.sf.net Other than that, PSGML has little to offer by itselfbut it can work together with other Emacs tools to provide additional functionality. Thus, external processing is not included in PSGML, but you can either program it into Emacs yourself or use other Emacs packages such as XSLT-process ( 6.4.3.2 ). Similarly, the syntax coloring provided by PSGML is generic, but you can define your own coloring regexps to cover the namespaces, element types, or other constructs that you use most frequently. XPath is not yet supported by any Emacs tool, but external XPath utilities ( 6.3.2 ) may be called from within Emacs. An XEmacs window with our stylesheet ( style.xsl , Example 5.21) is shown in Figure 6.1. It demonstrates custom, specific syntax coloring (e.g., XSLT instructions are easy to distinguish from HTML literal result elements) using both different colors and different font faces as well as a menu of generic XML editing commands. Overall, Emacs requires a significant investment in terms of learning time and effort, but the return on this investment may be really good, giving you power and freedom that are hard to achieve with more specialized tools. For an example of a specialized XML editor, let's look at < oXygen /> [7] (Figure 6.2). Written in Java, it is pretty typical of this kind of software. Mostly source-oriented, <oXygen/> also offers a tree editing mode ( 6.1.2.1 ). Here's this editor's scorecard: [7] [7] The guided editing features of <oXygen/>in particular, context-sensitive element and attribute suggestionsare branded under the name "Code Insight." They can use both DTDs and XSDL schemas as the grammar definitions for a document. The interesting part is that <oXygen/> can generate a DTD itself from a well-formed XML document (see also 6.3.3 ). This means that a partially written document can "guide itself," helping the author keep its structure consistent. Syntax coloring is generic. You can define colors for elements, attributes, attribute values, etc., but you cannot differentiate, for example, XSLT instructions from literal result elements in a stylesheet. Unlike a generic text editor such as Emacs, you cannot implement this functionality yourselfthis is the price you pay for the convenience of an all-in-one package. The XPath capability is very handy. Right above the document editing window, you type your expression into a text field and hit Enter . A frame pops up at the bottom of the window listing the results of the query. As you select one of these results, the corresponding fragment in the editing window is highlighted. The editor has a built-in XSLT processor that can conveniently be used for transforming the documents you edit, optionally rendering an XSL-FO transformation result into PDF (using FOP [8] ). Besides, the external processing feature in <oXygen/> lets you run any program on your documentfor example, you can validate a document with an external Schematron validator (by itself, <oXygen/> does not support Schematron). [8] xml.apache.org/fop [8] xml.apache.org/fop Project management is quite simpleno file dependencies, no batch transformation or validation; <oXygen/> projects are little more than a convenient way to open a group of files at once. Topologi's Collaborative Markup Editor [9] is another source-oriented XML editor written in Java, notable for its extensive code formatting features and groupwork support. Perhaps most interestingly, CME is one of the very few XML editors to support interactive Schematron validation (in addition to other schema types); it highlights offending constructs and displays the corresponding diagnostic messages in the status bar. Like <oXygen/>, CME can deduce a grammar from an existing document using the feature called "Examplotron." [9] [9] Transforming Editor. Previous examples of source-oriented XML editors (both generic and specialized) demonstrated some of the ways to combine traditional text editor functionality with XML-specific additions, such as evaluating XPath expressions, for efficient editing of XML documents. However, the powerful concepts of XML and XPath are applicable to more than XML editing. I have written a proposal [10] for a new kind of all-purpose text editor that uses trees of nodes as its data model and an XPath-enabled language such as XSLT for transforming these trees. The two key ideas are representing the document being edited as a number of synchronized trees called views , each reflecting a different level of abstraction over document content, and automatic propagation of changes made to any of these views by transforms that link the views together. [10] [10] An XPath interface to all active document views allows the user to program new editing functionality at the appropriate abstraction level, using any XPath-enabled scripting language. Views take much of the boring work out of creating editor commands; for example, you can use XPath to access a view where your document is preparsed into words, or paragraphs, or XML constructs. You can modify, rearrange, or syntax-color any element of any view, and the change will be reflected in all other views of the document down to the lowest -level "characters" view that directly corresponds to the editor's screen display. Any feedback from readers who find this idea interesting or might be able to help with the implementation will be much appreciated. [11] [11] [email protected] [11] [email protected] The main advantage of source editing is its transparency: Nothing is hidden; the document is visible down to the smallest detail. However, the flip side of this advantage is that too much detail may sometimes distract you from the task you want to perform. A lot of syntax details of serialized XML (such as the exact layout of whitespace inside tags) do not affect the meaning of the document, yet they take up screen space and require extra keystrokes when editing. On the other hand, the hierarchical structure of an XML document may not be obvious from the mess of names and angle brackets on your screeneven with specific syntax coloring ( 6.1.1.1 ). In view of this, several approaches to XML editing have emerged that try to reduce the amount of information that you have to mentally parse when looking at a document. Also, these approaches attempt to make the markup look more consistent, more distinct from the data, and more explicitly hierarchical. One thing that is common to all these approaches is the use of various graphic icons or metaphors to represent the structures described by the XML markup. The most obvious graphical representation of an XML document is, of course, a tree. In a source view, the tree structure is not explicit; for example, you have to count the unclosed open elements in order to find out at which level of the tree hierarchy you are standing. Many XML editors therefore offer a separate tree view of the document. An example of such a tree view is provided by the <oXygen/> editor (Figure 6.3). The branches of the tree can be expanded or collapsed as needed. Attributes are represented as separate leaves of the tree, as is the data content of elements. Branches can be copied -or-moved by drag-and-drop or copy-and-paste and, of course, all names and values are editable without leaving the tree editor. <oXygen/> is a pretty straightforward realization of the tree metaphor. It may be handy for a quick overview of the structure of a document, but it is hardly suitable for real document editing sessionsicons are noisy , and the tree looks kind of awkward . Is there a better alternative? Let's have a look at the open source Java-based XML editor called Pollo [12] (Figure 6.4). Its display is somewhat tree-like, but the advantages it offers over a traditional tree representation are significant. Instead of icons hanging from the branches of a tree, elements in Pollo are represented by colored frameswhich is quite natural if you consider that an element in XML is supposed to enframe its content. Instead of drawing connection lines symbolizing tree branches, Pollo simply nests these element frames into one another like matryoshka dolls . [12] pollo.sf.net [12] pollo.sf.net This approach is similar to a tree view in that the nesting level of any node is immediately visible. However, since element frames are painted with different colors, you can get a visual clue as to exactly which elements are the ancestors of the current one, not only how many of them there are. All elements of one type are represented by frames of the same color. These colors can be generated by the program randomly , or you can set an exact color for each element type in a "display specification" associated with your schema. Thus, Pollo's implementation of specific syntax coloring ( 6.1.1.1 ) creates a visually rich but consistent and easy-to-navigate display. Attributes are conveniently positioned on the top bar of an element's frame. At the bottom of the window, an editing area lets you change the values of attributes and edit text nodes. Just like a branch in a tree, any frame with its content can be collapsed into a plain horizontal bar. Admittedly, Pollo's XML display is not the best for freeform documents (for example, mixed content looks awkward when inline elements are stacked vertically between two text nodes). However, for predictably structured XMLsuch as configuration files, a web site master document, or a Cocoon sitemap ( 7.2.3 )this interface is very intuitive and convenient. Overcaffeinated! Why is so much XML software written in Java? One reason is Unicode: The XML specification requires that any XML processor must understand Unicode, and Java does that natively. Another reason is that Java, being a nice high-level language, makes it easy to write complex programs (and XML programs are complex, even though XML itself is so simple). But most importantly, it's a snowball effect: The more XML software is already written in Java, the more likely it is that a new XML project will choose this language too (especially contagious in this respect is, of course, open source software). However, other languages' XML snowballs are already rolling (Python shows a lot of potential), and they may one day overtake the Java snowball . Yet another approach to representing XML graphically is similar to the frames metaphor in that each element is enclosed in a graphical envelope. This time, however, these envelopes are not arranged in any semblance of a tree; instead, the opening and closing tags of each element are shown as icons bearing the name of the element type. It might be argued that this approach is betterif only marginallythan direct source editing. It helps markup stand out from the text and hides most nonessential syntax details of XML. Also, it removes some of the clutter from document presentation by hiding attributes (they are usually only displayed for one element at a time by a special command). Iconic tags are a convenient way to edit text-oriented documents concentrating on the data but keeping the markup structure in sight. This approach to presentation is often combined with CSS-controlled formatting of element content, as in word-processor-like editors ( 6.1.4 ), for additional visual clues on the roles of element types. The Morphon [13] XML editor demonstrates this feature in Figure 6.5. Microsoft Word 2003 Professional Edition operates similarly, but uses Word's own style tools rather than CSS. [13] [13] Up to this point, all XML editing paradigms we discussed (source editing, graphical editing) were primarily developer-oriented . An author or editor might use them too, of course; yet, without XML experience, it is easy to be overwhelmed by a rich detailed display (even in a graphical mode) and a plethora of commands and options (even with guided editing). It may be difficult to fully concentrate on the content when what your editing tool shows you is so in-your-face XML. The remaining two approaches to XML editing that we'll now discuss are therefore more author-oriented than developer-oriented. Their goal is to hide as many nonessential details as possible, yet not let the user stray from a valid document structure. The first of these approaches is form-based XML editing. It is perhaps the simplest possible interface for the user: no need to know XML, no need to think what is what and how to name anythingjust fill out the form. Given the sheer number of forms we have to do in our lives, this must be relievingly easyprovided the form is well laid out and the fields are clearly labeled and commented. Generally, a form-based XML editing interface is a good idea if: you need to manually create lots of similar XML documents (and cannot automate this process); the users of this interface have minimal experience with XML or are too numerous to rely on their level of experience (e.g., in a distributed data entry project); and your document structure is regular and database-like, not freeform (in particular, mixed content does not usually mix well with forms). Naturally, to enable an efficient form-based interface, the developer must create a logically laid out, nicely formatted, and helpfully commented form design. Form-based XML editors, such as Ponton XE [14] and Microsoft InfoPath, [15] provide various tools for this task. The starting point is usually a schema or DTD that is transformed into a form with one input field per attribute or element. You can then rearrange these fields, format them as appropriate, add explanatory labels, and so on. [14] [14] [15] [15] All types of interface widgets can be creatively used for a rich but logical form interface. For example, portions of the form sheet may expand or collapse, making it similar to the tree XML view ( 6.1.2.1 ). Indeed, a form is sometimes accompanied by a parallel tree view of the same document. Tabular forms. Closely related to form-based editing is the table metaphor, often found even in those XML editors that are otherwise freeform text-oriented. When a document contains a sequence of similarly structured elements, a table can be compiled from their content and attributes. This makes it easy to compare and modify parallel structures in predictably structured (database-like) XML. Smart forms. You may want to embed various validation checks into your form, such as calculations or comparisons triggered by the completion of a field, a group of fields, or the entire form. Note that the structural validity of the resulting document is already ensured by the structure of the form itself, so there's little sense in DTD validation. XSDL is more useful, as it can perform data type checks of the form's values. However, XSDL cannot work with an incomplete document and therefore does not support on-the-fly checks in a form being filled in. There are two main approaches to building "smart forms" capable of controlling their interaction with the user. One is form scripting : Just as you use JavaScript with HTML forms, you can use a scripting language in your XML form to perform any interface actions or data processing in response to events (such as the user entering a value). In fact, a simple form-based XML editor might be built out of a plain old HTML form coupled with a script that saves the form input as XML. Another example of a script-based editor is Microsoft's InfoPath ( 6.2.3.4 ); although it attempts to reduce the amount of programming necessary to create a form and in many cases eliminate it completely, the resulting automatically coded form scripts may be quite entangled. Another approach to implementing smart forms is more attractive: Instead of programming various constraints and dependencies, you can simply declare them using a Schematron-like language with XPath expressions for accessing components of a form. For example, if you want a price field to be recalculated when a new currency is chosen from a drop-down list, you just state that the price field (identified by its XPath address within the form) is bound to the currency field with a simple formula. No event tracking, no function callsjust a static declaration. This is the approach of the XForms language, which we'll look at in some detail in the next section. The W3C XForms [16] standard has emerged as a versatile and powerfulyet not overly complextechnology that enables, among other things, an excellent interface for form-based XML editing. The XForms standard is positioned by the W3C as the next generation of HTML forms, so an XForm must be able to send its input to a web server just as an HTML form does. However, unlike HTML forms, XForms can do lots of other useful things. For our purposes, it is important that: [16] [16] The result of filling out an XForm is stored in an XML document that is based on a template, called an instance , which is embedded in (or referenced from) the form and may be optionally controlled by an XSDL schema. Along with using a schema, an XForm can verify its input with Schematron-like declarative constraints, applying arbitrary calculations to any values in the form. The filled-out instance document can be saved to a local file. XForms constructs can be embedded into any other XML vocabularyin particular, into XHTML while using CSS for styling. Let's see what is involved in building an XForms interface for editing our own predictably structured XMLthe master document of our sample site. Example 6.1 can only display and edit the environment elements, but you can expand it to implement an almost complete master document editor. Figure 6.6 shows how this form is rendered by X-Smiles [17] a nice Java-based XML browser offering a fairly complete implementation of XForms (as well as many other XML standards). [17] [17] < ?xml <head> <link rel="stylesheet" type="text/css" href="master.css"/> <title> Master Editor </title> <xfm:model <!-- Source document to be edited: --> <xfm:instance <!-- Save edited instance to: --> <xfm:submission <!-- Constraint: children of 'environment' must not be empty --> <xfm:bind </xfm:model> </head> <body> <p><em> Welcome to the Master Editor! </em> This form interface allows you to edit a subset of a master document. </p> <h1> 1. Environments </h1> <p> Each environment defines a set of paths used by the stylesheet. In each stylesheet run, the current environment is selected by a command-line parameter, e.g., <code> env=final </code>.</p> <!-- Repeat for each/site/environment in the instance: --> <xfm:repeat <div class="env"> <div> <span> <!-- Bind this input field to/site/environment/@id: --> <xfm:input <xfm:label Environment: </xfm:label> </xfm:input> </span> <span> <!-- Bind this list to/site/environment/os: --> <xfm:select1 <xfm:label>   Operating system: </xfm:label> <xfm:choices> <xfm:item> <xfm:label> Linux </xfm:label> <xfm:value> Linux </xfm:value> </xfm:item> <xfm:item> <xfm:label> Windows </xfm:label> <xfm:value> Windows </xfm:value> </xfm:item> <xfm:item> <xfm:label> FreeBSD </xfm:label> <xfm:value> BSD </xfm:value> </xfm:item> </xfm:choices> </xfm:select1> </span> </div> <!-- Four text fields for *-path elements: --> <div> <xfm:input <xfm:label> Source path (where *.xml are taken from): </xfm:label> </xfm:input> <xfm:input <xfm:label> Output path (where *.html are placed): </xfm:label> </xfm:input> </div> <div> <xfm:input <xfm:label> Images path (relative to output and target): </xfm:label> </xfm:input> <xfm:input <xfm:label> Target path (for relative links in HTML): </xfm:label> </xfm:input> </div> </div> </xfm:repeat> <div> <!-- Button to insert a new empty environment: --> <xfm:trigger <xfm:label> New environment </xfm:label> <xfm:insert </xfm:trigger> <!-- Button to delete the highlighted environment: --> <xfm:trigger <xfm:label> Delete environment </xfm:label> <xfm:delete </xfm:trigger> </div> <div> <!-- Submit saves the instance: --> <xfm:submit <xfm:label> Save </xfm:label> </xfm:submit> <!-- Reset undoes all changes and reverts to loaded values: --> <xfm:trigger> <xfm:label> Revert </xfm:label> <xfm:reset </xfm:trigger> </div> </body> </html> This section is not an XForms tutorial, but only a teaser to whet your appetite. Still, comparing the X-Smiles rendering with the source in Example 6.1 might be a good first lesson in XForms. We will now discuss the main components of this XForms-in-XHTML example without going into too much detail. The role model. Within the head of the XHTML document in Example 6.1, the xfm:model element describes the model of the form. An XForms model combines the XML instance that the form will populate with data, its schema (not used in this example), any additional constraints, and the submission action to be taken when the form is completed. Loading and saving. In this case, the xfm:instance element takes an external document at file:/home/d/web/_master.xml as the instance. This means that the form, when activated, will load this document and distribute its data into the corresponding form controls as default values. You can therefore provide an empty instance document as a template, or you can link your form to an existing master document with real data and use the form to change some of its values. Conversely, when the form is filled out and the submission action is triggered by the user, the xfm:submission element will save the resulting XML into the local file at /home/d/web/_master.xml . Since this is the same file as that referred to in xfm:instance , the form will effectively edit that document and save it back. If you revisit the form later, it will load the document again and display it with all the changes you made last time. Constraining input. An XForms model can also contain arbitrary constraints, exemplified here by the xfm:bind element. Such a constraint is very much like a Schematron rule in that it uses XPath to specify its context (the nodeset attribute) and the expression that must be true in that context (the constraint attribute). In the example, we declare that all children of an environment must be nonempty . A conformant XForms browser will refuse to submit the form until this constraint is satisfied. The xfm:bind element can also be used for many other purposes, such as assigning a data type to the selected nodes, controlling whether these nodes are included into the submission, or calculating values of nodes based on other nodes in the instance. Please type. In the body of the document, interspersed with arbitrary text and XHTML markup, XForms controls constitute the visible part of the form. Each control uses an XPath expression in its ref attribute to link itself to a node of the XML instance in the form's model. This link works both ways: If a node value is changed by a control or by a calculation inside the model, this change is reflected in all controls that reference that node. In our example, the xfm:repeat element iterates over all environment elements in the source. Within it, several xfm:input fields and one xfm:select1 list are linked to the child elements and an attribute of each environment . Growing the document. XForms not only makes it possible to fill in values of elements and attributes that are already present in the instanceyou can add new elements, too. Two button controls after xfm:repeat allow you to remove the current environment or add a new empty one after the current. An XForms browser is supposed to keep track of user input and always designate one of the repeat ed sections as current. For example, X-Smiles uses yellow highlighting for the section that you are currently editing (in Figure 6.6, it is the staging environment in the middle). When you're done. Finally, the last two buttons do form submission (i.e., saving the document; you will be prompted to confirm if the file already exists) and form reset (i.e., returning to the values loaded from the instance document, losing all changes you've made in this session). Before the form is submitted, all constraints defined in the model are verified . For example, in the screenshot, the img-path field within the staging environment is empty, so the browser paints it red and refuses to submit the form until a value is provided. Limitations. This example demonstrates how you can quickly and painlessly implement a rich form-based editing interface for your XML documents using XForms. Sure enough, this approach has its share of problems as well. The most important ones are these: The locations for the source instance and the document to be saved are hardwired into the form. That means you cannot select an arbitrary document for editing and save it to an arbitrary location (unless your XForms processor provides this functionality as an extension). This may actually be an advantage. Remember that the goal of the form-based interface is to make editing simple . From this viewpoint, there is nothing wrong with the fact that you don't need to worry about filenames and cannot mess up a document by saving it in the wrong directory. Just select one of the forms from your XForms browser's bookmarks, edit, and press one button to save. Are there examples of database-like documents that don't need to be created anew or moved from one place to the other, but only edited where they are? The master document of a web site is one such example; others are various configuration files in XML. Thus, configuring the X-Smiles browser itself is done via an XForms page that displays the options from an XML configuration file and lets you edit and save them. As mentioned previously, form editing is hardly appropriate for freeform XML. Unfortunately, XForms cannot handle any mixed content, even if it is but a small part of an otherwise database-like document with a predictable structure. For example, with XForms you can edit the contact-webmaster element of our master document (see Example 3.2, page 143) but you will lose its child mailto link and any other inline markup. XForms browsers may provide extensions [18] to overcome this limitation. Another approach might involve using one of the text-to-XML converters ( 6.2.1 ) to produce mixed content from structured text entered in an XForms textarea . [18] One such proposed extension is described at. info /writing/htmlarea. [18] One such proposed extension is described at. info /writing/htmlarea. The InfoPath forms editor, which does not use XForms, is one that is capable of handling mixed content, as long as it conforms to XHTML. If filling out forms is the fastest and most natural way to author database-like XML, then for freeform XML it is the word processor interface that is most familiar to users. Editing XML with the convenience of a word processorand yet producing valid and sensibly structured XML documentsis one of the main directions of development in today's XML authoring tools. Syntax formatting. At first sight, word processor display is in direct opposition to XML editing: The former is largely about appearance, while the latter is strictly about content. However, undiluted abstractions rarely work entirely as intended. The ubiquity of syntax coloring in all sorts of text editors is a clear indication that appearance does matter even when you deal with purely abstract structures, because it helps a human reader parse and navigate those structures. From this viewpoint, word-processor-like XML editing is nothing but "syntax coloring on steroids"or "syntax formatting" if you wish. Of course, to be applicable to the highly regular XML, the appearance aspect of a document must itself be regular and consistent. Unattached bits of "formatting for formatting's sake" are inadmissible. Do we have a robust technology that would allow us to assign rich formatting properties to XML elements without changing the XML document itself in any way? Yes, we doit's called CSS. [19] [19] [19] Cascading Style Sheets. Granted, from the design and typography perspective, CSS is less sophisticated than, say, XSL-FO. But we don't need a high level of sophistication for presenting different elements differently. What may be more important is that, unlike XSL-FO which is only good for printed documents, CSS can naturally accommodate various presentation modes, including the screen presentation of information. This paradigmwhereby you edit a nicely formatted CSS-controlled presentation of your document and get valid semantic XML as a resultis adopted by many XML editors these days. It has its limitations, though. Memory for faces and memory for names. The most important limitation is that the number of different formatting styles that you can reliably remember and recognize is usually much less than the number of element type names you can memorize and apply. That is, names attached to markup constructs are easier to remember for many people, even though visual formatting styles may look sexier. When creating markup, you can make use of guided editingfor example, by choosing from a list of markup constructs valid at the current point (such lists may display the names of constructs, or the corresponding formatting samples, or both). However, when editing existing markup, you will often find it difficult to guess what element you are in, judging solely by the formatting style at the cursor. Certainly, distinguishing a heading from a paragraph of text is easy. But some widely used vocabularies, such as DocBook, contain hundreds of element types. Assigning a recognizable set of visual properties to each element type may therefore be difficult if not impossible . As a response to this problem, many word processor XML editors can optionally display element tags without removing CSS formatting. The tags are usually rendered as icons. You can use them while you are learning a new vocabulary and then switch them off, or you can enable them periodically to remind yourself of the inner workings of your document. The Morphon XML editor is a typical example (Figure 6.5, page 298). What you see is what you pay for. Other limitations of the word processor XML editing paradigm may seem less important, especially for those who have always used traditional word processors for authoring. Yet they are limitations, and you as a developer should be aware of them in advance. XML comments are invisible and inaccessible. Consequently, if you want the authors to be able to comment their content, you have to provide a special element type for this (to be ignored or converted into XML comments by the stylesheet). Attributes are also invisible, although they can affect presentation of content. For example, element[attr="foo"] in CSS2 selects an element based on its attribute value, much as element[@attr="foo"] matches it in XSLT. An XML editor may of course provide a command to view and modify attribute values for an element, but by the very nature of the word processing approach, this can only be done through a special dialog, and not as a routine editing operation. Nesting of elements is not obvious. For example, if I see a green sentence inside an italic paragraph, does that mean that the "green" element is a child of the "italic," or is this two "italic" elements with a "green" one in between? It may be very difficult to combine unambiguously several formatting styles that correspond to several element ancestors of the current text fragment. Despite the "cascading" in "CSS," many editors do not even attempt to visualize nesting in any way, effectively reducing XML to the flat styles of conventional word processors ( 6.2.3.3 ). On the other hand, as we'll see below ( 6.1.5.1 ), it is possible to use the CSS frame and background properties to visualize the hierarchical structure of the top-level elements of a document. You will need CSS style sheets for visualizing your XML if you want to author your custom-vocabulary XML in a word-processor-like XML editor. But it is also very useful to be able to view your XML documents quickly, without transformation, in an XML/CSS-capable web browser. But we already have a web site? Creating CSS for XML is not much of a graphic design job; the style sheets may have very little in common with the way this same material looks on the web pages after transformation. Our goal is to render source XML in a consistent and visually unambiguous way so it is easy to review and edit. This means the document structure must be expressive and laconic at the same time. HTML text in the XML structure. It may, however, make sense for our CSS style sheet for XML to imitate to some extent the character formatting of the transformed HTML pages. This imitation will let site authors and editors see at once where elements in their XML editor window will end up on the web page. On the other hand, the visualization of document structure cannot and should not be in any way influenced by the layout of the web page, if only because a web page will contain components (such as navigation) that are absent from that page's source XML. Thus, we have to find a clear yet unobtrusive way to use CSS to reflect the hierarchical structure of XML. The Pollo XML editor ( 6.1.2.2 ) suggests the idea of using nested rectangular frames with different visual properties. Indeed, with CSS, you can easily present content blocks as boxes with different border types, specify colors of their borders and backgrounds, and adjust margins for better recognizability. Compared to XSLT, CSS is very limited when it comes to manipulating data. For instance, you cannot pull information from one place in a document and insert it into another (let alone into a different document). Luckily, we don't need this ability for straightforward XML visualization, especially given that our page documents (Example 3.1, page 141) are so simple. Thus, you cannot create clickable links from abbreviated link addresses ( 3.5.3 ) in your XML because CSS lacks facilities for proper unabbreviation. (A simple XSLT stylesheet can be added to handle thisbut then, why use CSS at all?) But perhaps you don't need the visualized links to be clickable; much more useful for editing is to see the link address in its original abbreviated form. However, you'll still want such a link to stand out from the surrounding text so that its link status is clear. Similarly, CSS cannot be used to fetch images and insert them into the displayed document. [20] Instead, we will simply show the (abbreviated) image references as they are given in the source. Technical validity of both links and image references can be checked by a Schematron schema ( 5.1.3 ); what we are interested in when editing the document is that these references actually make sense , and this is where seeing the original abbreviated addresses can really help. . A CSS style sheet for visualizing the structure of our sample page documents (such as the one in Example 3.1) is given in Example 6.2. Note that unlike the XHTML+CSS combination, CSS applied to a generic XML document has no default properties associated with any element types. You'll have to define everything explicitly, including the display: block property for block-level constructs such as paragraphs. page { background-color: white; padding: 5pt; margin: -5pt; } page title { letter-spacing: 0.5em; margin: 5pt; } block { display: block; padding: 10pt; margin: 5pt; border: lightgray 4px solid; } section { display: block; padding: 0pt 10pt 10pt 10pt; margin: 5pt; border: black 2px dashed; } section > head { display: block; background-color: #cccccc; padding: 5pt 10pt 5pt 10pt; margin: 0pt 0pt 0pt -10pt; font-weight: normal; font-size: large; } section > subhead { display: block; background-color: #eeeeee; padding: 5pt 5pt 5pt 1cm; margin: 0pt -10pt 0pt -10pt; font-style: italic; } p { display: block; padding: 5pt; margin: 5pt; border: black 2px dotted; } block[src]:before { content: "[Orthogonal block reference: " attr(src) " ]"; color: gray; font-family: monospace; font-size: small; } section[image]:before { content: "[Image: " attr(image) " ]"; display: block; padding: 3pt; color: gray; font-family: monospace; font-size: small; } int, link[linktype="internal" ] { color: green; border-bottom: 1px solid; } ext, link[linktype="external" ] { color: blue; border-bottom: 1px solid; } int:after, link[linktype="internal"]:after { content: "[int: " attr(link) "]"; color: gray; font-family: monospace; font-size: x-small; } ext:after, link[linktype="external"]:after { content: "[ext: " attr(link) "]"; color: gray; font-family: monospace; font-size: x-small; } em { font-style: italic; } code { font-family: monospace; } If your CSS visualization is to be used by content authors or site maintainers, it makes sense to create a "legend" XML document that uses all of your source XML vocabulary and explains the formatting conventions of the CSS visualization. A browser screenshot of such a sample document is shown in Figure 6.7.
https://flylib.com/books/en/1.501.1.56/1/
CC-MAIN-2020-29
refinedweb
7,985
51.68
Daniel Carrera writes: > mtn rebase <rev> OK, that's an improvement on my proposal. > The command "db kill_rev_locally" is long so I don't like it. What > would be the consequences of a divergence? Is it ok if I simply run > "mtn rebase" and then go on merrily on my way making my other > branches? If so, then I would be entirely happy with rebase. After "mtn rebase p:", there are two ways you could create a divergence: $ mtn commit creates a second head in the same branch; later on, "mtn checkout", "mtn update" and other commands will complain that there are two heads and require you to select a head manually. You can resolve that either with "mtn merge", "mtn suspend" or "mtn disapprove"; it's up to you. This is sometimes called "light-weight" branching; it is appropriate for short-lived divergences that you intend to resolve at one point in the future. The second way is: $ mtn commit -b new_branch The divergence is then "permanent"; you now have two branches with one head each; monotone will not complain about that. You can still merge whenever you want with "mtn propagate". This is sometimes called "heavy-weight" branching and is for intentional divergences that you think should live for a while (e.g. stable/maintenance vs. unstable/development). Note that "heavy-weight" is not that heavy; the only difference is the value of the "branch" cert. So "heavy-weight" does not consume any additional space in the database compared to "light-weight"; but it does consume from the branch namespace. -- Ludovic Brenta.
http://lists.gnu.org/archive/html/monotone-devel/2008-10/msg00122.html
CC-MAIN-2016-26
refinedweb
264
68.7
. The first common cause is that the license does not have these DeepSee options enabled. To confirm if these options are enabled, you can run the following code from terminal: do $system.License.Decode() This table describes what License options must be enabled to access each DeepSee component: Once a license with these options is activated, the SMP links should be available. The second common cause is that the user does not have sufficient privileges. To access DeepSee Analyzer, the user must have: %DeepSee_Analyzer To access DeepSee Architect, the user must have: %DeepSee_Architect Further details about DeepSee Resources can be found here: Issue #2: DeepSee must be enabled before use. All web applications that use DeepSee must be enabled before using the web interface. There are two ways to enable DeepSee. Pre-2015.1: 2015.1 And later: There are multiple ways a “DeepSee Disabled” namespace will present itself when trying to access DeepSee through a browser. The first way is seeing a Forbidden message or HTTP 403 Forbidden from your browser when trying to access a DeepSee Management Portal page. Different browsers and different Caché versions will present this error in different ways. Here is the same URL being requested from Firefox and Internet Explorer: There have also been cases where a CSP error page is shown with a status of 403, which is the Web Server error code for Forbidden. Work has been done in the SMP to avoid making these Forbidden errors unreachable. If a namespace is selected that does not have DeepSee Enabled and you click on the “DeepSee” option, you should see a message that says “The <namespace> namespace does not support DeepSee.” This means that DeepSee is not enabled and one of the previously mentioned methods should be used to enable DeepSee. It is now time to access DeepSee and share any interesting solutions and insights with the community! Nice. This makes the first contact easier.
https://community.intersystems.com/post/accessing-new-deepsee-namespace-first-time
CC-MAIN-2019-43
refinedweb
322
59.84
. Import search path To use a package or another module, one needs to import it by specifying its name. Python will look for this name in a series of directories specified in the sys.path variable, which is accessible after import sys. In default, sys.path includes the following directories in order: - The home directory of the program PYTHONPATHdirectories (if set) - Standard library directories - The contents of any .pthfiles (if present) - The site-packages home of third-party extension The home directory depends on how you run the code. If you are running a program, it is the directory containing the program’s top-level script file. If you are working interactively, it is the current working directory. A complete exposition of these directories is available in Mark Lutz’s Learning Python (5th edition, P679–680). It is worth mentioning that Python searches names only in these immediate directories, not any subdirectories. What to import In Python, there are three different importable entities, and they all share the same syntax: - Module. A module is a file, either located in a package or not. - Package. A package is a folder containing a top-level __init__.pyfile. It generally contains one or more modules and/or other packages. In this post, I refer to them as submodules and subpackages, respectively. - Namespace package. The namespace package is available only since Python 3.3. It is one or more folders sharing the same name and containing no top-level __init__.pyfiles. In this post, I will focus on modules and packages. As for namespace packages, they are not a requirement for most programmers. Only the team leader of a large, loosely coupled team needs to master the concept of namespace packages. Unless specified, all packages in this post refer to regular packages, not including the namespace packages. Package structure There are two types of package structures (single-level and multi-level) along with a non-package structure. The non-package structure is just a single, self-contained file: module.py It often includes a section of unit test code of the form: if __name__ == '__main__': Here is the single-level structure: Package |---__init__.py |---_submodule1.py |---_submodule2.py ... Here is the multi-level structure: Package |---__init__.py |---subpackage1 |---__init__.py |---_submodule1-1.py ... |---subpackage2 |---__init__.py |---_submodule2-1.py ... ... All subpackages need to have a __init__.py file, just like the top-level package. File known as __init__.py This file is a must since Python 3.3 for a folder to be a package. Folders without this file are regarded as potential namespace packages. This file has the following four functionalities: - Package declaration. It declares a package. A package has priority over a module with the same name in the same directory during the import process. That is, __init__.pyworks as a safeguard to ensure that this very package, not anything else, is imported. - Package initialization. This file is automatically executed when the package is imported. Therefore, it is the ideal place to create data files, open connections to databases, and set RNG seeds. - Namespace initialization. Names are generally defined in each submodule of the package. However, there exist some names having no apparent association with any submodules. These names can be defined in __init__.py. In addition, during an interactive IPython session, the <tab>-autocompletion can identify only these names when only the package (no submodules) is imported. - Exporting names. It has a module attribute known as __all__, which is a list of strings. This attribute declares all names exported when the package is imported by from *statement. Of course, all the names in __all__need to be defined or imported from elsewhere for them to be importable. Absolute versus relative import While relative import refers to the import statement using relative paths, absolute import refers to the import statement using absolute paths. Both methods of import refer to package import, not standalone module import. While relative import makes the package more robust against the migration of the importee, absolute import makes the package more robust against the migration of the importer. In addition, relative import is shorter to type. There is an important change about relative import from Python 2.X to Python 3.X. In 2.X, an undotted import is relative-then-absolute. In 3.X, an undotted import is absolute-only. The full setups can be summarized in the following table: To adopt the 3.X behavior in a 2.X module, one needs to run from __future__ import absolute_import at the beginning of the module (probably after the docstring). This technique also works in an interactive session. Absolute import is valid in all kinds of modules, whereas relative import is invalid in programs or modules imported not as a portion of a package. For instance, when it is imported by a program in the same directory, it is imported as a regular module, not a submodule or a package. From the table, we can see that Python 3.X distinguishes relative import from absolute import and shows no ambiguities. Therefore, the programmers have to choose between these two import methods. Skillful programmers will use relative import for weight lifting modules and absolute import for the test modules. This convention is the one adopted by scikit-learn. Package versus program The above revolution in Python 3.X is not without consequences. The dotted import not only makes the module no longer able to serve as both a package submodule and a program but also requires programs calling them to use absolute import. It explains why the test modules use absolute import. On the one hand, absolute import allows the weight lifting source code to use relative import. Also, when relative import is used by the importee, the importer cannot use module import. On the other hand, absolute import allows each test module to be run as an individual program. That is, you can run individual tests separately rather than invoke the whole test suite every time. The rule of thumb is that if a module is intended as both a package module and a program, it should use absolute import. For package-only modules, the convention is to use relative import. For program-only modules, the convention is to use absolute import and preferably also put them outside the package.
http://www.zhengwenjie.net/import/
CC-MAIN-2021-17
refinedweb
1,051
58.18
On platforms where freebl is a separate DSO from libNSS3, (e.g. Solaris for Sparc and HPUX for PARisc) the code that loads the freebl DSO allocates and then leaks a BLLibrary structure. I can see several ways to fix this: a) free it in freebl_LoadDSO prior to returning PR_SUCCESS, or b) have a static BLLibrary structure instead of dynamically allocating and freeing it, or c) keep a static copy of the pointer to the BLLibrary, and have BL_Cleanup() (a function down at the bottom of loader.c) call bl_UnloadLibrary with it. In this case, the PRCallOnce flag in freebl_RunLoaderOnce should also be cleared by BL_Cleanup, so that if an application reinitializes NSS after calling BL_Cleanup, the library will get reloaded properly. I think c is the cleanest way to do this, but I have no strong preferences. Marking PR3 because this leak is trivial at worst. Changed the QA contact to Bishakha. Set target milestone to NSS 3.5. Moved to 3.7 and assigned to Nelson. Moved to target milestone 3.8 because the original NSS 3.7 release has been renamed 3.8. Created attachment 113765 [details] [diff] [review] patch v1 This approach unloads the freebl shared library when BL_Cleanup is called. It continues to use PRCallOnceType to avoid loading the freebl shared library more than once. When the shared library is unloaded, the PRCallOnceType is reset by memsetting it. Wan-Teh, is this acceptable? Comment on attachment 113765 [details] [diff] [review] patch v1 > void > BL_Cleanup(void) > { > if (!vector && PR_SUCCESS != freebl_RunLoaderOnce()) > return; > (vector->p_BL_Cleanup)(); >+ bl_UnloadLibrary(&blLib); >+ memset(&once, 0, sizeof once); > } The behavior after zeroing the PRCallOnceType variable is undefined. In the current NSPR implementation, it does what you want. I wanted to check in the patch for this bug so it would be fixed in NSS 3.8. But I am concerned that it might conceivably break profile switching for mozilla on the affected platforms, even though the NSS test programs pass with this patch. NSS test programs do not test the ability to restart NSS after doing a shutdown, so if this patch left NSS (or the blapi shared lib) in some state where it could not be restarted, that would not be detected with the NSS test programs. So, if someone case test that this patch doesn't break profile switching on solaris and hpux, then I'd like to see this patch applied for NSS 3.8. I think we should implement option a or b. It is not necessary to unload the freebl DSO during NSS_Shutdown. Unloading the freebl DSO is equivalent to unloading the softoken, which we don't do. Perhaps we should have two flavors of NSS_Shutdown: shutdown for profile switching, and shutdown for program termination. It is simply not necessary to unload the freebl DSO for profile switching. It would be nice but not necessary to unload the freebl DSO for program termination. Remove target milestone of 3.8, since these bugs didn't get into that release. *** Bug 274005 has been marked as a duplicate of this bug. *** This leak was reported internally at Sun against Solaris 10. I was in favor of option a) also, until I read this discussion . I agree with Wan-Teh that unloading libfreebl*.so is the equivalent of unloading the softoken. We don't ever do that in NSS, as libnss3 is implicitly linked with libsoftokn3 . However, the softoken module can be used independently of libnss3, for example with the JDK 1.5 PKCS#11 engine . In this type of environment, it would be reasonable to expect C_Finalize to unload libfreebl.so . I see that PR_CallOnce is being used here to load freebl only once. I'm wondering what the behavior will be in an application that does not link with NSPR, for example a Java application . In theory, in that case, unloading libsoftokn3.so from memory should also unload libnspr4.so, right ? And thus there should be no problem trying to reload the softoken, as NSPR would get initialized again . Or is something else going to happen ? It would be good to provide a fix for this in 3.11, especially as we are planning on reworking freebl . (In reply to comment #12) wherein Julien wrote: > I was in favor of option a) also, until I read this discussion . what option do you now favor? option b? You mentioned having c_finalize also unload freebl. Does doing so, or not doing so, change the right fix for this? Right now, I favor leaving this P3/minor bug alone. As ugly as this is, maybe we should make it a WONTFIX. I don't think it makes much sense to fix the leak for the freebl string and still leak the entire freebl shared library ;) If we want to fix this right, we need to: 1) unload libfreebl in C_Finalize 2) make sure libfreebl is refcounted in the loader when it's loaded from libssl so it doesn't get unloaded early 3) think about the Java case some more. NSPR isn't safe for multiple shutdown/reinitializations. See the many bugs on PR_Cleanup in bugzilla for more info . When softoken is loaded from java, it would be a problem if libnspr also gets unloaded from the process. What happens is that when threads that had previously called NSPR functions end (which could have been created from the Java side), a thread termination callback gets invoked, but it points to an area of memory that has been freed, and then we get a coredump. The workaround for this problem that we have used in existing applications that don't implicitly load NSPR was to ... leak NSPR by dlopen'ing it so it never gets unloaded ... :) It doesn't make much sense to fix the libfreebl leak if we still have to leak libnspr, IMO . Right now libnspr is leaked by virtue of libfreebl being leaked. This is not an issue in C applications whose main executable implicitly links with NSPR, but it is an issue in Java applications that dynamically load softoken. So, IMO, fixing this right means that we need to fix many NSPR shutdown issues, and that is going to be a lot of work. The new shutdown callback registration function for bug 326482 will be part of the solution for this bug, too, I think. FYI, coverity has flagged this as CID 894: <>. retarget to 3.12 because of FIPS Coverity CID 499 When using the SSL bypass mode, this leak occurs a second time. Re: comments 15/2 and 20 : This leak now exists twice, once in libsoftoken, and once in libssl for bypass . But there is no need for refcounting in the loader, since there are actually two instances of the loader code as well. refcounting is done for us in the OS within dlopen () when libfreebl is loaded more than once . NSC_Finalize would be a valid entry point to free the library structure and dlclose() from libsoftoken . In this case, there is no need for registering a shutdown callback as suggested in comment 16 . Unfortunately, there is no equivalent shutdown entry point in libssl to do what's needed. We would need to register some kind of SSL shutdown function to do this, if we don't already. Re: comment 15, I now think that this might make sense to fix even if we leak NSPR and separately from that problem. This leak shows up on every NSS init in a program that repeatedly initializes NSS and shuts down . The NSPR "leak" would only show as a one-time leak. Since NSPR doesn't support shutdown and unloading properly and may never do so, I would even advocate that NSPR be changed to always stay resident upon initialization (eg. by dlopen'ing itself to bump the refcount). Wan-Teh, Re: comment 7, do you foresee a solution that doesn't depend on the current implementation of PR_CallOnce / PRCallOnceType ? Either this should be made public, or maybe we could add a PR_ResetOnce(PRCallOnceType*) which would currently be implemented as memset ? To fix this leak in the ssl bypass case is much harder. Even though we have an NSS facility to register a callback function at NSS shutdown time, there is no initialization entry point in libssl to register that callback !!! The way that freebl gets loaded in libssl is through the function table in loader.c which uses PR_CallOnce on freebl_RunLoaderOnce(). That function would be an appropriate place to call NSS_RegisterShutdown, except that we only want to do it when loading freebl from libssl, but not from softoken, since libsoftoken cannot depend on libnss . Right now the same freebl.a gets linked into both libsoftoken and libssl . So, this would entail having two slightly different versions of the freebl loader :-( I think we can using a statically linked function callback to solve this. Each of libssl and libsoftokn would have their own. The libsoftokn one wouldn't do anything. I will attach a patch that does that. NSS_RegisterShutdown will need to behave properly for this to be fixed, so I'm adding bug 353608 as a dependency of this bug. The best solution is to use platform-dependent techniques such as filtee libraries or or load/unload the freebl3 DLL/shared library in the DllMain and _init/_fini routines of the softokn3 and ssl3 DLLs/shared libraries. But this is a lot of work. So you can zero the PRCallOnceType variable to restore it to the pristine state. I suggest that you do it like this: static PRCallOnce pristineCallOnce; PRCallOnce callOnce; .... /* reset */ callOnce = pristineCallOnce; The reason is that in pthreads, the equivalent pthread_once_t needs to be initialized with the macro PTHREAD_ONCE_INITIALIZER. So the equivalent pthread code would look like: static pthread_once_t pristineOnce = PTHREAD_ONCE_INITIALIZER; pthread_once_t once = PTHREAD_ONCE_INITIALIZER; ... /* reset */ once = pristineOnce; I'm changing the platform to "all/all", since we now have a libfreebl dynamic library on all platforms. I'm also changing the title to reflect the real issue which is not just leaking the little BLLib structure but the entire dynamic library. Created attachment 239608 [details] [diff] [review] Fix structure leak, and more 1) Use a static structure for BLLib 2) Unload the freebl dynamic library in BL_Cleanup 3) Add an init callback functionality in the freebl loader called FREEBL_InitCallback . Both ssl and softoken must define this function in order to link. softoken defines an empty function since it already calls BL_Cleanup in C_Finalize . ssl uses this feature to register an NSS shutdown callback with NSS_RegisterShutdown to call BL_Cleanup . Note that I chose arbitrary name and source files for the callback function names and source file locations. Feel free to suggest better ones. 4) reset the vector pointer to NULL in BL_Cleanup . Otherwise, the sequence of C_Initialize, C_Finalize, C_Initialize will crash in the 2nd C_Initialize since vector is non-NULL, and freebl doesn't get reloaded . I found this out with pk11mode - no other NSS program tests this code sequence. We should definitely include Glen's pk11mode program in our QA . However, this change may not be the right fix . I think the following code sequence which is repeated multiple times in loader.c doesn't make sense : if (!vector && PR_SUCCESS != freebl_RunLoaderOnce()) return ; return (vector->p_RNG_RNGInit)(); If vector is non-NULL, we don't execute freebl_RunLoaderOnce , and thus we don't call PR_CallOnce. We just go directly to the next line and dereference vector. However, on platforms without total store order, vector may be non-NULL, but vector->p_RNG_RNGInit may be NULL . The only way to ensure that vector->p_RNG_RNGInit is also non-NULL is to always call freebl_RunLoaderOnce() first. So, the order of the 2 tests needs to be reversed . This will obviously impact performance, but it is the only thing that will make the code work reliably on non-TSO platforms, assuming freebl_RunLoaderOnce and PR_CallOnce are implemented correctly on all platforms (which the later is not : see bug 273649). However, when considering the softoken's use of the loader only, we may decide after careful inspection of the PKCS#11 specification that a thread must first call C_Initialize before any other operations can succeed. Thus, it may be OK to test only vector and not call freebl_RunLoaderOnce() at all, since C_Initialize will always call it, and the program isn't supposed to have any pending PKCS#11 operations before C_Initialize succeeds. Unfortunately, the same does not hold true for libssl's use of the loader in the bypass case. libssl does not have an initialization function like C_Initialize, and thus must truly be able to do a late dynamic loading of freebl . This freebl load from ssl currently happens only in programs that use bypass, the first time a socket makes use of the bypass feature. And it could happen in any number of threads - there is no SSL API requirement to begin doing a handshake first in one thread before other threads can do so . So, to fix this properly for ssl on non-TSO platforms, the loader linked to ssl should always call PR_CallOnce for each function in the table. This would obviously be a performance hit where it is least wanted, but it is needed for correctness, unless we add a libssl initialization function to always do the dynamic freebl load upfront . 5) I tested this fix and it passes all.sh on Solaris and Windows, both OS_TARGET=WINNT and OS_TARGET=WIN95. I also tested it with the pk11mode test program that Glen has been working on. 6) By plugging the leak of freebl3.dll, this fix also causes nspr4.dll to be unloaded from memory if softoken is used in non-NSPR programs . Non-NSPR programs that unload the softoken on Windows will crash in one of the NSPR background threads since they are running code that is no longer in memory. This is not technically a regression - this bug already existed in the NSS 3.10 softoken, and was only accidentally hidden in 3.11 by loading and leaking freebl3.dll, which caused nspr4.dll to be leaked also . It is not confirmed if unloading NSPR on other platforms than Windows with OS_TARGET=WINNT has any ill effects and which ones. I tested with OS_TARGET=WIN95 and it was OK . I couldn't test Unix because pk11mode doesn't have native code to load softoken - it uses NSPR's PR_LoadLibrary, and thus it isn't possible to get NSPR unloaded in pk11mode on Unix to check the effect of NSPR being unloaded . We might choose to treat this NSPR unloading issue in a different bug than this one, as there are multiple possible solutions to it - either a change in softoken to leak NSPR, or a change in NSPR to leak itself. If we choose to fix it in NSPR, the NSPR bug should be made a dependency of this one. If we choose to fix it in softoken, the change can be made in another patch as part of this bug. Wan-Teh, I would like your opinion on this. Comment on attachment 239608 [details] [diff] [review] Fix structure leak, and more The callback related code in this patch is hard to understand. It's a high price to pay for this leak, which is a one-time leak unless an application loads and unloads libsoftokn3.so or libssl3.so repeatedly (not to be confused with calling C_Initialize and C_Finalize on libsoftokn3.so repeatedly). To relinquish control on key3db and cert8.db, it's sufficient to call C_Finialize. You don't need to unload libsoftokn3.so. I would call NSS_RegisterShutdown(SSL_BypassShutdown, NULL) in SSL_ImportFD, which I believe any user of the SSL library must call. Alternatively, call NSS_RegisterShutdown(SSL_BypassShutdown, NULL) in SSL_OptionSet etc. when the SSL_BYPASS_PKCS11 option is enabled. In freebl/blapi.h, we have > typedef struct { > PRLibrary *dlh; > } BLLibrary; > >+static BLLibrary blLib; It's time to eliminate the BLLibrary type and just use PRLibrary. > static BLLibrary * > bl_LoadLibrary(const char *name) Change this to return PRLibrary *. > static PRFuncPtr > bl_FindSymbol(BLLibrary *lib, const char *name) > { > PRFuncPtr f; > > f = PR_FindFunctionSymbol(lib->dlh, name); > return f; > } Delete bl_FindSymbol. Replace by direct PR_FindFunctionSymbol calls. > static PRStatus > bl_UnloadLibrary(BLLibrary *lib) > { > if (PR_SUCCESS != PR_UnloadLibrary(lib->dlh)) { > return PR_FAILURE; > } >- PR_Free(lib); >+ lib->dlh = NULL; > return PR_SUCCESS; > } Similarly, delete bl_UnloadLibrary and call PR_UnloadLibary directly. Wan-Teh, Re: comment 28, I agree with you that it is not strictly required to unload libfreebl if the application only calls C_Initialize and C_Finalize without repeatedly loading/unloading the libsoftokn3 PKCS#11 module. Another possibility would be to unload freebl in a library unload callback (DllMain, etc) as you mentioned in comment 25 . But it is also much more complicated to implement that way. Also, the PKCS#11 specification defines C_Finalize's job to be "clean up miscellaneous Cryptoki-associated resources" in section 6.9, table 8 . I think that libfreebl3.so qualifies as one of those resources to be cleaned up . So IMO, it is appropriate and desirable to unload libfreebl in C_Finalize . I like your suggestions about calling NSS_RegisterShutdown somewhere else for SSL_BypassShutdown are good . I know the callback mechanism in the patch is not elegant. But registering in SSL_OptionSet or SSL_ImportFD runs the risk of having repeated acquisitions of the shutdown function list lock, as opposed to doing it as part of BL_Init which will be done only once. The solution may be to do a PR_CallOnce on the routine that will call NSS_RegisterShutdown, and have SSL_BypassShutdown reset the once object . It is probably better to do it in SSL_OptionSet(SSL_BYPASS_PKCS11, ...) which will be less common than SSL_ImportFD . This way, applications that don't use bypass (most or all of them today) will continue not to unnecessarily load libfreebl from libssl. Re: BLLibrary vs PRLibrary, that change isn't required to fix the leak, but I agree with you that it should be done - it probably should have been done last year when I converted the loader to be 100% NSPR and eliminated the native code . Created attachment 239727 [details] [diff] [review] Incorporate Wan-Teh's feedback 1) This patch no longer has the freebl init callback mechanism that the previous one did. 2) I cleaned up the unnecessary wrapper functions in loader.c 3) I added some error checking for the new functions in ssl, which are now in sslsock.c and invoked only when SSL_OptionSet(sock, SSL_BYPASS_PKCS11, PR_TRUE) or SSL_OptionSetDefault(SSL_BYPASS_PKCS11, PR_TRUE) are called . If servers are using the model socket option with SSL_OptionSet, SSL_BypassSetup should be a one-time call only . Comment on attachment 239727 [details] [diff] [review] Incorporate Wan-Teh's feedback I agree with 99% of this patch, but there are a few things that must be changed. 1. Any time we assert that a pointer is non-NULL, and that assertion is NOT followed by code that tests the same condition as the assert and handles it, we create a klocwork bug report. So in BL_Cleanup, we need a line that reads "if (blLib)" following this new assertion: >+ PORT_Assert(blLib); >+ PR_UnloadLibrary(blLib); >+ blLib = NULL; 2. SSL_OptionSet should treat any non-zero value as true, and not only treat PR_TRUE as true. Tests like this one: >+ if (PR_TRUE == on) { should either be if (on) { or if (PR_FALSE != on) { I stronly perfer the former. Remember, the c compiler doesn't enforce that PRBools must contain only PR_TRUE or PR_FALSE. Remember the principle of least astonishment. Nelson, re: comment 31 1) BL_Cleanup is a function that returns void . I thought about adding an if (blLib) test after the assertion I added, but there just isn't any action I could think of doing for optimized code, and I didn't want to go through the trouble of changing the BL_Cleanup return type. IMO, this is one of the cases where it is appropriate to have an assertion without a test. We should find a way to teach klocwork about that, or mark its report as invalid if it comes. 2) a) Even though the C compiler does not strictly enforce enum values, IMO, any code that assigns values other than PR_TRUE and PR_FALSE to a PRBool, or depends on them, should be changed to use a different type that is explicitly defined to allow other values. That includes some broken code in SSL_OptionSet for certain. SSL options. Unfortunately, this is a public API, so the type argument cannot easily be changed. However, the only legal values defined for SSL_BYPASS_PKCS11 are also the only legal values for PRBool, which are PR_TRUE and PR_FALSE . b) if (on) is the equivalent of if (0 != on) That code is making an assumption that 0 is one of the legal values in the enum, which may or may not be the case. The code shouldn't be aware of the particular values in the enum, and only name the constants. If the values assigned to enum were ever to change, or if 0 is not one of the legal values, if (on) will not do what's intended. c) I could live with if (PR_FALSE != on) I assume you prefer it because it means : if (PR_TRUE == on) || (any_other_value_defined_in_the_future_for_SSL_BYPASS_PKCS11_but_also_illegal_value_for_PRBool == on) IMO, it is not good practice to use other values than PR_TRUE and PR_FALSE . The precedent set for SSL_REQUIRE_NEVER / SSL_REQUIRE_ALWAYS / SSL_REQUIRE_FIRST_HANDSHAKE / SSL_REQUIRE_NO_ERROR should be an exception, one that should not be perpetuated in other SSL options . But if we ever want to do that horrible thing again, then the code that examines the value of "on" in SSL_OptionSet and SSL_OptionSetDefault will have to be changed in order to allow for other values and do other things with them, and that means the code in my patch will have to be changed anyway. We can make that change then, and hopefully never. In the meantime, for SSL_BYPASS_PKCS11, if (PR_FALSE != on) is the equivalent of if (PR_TRUE == on) Julien, many text books on good programming teach against the practice of having two boolean values in a variable that can hold more than two values. They include examples that look approximately like this: if (value == 0) do false case if (value == 1) do true case and then point out: what happens if value == 2? They teach that the proper solution is to pick one value and always test for equality or inequality to that. All values equal to that value are treated one way, all values not equal to it are treated the other. Therefore, for SSL, I have chosen the value zero as the one value. PR_FALSE is zero by definition. Don't ask me to consider that PR_FALSE might be defined to be some non-zero value. Julien, Are you saying that it is appropriate to crash if blLib is NULL when BL_Cleanup is called in non-debug builds? Re: comment 34, there is no crash in non-debug builds that would be avoided by testing blLib . If blLib is NULL, PR_UnloadLibrary will fail and set the PR_INVALID_ARGUMENT_ERROR error code. Thus, blLib being NULL is only an error for BL_Cleanup, but it has no way to report that error anywhere given that it is a void function. Created attachment 240377 [details] [diff] [review] Incorporate Nelson's feedback 1) I decided to still add the check for blLib, because the implementation for PR_UnloadLibrary is in a different module, and may still crash in a different version of NSPR than the one for which I checked the source, however unlikely that may be. 2) I added assertions in loader.c that PR_UnloadLibrary succeeded. I hope no code checking tools will complain about that ! 3) re: comment 33, the previous patch didn't include if (on == false_value ) { do_false_case } if (on == true_value ) { do_true_case } But rather : if (on == true_value ) { do_true_case } else { do_false_case } This means either do_true_case or do_false_case are always executed. You object and recommend using instead : if (on != false_value { do_true_case } else { do _false_case } In both cases, either the true or false code path is always executed. If on is a random_value distinct from true_value and false_value, then my test construct would call do_false case, while yours would call do_true case. Given that 2 (or any random_value not PR_TRUE or PR_FALSE) is an undefined value for PRBool, I don't agree that it is preferrable to call do_true_case over do_false_case for undefined values . I have implemented your suggestion in the interest of getting the patch out quickly. It is no worse and no better than the original code. If you want to be strict, an undefined input value needs to be handled as a third case, an error case, ie: if (on == true_value ) { do_true_case } else if (on == false_value) { do_false_case } else { do_error_case }; I think that would clearly be a better solution than either my original patch or your suggestion, but it is also a change that should be made for all options that take a PRBool, ie. all of them except SSL_REQUIRE_CERTIFICATE, I believe. I only wanted to change the implementation for SSL_BYPASS_PKCS11 in this patch, so I abstained from making that change. Lastly, not everyone commits the numeric value of every enum to memory. That's the compiler's job. I personally find the following to be unreadable code : if (on) When on is a PRBool, this actually means : if ((PRBool) 0 != on) That is confusing unless you know the definition of PRBool . To make sense of this expression requires the reader to check prtypes.h to figure out that PR_FALSE is 0, and that the expression actually means : if (PR_FALSE != on ) IMO, all code using PRBool should be independent of the numeric values assigned to PR_FALSE and PR_TRUE. The only assumption that is reasonable to make which is obvious from the names PR_FALSE and PR_TRUE is that they are distinct values. I realize a lot of existing NSPR code doesn't comply with this, but for new code, it is easy to comply by always spelling out the enum names rather than referring to numeric values such as 0. Comment on attachment 240377 [details] [diff] [review] Incorporate Nelson's feedback Thanks, Julien. r=nelson Comment on attachment 240377 [details] [diff] [review] Incorporate Nelson's feedback r=wtc. Please make the following changes. In freebl/loader.c >+static PRLibrary* blLib; (Optional) in C, it's more common to say "PRLibrary *blLib". In bl_LoadLibrary >- if (NULL == lib->dlh) { >+ if (NULL == lib) { > #ifdef DEBUG_LOADER > PR_fprintf(PR_STDOUT, "\nLoading failed : %s.\n", name); > #endif >- PR_Free(lib); > lib = NULL; > } Delete "lib = NULL;". >+static PRCallOnceType once; (Optional) use a longer variable name for 'once' since now it's in the file scope. In BL_Cleanup >+ PORT_Assert(blLib); >+ if (blLib) { >+ PRStatus status = PR_UnloadLibrary(blLib); >+ PORT_Assert(PR_SUCCESS == status); >+ blLib = NULL; >+ } (Optional but recommended) I would remove the assertion and the if (blLib) test. You set 'vector' and 'blLib' together in freebl_LoadDSO, so it doesn't make sense to assert/test 'blLib' but not assert/test 'vector' in BL_Cleanup. In ssl/sslsock.c >+static PRCallOnceType pristineCallOnce; Add 'const'. >+static PRCallOnceType once; (Optional) use a longer variable name. >- ss->opt.bypassPKCS11 = on; >+ if (PR_FALSE != on) { >+ if (PR_SUCCESS == SSL_BypassSetup() ) { >+ ss->opt.bypassPKCS11 = on; >+ } else { >+ rv = SECFailure; >+ } >+ } else { >+ ss->opt.bypassPKCS11 = PR_FALSE; >+ } (Optional) you can rewrite the code like this: if (PR_FALSE != on && PR_SUCCESS != SSL_BypassSetup()) { rv = SECFailure; break; } ss->opt.bypassPKCS11 = on; >- ssl_defaults.bypassPKCS11 = on; >+ if (PR_FALSE != on) { >+ if (PR_SUCCESS == SSL_BypassSetup()) { >+ ssl_defaults.bypassPKCS11 = on; >+ } else { >+ return SECFailure; >+ } >+ } else { >+ ssl_defaults.bypassPKCS11 = PR_FALSE; >+ } (Optional) similarly, you can rewrite the code like this: if (PR_FALSE != on && PR_SUCCESS != SSL_BypassSetup()) { return SECFailure; } ssl_defaults.bypassPKCS11 = on; Comment on attachment 240377 [details] [diff] [review] Incorporate Nelson's feedback I also recommend that you change "PR_FALSE != on" to "on". When we edit someone else's code, we should imitate his style. If you look the code before and after your changes, you will see a lot of tests like "if (on)" and "if (on && ...)". Now the new "PR_FALSE != on" tests stand out. Created attachment 240394 [details] [diff] [review] Patch as checked in to the trunk Thanks for the reviews, Nelson and Wan-Teh . I removed the unnecessary lib = NULL assignment, and renamed the once variables. Checked in to the trunk : Checking in freebl/loader.c; /cvsroot/mozilla/security/nss/lib/freebl/loader.c,v <-- loader.c new revision: 1.30; previous revision: 1.29 done Checking in ssl/sslsock.c; /cvsroot/mozilla/security/nss/lib/ssl/sslsock.c,v <-- sslsock.c new revision: 1.49; previous revision: 1.48 done There is a conflict on the NSS_3_11_BRANCH in loader.c which I'm trying to resolve. I will also attach the patch for that branch when checked in. Do you need to fix this bug on the NSS_3_11_BRANCH? I'll leave it up to you. Just wanted to remind you that we need to be strict on NSS 3.11 branch checkins. Wan-Teh, Yes, I would like to fix it on the branch as well because a lot of application groups are doing leak checks. It is better if they don't see this leak and constantly come back to complain to us. I found that the conflict in loader.c is due to bug 338798 . This is very confusing because the bug was marked as fixed in 3.11.2, but only parts of the changes went to NSS_3_11_BRANCH, and other parts to the trunk only. The loader patch only went to the trunk. Created attachment 240396 [details] [diff] [review] Patch as checked in to NSS_3_11_BRANCH Checking in freebl/loader.c; /cvsroot/mozilla/security/nss/lib/freebl/loader.c,v <-- loader.c new revision: 1.26.2.3; previous revision: 1.26.2.2 done Checking in ssl/sslsock.c; /cvsroot/mozilla/security/nss/lib/ssl/sslsock.c,v <-- sslsock.c new revision: 1.44.2.5; previous revision: 1.44.2.4 done I'm marking this bug fixed, 5 years after it was first opened, finally ;). But the story does not end here. I opened several bugs for the pre-existing problems found during the creation of this fix : 1) NSPR unloading will cause a crash Since we didn't decide how to proceed to fix this, I created 2 bugs Bug 354613 is about fixing this problem only for softoken . Bug 354614 is about fixing it the problem in NSPR itself. If we fix the later, it would automatically fix the former. We may fix the former first and the later in the future. 2) problem with incorrect PR_CallOnce usage in loader.c Bug 354609 Comment on attachment 240377 [details] [diff] [review] Incorporate Nelson's feedback This patch will result in the real BL_Cleanup (defined in lib/freebl/rsa.c) being called twice at NSS shutdown if SSL_BYPASS_PKCS11 is enabled. Right now the real BL_Cleanup is not reference counted. Fortunately, the only function that BL_Cleanup calls, RSA_Cleanup, has a test to prevent double frees. We may want to add a comment to the real BL_Cleanup to remind ourselves that BL_Cleanup may be called twice at NSS shutdown and any function BL_Cleanup calls must prevent double frees. We also need to verify that the softoken cannot call BL_Cleanup while the SSL library is using freebl, and vice versa. (One scenario I checked is that the application has set the ssl_defaults.bypassPKCS11 option to true and performs a FIPS/non-FIPS mode switch.) My code inspection showed that we're also safe here. If you didn't consider these issues when you wrote or reviewed this patch, you may want to do another review. Another idea is to add a new function BL_Unload. void BL_Unload(void) { if (blLib) { vector = NULL; PRStatus status = PR_UnloadLibrary(blLib); PORT_Assert(PR_SUCCESS == status); blLib = NULL; once = pristineCallOnce; } } BL_Cleanup would be changed back. The softoken would call BL_Unload after calling BL_Cleanup. The SSL library would call BL_Unload instead of BL_Cleanup. Wan-Teh, I did consider the issue and concluded that calling BL_Cleanup twice was safe. The SSL library only needs to unload freebl at NSS_Shutdown time, it does not need to cleanup the freebl internal state which is shared with softoken. This may cause problems in the future depending on the order of execution if BL_Cleanup is ever made a non-void function to check for errors. I like the idea of a separate BL_Unload function and will implement this, so I'm reopening this bug. Created attachment 240536 [details] [diff] [review] Incremental patch. Move unloading code out of BL_Cleanup and into BL_Unload Now that the unloading happens in a separate function, the freebl library is no longer guaranteed to have been previously loaded when BL_Unload gets invoked, as was the case in BL_Cleanup. SSL_BypassShutdown may legitimately call BL_Unload without freebl having ever been loaded, because it gets registered when an SSL socket gets configured for SSL_BYPASS_PKCS11, rather than when libssl actually uses bypass and calls one of the freebl functions (as in attachment 239608 [details] [diff] [review]). If the SSL socket is configured for SSL_BYPASS_PKCS11 and then destroyed, SSL_BypassShutdown gets registered, but freebl is never loaded. So, at NSS_Shutdown time, SSL_BypassShutdown will call BL_Cleanup but blLib will be NULL. Because of this case, I had to remove the PORT_Assert(blLib) which existed in BL_Cleanup from BL_Unload. The upside is that freebl will no longer get loaded and immediately unloaded in SSL_BypassShutdown / BL_Cleanup in this case. It's not necessary to separate BL_Unload from BL_Cleanup. Just change BL_CLeanup to do this: - if (!vector && PR_SUCCESS != freebl_RunLoaderOnce()) - return; - (vector->p_BL_Cleanup)(); + if (vector) + (vector->p_BL_Cleanup)(); then to the unload code in the function after that. This also solves the problem seen where we sometimes load freebl just to clean it up. :) Nelson, Re: comment 49, No, it isn't strictly required currently. However, your patch does not solve the problem that we don't want to call (vector->p_BL_Cleanup)(); from SSL_BypassShutdown . Even though it is currently safe to call this function twice, logically that operation is only supposed to be performed once, from softoken's C_Finalize. attachment 240536 [details] [diff] [review] solves that problem . Comment on attachment 240536 [details] [diff] [review] Incremental patch. Move unloading code out of BL_Cleanup and into BL_Unload I wish we didn't have to add another new public function to blapi.h to solve this. But I don't have a better solution to offer at this time. So, r=nelson Wan-Teh, Nelson, Thanks for the reviews. Nelson, it would be possible not to add a separate function if BL_Cleanup was changed to take an argument such as PRBool unloadOnly . But this would amount to have BL_Cleanup not call the real cleanup function, so I think it would be confused, and thus attachment 240536 [details] [diff] [review] is better, IMO . I checked it in to the tip : Checking in freebl/blapi.h; /cvsroot/mozilla/security/nss/lib/freebl/blapi.h,v <-- blapi.h new revision: 1.25; previous revision: 1.24 done Checking in freebl/loader.c; /cvsroot/mozilla/security/nss/lib/freebl/loader.c,v <-- loader.c new revision: 1.31; previous revision: 1.30 done Checking in softoken/pkcs11.c; /cvsroot/mozilla/security/nss/lib/softoken/pkcs11.c,v <-- pkcs11.c new revision: 1.134; previous revision: 1.133 done Checking in ssl/sslsock.c; /cvsroot/mozilla/security/nss/lib/ssl/sslsock.c,v <-- sslsock.c new revision: 1.50; previous revision: 1.49 done And to NSS_3_11_BRANCH : Checking in freebl/blapi.h; /cvsroot/mozilla/security/nss/lib/freebl/blapi.h,v <-- blapi.h new revision: 1.23.2.2; previous revision: 1.23.2.1 done Checking in freebl/loader.c; /cvsroot/mozilla/security/nss/lib/freebl/loader.c,v <-- loader.c new revision: 1.26.2.4; previous revision: 1.26.2.3 done Checking in softoken/pkcs11.c; /cvsroot/mozilla/security/nss/lib/softoken/pkcs11.c,v <-- pkcs11.c new revision: 1.112.2.19; previous revision: 1.112.2.18 done Checking in ssl/sslsock.c; /cvsroot/mozilla/security/nss/lib/ssl/sslsock.c,v <-- sslsock.c new revision: 1.44.2.6; previous revision: 1.44.2.5 done I'm closing this bug once again. A drawback of adding BL_Unload is that it is tied to the current implementation, which dynamically loads the freebl shared library on all platforms. It is possible to have a different implementation that doesn't need to load the freebl shared library dynamically. For example - if the platform has only one freebl shared library, we could directly link with it. - on Solaris we could link with a default freebl shared library and use the filter/filtee library method. Also the commands such as bltest and fipstest that use the freebl API would also need to call BL_Unload. (Right now they don't even call BL_Cleanup.) If we don't add BL_Unload, we need to add a comment that explains the rules of calling BL_Cleanup when there are more than one users of the freebl shared library. It must be safe to call BL_Cleanup more than once. Created attachment 241391 [details] [diff] [review] Patch for Mac OS X There are two kinds of shared libraries on Mac OS X. 1. Dynamic shared library (.dylib). We usually link "statically" with a dynamic shared library even though it can also be loaded at run time. All of our shared libraries except libnssckbi.dylib are dynamic shared libraries. 2. Loadable bundle. This can only be loaded at run time. NSPR 4.6.x or earlier has a bug in PR_UnloadLibrary. If the shared library is a dynamic shared library, PR_UnloadLibrary incorrectly returns PR_FAILURE (bug 351609). This bug has been fixed on the NSPR trunk (NSPR 4.7 pre-release). Since libfreebl3.dylib is now built as a dynamic shared library, we get an assertion failure in BL_Unload at loader.c:996. There are three solutions. 1. comment out the assertion for Mac OS X. 2. backport the NSPR fix to the NSPR_4_6_BRANCH. This requires releasing NSPR 4.6.4 and getting mozilla.org drivers' approval for MOZILLA_1_8_BRANCH checkin (because MOZILLA_1_8_BRANCH is kept in sync with NSPR_4_6_BRANCH right now). 3. build libfreebl3.dylib as a loadable bundle. I am nervous about making such a change in a patch release even though my testing shows this change works. This patch contains solutions 2 and 3. Let me know which solution (1, 2, or 3) you prefer. Wan-Teh, I think we should keep the assertion in the loader because it detects a real bug. I would prefer if you just backported the fix from NSPR 4.7 to NSPR_4_6_BRANCH . We can release NSPR 4.6.4 when NSS 3.11.4 or 3.11.5 ships . There are a few other bugs that need to be fixed in NSPR 4.6.4 as well that I'm working on. Comment on attachment 241391 [details] [diff] [review] Patch for Mac OS X Either of the changes in this patch is acceptable to me, but I prefer the prlink.c solution . Comment on attachment 241391 [details] [diff] [review] Patch for Mac OS X OK, I checked in the NSPR change in this patch on the NSPR_4_6_BRANCH for NSPR 4.6.4. See bug 351609. Comment on attachment 241391 [details] [diff] [review] Patch for Mac OS X I checked in the NSS change in this patch on the NSS trunk (NSS 3.12) only. libfreebl3.dylib should be built as a loadable bundle just like libnssckbi.dylib because it is only dynamically loaded. We don't generate the import library freebl3.lib for freebl3.dll on Windows for the same reason. Comment on attachment 241391 [details] [diff] [review] Patch for Mac OS X (In reply to comment #59) > (From update of attachment 241391 [details] [diff] [review]) > > I verified that bltest tests now run in 32 bit mode on MAC OS X 10.4.8 on 2x2.66 GHz Dual-Core Intel xeon. The bltest runs in 64 bit mode except all of the RC4 tests fail. in both the 32 bit and 64 bit builds I found failures with tstclnt: error looking up host: A directory lookup on a network address has failed. These errors may be due to my environment I am using DHCP, and will look into this. I will open a MAC OS X 64 bit RC4 bug and also a bug for 64 bit pk11mode failure. comment 59 says the fix was backed out, so I am reopening this bug. If it really still is fixed, please explain how that can be with the fix backed out. The patch (attachment 241391 [details] [diff] [review]) contains two independent changes. Either one is sufficient to fix the PR_UnloadLibrary failure on Mac OS X. So it is fine to back out one of the changes because it introduced a new problem with the test program bltest. does this fix unload a dll on windolls? if a dll is unloaded on windows, this may be exploitable via any crash like null deref and exceptions handlers. in a bug dveditz claimed you *never unload dlls*. Georgi, NSS has a way to unload shared libraries on all platforms, and (IINM) FireFox has UI with which to do it. If you have an issue with this, please file a new bug. (In reply to comment #64) > Georgi, NSS has a way to unload shared libraries on all platforms, > and (IINM) FireFox has UI with which to do it. If you have an issue > with this, please file a new bug. > well, i don't do windows, so i can't file a bug. just wanted to point out that *iirc* dveditz claimed in a bug i can't access anymore firefox doesn't unload DLLs. Article: Exploiting the Otherwise Non-exploitable on Windows From the article: To gain control of the top-level UEF, Fx and Gx will need to be deregistered asymmetrically. To accomplish this, DLL #1 must be unloaded before DLL #2.
https://bugzilla.mozilla.org/show_bug.cgi?id=115951
CC-MAIN-2016-44
refinedweb
6,926
65.12
OPC UA Client The origin can also browse all available nodes to provide the node details that you need to configure the origin. When you configure the OPC UA Client origin, you specify connection information and tag information to associate with each client request. You select the processing mode for the origin and specify the NodeIds for the nodes you want to use. You can use one of several different methods to provide the NodeIds. You can configure channel properties, such as the maximum chunk or message size. You can optionally configure the security policy that you want to use, related TLS properties, and an optional client private key alias. Processing Mode - Polling - The origin polls the OPC UA server at regular user-defined intervals, returning the current status of every specified node. - In polling mode, each record contains data from each specified node as a field. As soon as the origin generates a record, it passes the record to the pipeline to avoid delays in processing. - For example, say you set the OPC UA Client to poll the server every minute, and you specify five NodeIds. When the pipeline runs, the origin generates a record every minute, with the status of each of the five NodeIds in each record, regardless of whether any changes occurred since the last poll. - Subscribe - The origin subscribes to the specified nodes. The OPC UA server sends an update each time a change occurs with one of the specified nodes. When node changes occur, the server sends each change to the origin separately. - In subscribe mode, each record contains a single node change. As soon as the origin generates a record, it passes the record to the pipeline to avoid delays in processing. - For example, say you set the OPC UA Client origin to subscribe to ten nodes. After you start the pipeline, the pipeline sits idly until the OPC UA server sends data about a change to a subscribed node. - Browse nodes - Browse nodes mode is a tool to aid pipeline development. In browse nodes mode, the origin connects to the OPC UA server to retrieve all available node details, such as the node identifier and namespace index. - This mode provides easy access to the node details that you need to configure the NodeIds in the origin. - You can use browse nodes mode in data preview to view node details and then configure the origin. Or, you can run a pipeline in browse nodes mode to write node details to a file. You can alternatively use external methods to retrieve node details from the OPC UA server. - For example, say previewing your OPC UA server in browse nodes mode returns the following information: You can then use this information to configure the nodes that you want to subscribe to, as follows: Providing NodeIds - Manual - Manually enter the NodeId information. Use this method when you have a specific set and low volume of nodes that you want to use. You can use simple or bulk edit mode. - File - Provide a file of NodeId information. Use this method when you have a relatively static set of nodes that you want to use. You can update the file as needed, but will need to restart the pipeline to capture the latest nodes. - The file must be in a directory local to Data Collector. By default, the origin expects you to secure the information in a runtime resource file. - Enter the NodeId information in the file using the following JSON format: [ { "identifierType": "<NUMERIC | STRING | UUID | OPAQUE>", "namespaceIndex": <namespaceIndex>, "field": "<field name>", "identifier": "<node identifier>" }, { "identifierType": "<NUMERIC | STRING | UUID | OPAQUE>", "namespaceIndex": <namespaceIndex>, "field": "<field name>", "identifier": "<node identifier>" } ] - Tip: This is the same format used when entering node information manually in bulk edit mode. To verify the format, you can enter two NodeIds manually in simple format, then switch to bulk edit mode. - Browse Nodes - Specify a root NodeId, allowing the origin to browse for all available nodes under the root node. Use this method when you want to process data from a dynamic set of nodes that are under a single root node. - When you browse nodes, you specify an refresh interval. The refresh interval indicates how long the origin should wait before browsing again for an updated list of nodes to process. Security - Basic128Rsa15 - Basic 256 - Basic256Sha256 - None For more information about OPC UA security policies, see the OPC UA documentation. When using a security policy, you must configure the associated TLS properties. When necessary, you can specify a private key alias. Configuring an OPC UA Client Origin Configure an OPC UA Client origin to process data from an OPC UA server. - In the Properties panel, on the General tab, configure the following properties: - On the OPC UA tab, configure the following properties: - On the NodeIds tab, select the NodeId Fetch Mode.For more information about the different ways you can provide node information, see Providing NodeIds. - When using the Manual mode, use simple or bulk edit mode to enter the nodes that you want to use. Click the Add icon to add additional nodes. - When using the File mode, configure the following property: - When using Browse mode, configure the following properties: - On the Channel Config tab, you can configure the following properties: - On the Security tab, optionally configure the following properties:
https://streamsets.com/documentation/datacollector/latest/help/datacollector/UserGuide/Origins/OPCUAClient.html
CC-MAIN-2019-39
refinedweb
886
60.85
After a little back and forth, we realized that our web server lacked IPv6 support. We weren't listening to the requests made on IPv6 A few weeks ago one of our readers reached out on our support channel to tell us that our site wouldn't work for them no matter what. It simply wouldn't load on their browser (Chrome). After a little back and forth, we realized that our web server lacked IPv6 support. We weren't listening to the requests made on IPv6. If you don't know already IPv6 stands for Internet Protocol version 6, and it is intended to replace IPv4 network which is what our original web, as-is, has had a run on for the last two decades. Google is out with stats on IPv6 adoption lately (as of October 2018) and the numbers are rising steadily. Over twenty five percent of the Internet is now using IPv6 and from the graph it appears that well over half would be onboard in the coming few years. More importantly, a % of those who are on IPv6 already are exclusively so and cannot see your content if the website isn't configured to serve on the new protocol. (Updated per tweet.) Through this quick post we will configure our web app/site for the new protocol. This is how I set it up on Bubblin. The first step is to add an AAAA Record on your DNS Manager. We needed a public IP on IPv6 so I made a request to our hosting provider (Linode) to provide me with one. Once they responded, I went ahead and added our public IPv6 on the Remote Access Panel, like so: I added the ugly looking records with IPv6 option (bottom three) as screenshot-ted above. Since changes to DNS take some time to percolate we'll leave the DNS manager here and focus on configuring our app-server nginx for IPv6 next. Now Bubblin is delivered on a strict https protocol so we are effectively a permanent on redirecting all our traffic (from http →) to https. We use the Letsencrypt and Certbot to secure Bubblin with industry-grade SSL. Shown below is an excerpt from our nginx.conf.erb on production: … # $ sudo vi ~/.etc/nginx/sites-available/bubblin_production # add listen [::]:80 ipv6only=on; for requests via insecure protocol (http). server { listen 80; listen [::]:80 ipv6only=on; server_name <%= fetch(:nginx_server_name) %> www.<%= fetch(:nginx_server_name) %>; rewrite ^(.*) permanent; } # add listen [::]:443 to listen for requests over IPv6 on https. server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name www.<%= fetch(:nginx_server_name) %>; # Other SSL related stuff here. rewrite ^ permanent; } # add listen [::]:443 ssl http2; on the final server block. server { // Plenty of nginx config here. listen 443 ssl http2; # managed by Certbot listen [::]:443 ssl http2; # Add HSTS header with preloads add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"; } Notice the listen [::]:80 ipv6only=on; directive inside the server block and the HSTS directive at the bottom. To test your nginx configuration: $ sudo nginx -t // success $ sudo nginx -s reload Hoping that your DNS percolated by the time the nginx was cofigured (sometimes it may take up to 24 hours), now it is time to test if the website is available on IPv6: $ curl -6 The HTML page from your site should be served correctly. That's all folks. Hi, I'm Marvin Danig, CEO and Cofounder of Bubblin. You might want to follow and connect with me on Twitter or Github? P.S.: Reading more books on web will help your attention span. ✅ 30s ad ☞ AJAX using Javascript and JQuery + 2 Projects ☞ Modern JavaScript: Building Real-World, Real-Time Apps ☞ Essentials in JavaScript ES6 - A Fun and Clear Introduction ☞ JavaScript the Basics - JavaScript for Beginners ☞ Beginning ES6, The Next Generation of JavaScript AWS In this article, you'll see a simple example of how to implement server-side pagination in React with a Node.js backend API The example contains a hard coded array of 150 objects split into 30 pages (5 items per page) to demonstrate how the pagination logic works. Styling of the example is done with Bootstap 4.Running the React + React (React) Pagination ComponentClient-Side (React) Pagination Component // React client needs to do is fetch the pager information and current page of items from the backend, and display them to the user. Below is the React home page component ( /client/src/HomePage/HomePage.jsx) from the example. The loadPage() method determines the current page by checking for the value in the url query params or defaulting to page 1, then fetches the pager object and pageOfItems for the current page from the backend API with an HTTP request. The componentDidMount() React lifecycle hook kicks off the first call to loadPage() when the React component loads, then the componentDidUpdate() React lifecycle hook calls loadPage() when the page is changed with the pagination links. The component renders the current page of items as a list of divs, and renders the pagination controls using the data from the pager object. Each pagination link sets the page query parameter in the url using the Link React Router component with the search parameter. The CSS classes used are all part of Bootstrap 4.3, for more info see. import React from 'react'; import { Link } from 'react-router-dom'; class HomePage extends React.Component { constructor(props) { super(props); this.state = { pager: {}, pageOfItems: [] }; } componentDidMount() { this.loadPage(); } componentDidUpdate() { this.loadPage(); } loadPage() { // get page of items from api const params = new URLSearchParams(location.search); const page = parseInt(params.get('page')) || 1; if (page !== this.state.pager.currentPage) { fetch(`/api/items?page=${page}`, { method: 'GET' }) .then(response => response.json()) .then(({pager, pageOfItems}) => { this.setState({ pager, pageOfItems }); }); } } render() { const { pager, pageOfItems } = this.state; return ( <div className="card text-center m-3"> <h3 className="card-header">React + Node - Server Side Pagination Example</h3> <div className="card-body"> {pageOfItems.map(item => <div key={item.id}>{item.name}</div> )} </div> <div className="card-footer pb-0 pt-3"> {pager.pages && pager.pages.length && <ul className="pagination"> <li className={`page-item first-item ${pager.currentPage === 1 ? 'disabled' : ''}`}> <Link to={{ search: `?page=1` }}First</Link> </li> <li className={`page-item previous-item ${pager.currentPage === 1 ? 'disabled' : ''}`}> <Link to={{ search: `?page=${pager.currentPage - 1}` }}Previous</Link> </li> {pager.pages.map(page => <li key={page} className={`page-item number-item ${pager.currentPage === page ? 'active' : ''}`}> <Link to={{ search: `?page=${page}` }}{page}</Link> </li> )} <li className={`page-item next-item ${pager.currentPage === pager.totalPages ? 'disabled' : ''}`}> <Link to={{ search: `?page=${pager.currentPage + 1}` }}Next</Link> </li> <li className={`page-item last-item ${pager.currentPage === pager.totalPages ? 'disabled' : ''}`}> <Link to={{ search: `?page=${pager.totalPages}` }}Last</Link> </li> </ul> } </div> </div> ); } } export { HomePage }; The tutorial code is available on GitHub In /> </parent> > Welcome to React Next Landing Page, built with React, Next Js, Gatsby Js & Styled Components. NO jQuery!, We created reusable react components, and modern mono repo architecture, so you can build multiple apps with common components. You can use these landing for your react app. It’s super easy to deploy, we have provided complete firebase integration with it. You can host your next app into firebase along with other hosts like Now —
https://morioh.com/p/8de8165f4624
CC-MAIN-2019-47
refinedweb
1,198
56.86
). T Thanks, it worked. B @metalsadman Yup you’re right, needed an update, Thank you! 👐🏼 K @metalsadman Thank you for your quick answer, but the problem was on another place. It was connected rather with Vue.js than with Quasar Framework. Here is my problematic code (very simplified): The root problem was that attributes set later (they were not initialized in object { _id: 22 }) are missing Vue’s reactiveGetters and reactiveSetters. Here’s my solution: S and this version has the title expand to take the extra space (instead of having whitespace at the bottom of cards) This is an old thread. Quasar v0.14 has @keydown and @keyup events emitted, so no longer needing a @enter event. Always check documentation. Point @keydown to a function with one parameter (which will be the event itself), so check if it’s Enter key there. After including axios (“axios”: “^0.16.1”,…) like import axios from ‘axios’ the error occures. Now I have manually added the missing modules now
https://forum.quasar-framework.org/user/tonyskulk/topics
CC-MAIN-2021-25
refinedweb
167
67.55
There");} You also need to add: using System.Threading; Next you need to open the Program.cs file and change the application run to read: static void Main(){ Application.EnableVisualStyles(); Application.SetCompatibleText Rendering. Exploratory testers often say that a vague error message along the lines of "Something is wrong - I have to close now" is a sign that it was generated by an exception and a precise error message e.g. "Price cannot be negative" is a a sure sign that it was generated by a specific conditional test. This gives you a very clear idea of the state-of-the-art in exception handling. We can do better. Careful design of program logic and using exceptions only when absolutely essential is good advice. Exploratory Software Testing C# Design and Development: If you would like to be informed about new articles on I Programmer you can either follow us on Twitter, on Facebook or you can subscribe to our weekly newsletter. Each provides a full list of what's new each week - usually five hot book reviews, five thought-provoking articles and five not to be missed news items - in a compact click-to-read form.1636414> <ASIN:1430225491> <ASIN:0321718933> <ASIN:0470467274> <ASIN:0470563486> <ASIN:1430225254> <ASIN:0470127902> Hands on C# Mike James delves deep into the topic of exception handling Take Exception to everything BY MIKE JAMES. Try-catch. For example: catch(DivideByZeroException). For example: catch (DivideByZeroException myException) MessageBox.Show("division by zero isn't allowed " + myException.Source); catch (OverflowException) MessageBox.Show("Number too big"); finally MessageBox.Show("Something might be. Using The need to clean up after an exception is so common a requirement that there is another way to do it for objects that implement the IDisposable Bitmap B = new Bitmap(10, 10); MessageBox.Show(B.Size.ToString()); if (B != null) { ((IDisposable)B).Dispose(); } which can also be expressed with using: Bitmap B; using( B = new Bitmap(10, 10)) What is more you can have multiple objects within the using statement and create objects within it: using( Bitmap B = new Bitmap(10, 10)) If a finally block isn't about doing things that always have to be done how do you handle any exceptions that don't have specific catch clauses? The solution is to include a catch all clause at the end of the all of the specific catch clauses. For example catch { MessageBox.Show("something else wrong"); If none of the other catch clauses handle the exception then the final catch and the finally are executed. If you place the general catch clause higher in the list of catch clauses then you will see a compile time error message to the effect that you can't have specific catches after a general one. This is an example of the more general rule that more specific catch clauses have to come before more general ones. The reason is that as soon as a catch clause matches the type specification the try catch is complete and no more catch clauses are checked. So a more general catch clause occurring before a more specific one means that the more specific one will never be used. What is an exception? Most exceptions are generated by the CLR, i.e. the runtime. In many ways the whole idea of an exception is to provide a way for the runtime to tell the application that something is wrong. You can think of the archetypal exception as being either hardware generated - a disk error - or one stage removed from being a hardware problem - shortage of memory. The whole point is that these are external conditions or errors which have nothing much to do with the logic of your program. Why should you build allowances for hardware faults into your program logic? It is in this sense that an exception is exceptional. What has happened in practice is that this idea has drifted into a weaker idea of "things that don't happen very often". For example, is a divide by zero error something exceptional or should your program logic expect it, test for it and deal with it? You could just as easily add an if statement in front of the division as surround the code with a try catch. On the other hand you would be hard pressed to consider a statement such as: if(!file error) {} or any condition that mentioned the hardware state, as reasonably part of your program logic - but we do regard it as reasonably as part of an exception handling routine. So despite the idea that exceptions are a way for the lower level hardware and software to communicate to our application any difficulties they might be having exceptions have become the norm as a way of signalling almost any sort of error. As a result the framework and third party classes raise exceptions and so can any application you care to write. To give you some idea of how arbitrary the division between error, exception and perfectly normal behaviour is - consider the checked keyword. When you are doing integer arithmetic the chances are good that you will generate an overflow at some point. Is this an exception? If you surround the arithmetic by checked() then an overflow does raise an exception but surrounding it by unchecked() ignores any overflow. Notice that checking for arithmetic overflow using prior if statements would be a difficult task, so in this case the distinction is between perfectly normal code and an exception. Notice that the reason why overflow isn't always an exception is that low level algorithms often use integer arithmetic to do arithmetic modulo a power of 2. This is a very grey area and perhaps a high level language really shouldn't reflect what goes on under the bonnet in quite this way. The good news is that by default overflow produces an exception which is how it should be. The checked keyword gives us the best illustration of the distinction between an error and an exception: · An error is something that you could easily check for · An exception is something that is difficult to check for before the instruction that actually fails. Notice that this definition depends on what facilities the high level language gives you to check for error conditions. For example, all we have to do to convert numeric overflow from an exception to an error is provide an IsTooBig predicate. Custom exceptions To raise an exception all you have to do is define a custom exception type or use one of the existing types and simply use throw exception type. For example: throw new NullReferenceException(); You can also use throw within a catch block to pass the exception on to a higher level handler - if there is one. It is always worth being clear that throwing an exception from within a handler means that you are giving up on the exception and allowing some other part of the system deal with it. The possible consequence of this is that it isn't handled and a runtime error occurs. For example: throw myException; passes the exception on to some other handler. Of course if you want to throw a completely customised exception you need to create your own type that derives from another exception class. You also have to do the work in constructing an object of the type. It's accepted practice to end all exception classes with Exception so, for example, if you wanted to raise a custom exception "TooLongException", in case a task is taking too long you would define something like: class TooLongException : Exception public TooLongException(){} public TooLongException(string message):base(message){} public TooLongException(string message, Exception inner):base(message,inner){} Exceptions usually have three constructors but you don't have to follow this rule. The default constructor simply returns a new object of the type and allows the exception to be raised simply as: throw new TooLongException(); The second constructor allows a message to be included when the exception is raised: throw new TooLongException( "much much too long"); and finally the third constructor that allows a thrown exception to be passed on as part of the exception: NullReferenceException innercause= new NullReferenceException(); "much much too long", innercause); Of course in a real example the inner exception would have been generated elsewhere and passed up to the catch handler that is throwing the TooLong exception. This allows the next handler to determine why the TooLong exception might have occurred. Considering the building of a custom exception brings us to the subject of what information should be packaged into an exception object. There are some standard properties that are inherited from Exception but you can add as many extras as seem appropriate. The most commonly used are: · the message property to pass on an error message to the user · Helplink which is the URL of a helpfile · InnerException which stores any exception passed to the exception · Source and TargetSite which give the object and function that caused the exception. More complicated is the HResult property which is used to return the standard Windows error code if the exception was caused by an API. The most complicated piece of information that an exception object contains is the stack trace - a string containing a list of what method called what to get you to the location where the exception occurred. You can add to the properties and methods of the exception class but most programmers don't and there are some good reasons why not. The state problem Mention of the stack trace brings us to the central problem of exception handling. Exceptions in C# and Windows in general are "structured" in the sense that they are nested according to the call stack. If a method raises an exception and doesn't have an exception handler then the system looks back through the call stack to find the first method up the chain that does. This is perfectly reasonable behaviour and in fact without out it exception handling would be a waste of time. For example, suppose you write a try block: do things and a set of carefully constructed catch blocks that handle every possible exception. Now suppose that "do things" makes method calls. If any exception that occurred within a method that didn't handle it wasn't passed up the call stack then our efforts at comprehensive exception handling would be nonsense. However exception handling has some deep problems if you are planning to allow the program to continue. In practice most programmers simply use exception handling as a way of allowing the application to make a soft landing rather than crash. Typically they provide a more informative, or more friendly, error message that tells the user who to contact and what information to provide. Of course this is all Windows dressing because if the exception really is an exception and not an error state that the programmers were too lazy to detect then there will be nothing that can be done to correct it - other than improve or fix the hardware. Suppose, however, that you are trying to use exceptions to allow an application to continue. This is a much more difficult problem. The first idea you need to accept is exception safe code. If the code in the try block raises an exception then you have to ask what side effects are there when you reach the catch block. Side effects are the change in state that the try block has implemented up to the point that it raised the exception. For example, if the try block opened a file then it is still open. Side effects include deeper problems such as memory leaks and orphaned resources in general. Notice that the try block has its own scope so if you stick to the rule that the code in a try block will not access anything not created in the block it is almost (but not) guaranteed to be exception safe. Notice that the very act of propagating the exception up the call stack of necessity unwinds the state back to the method that handles the exception. That is, if you have a catch block that is triggered by an exception thrown by a method called within the try block then all trace of the internal state of that method has gone. This doesn't make cleaning up after the method an attractive proposition. Basically if your aim it to continue the application you need to handle exceptions at the lowest level and avoid unwinding the call stack. A two-step process The result;: b=2; result = a/b; but now the second attempt isn't protected. You could use: try { b = 2; result = a / b; } catch (DivideByZeroException) { result = 0; }; result = 0; }. The ultimate custom exception handler There"); static void Main() Application.EnableVisualStyles(); Application.SetCompatibleTextRendering. Careful design of program logic and using exceptions only when absolutely essential is good advice. Mike James has over 20 years of programming experience, both as a developer and lecturer, and has written numerous books including Foundations of Programming. His PhD is in computer science.
http://www.i-programmer.info/programming/c/1034-take-exception-to-everything.html?start=4
CC-MAIN-2017-22
refinedweb
2,183
57.81
In a recent episode of Numb3rs, the television drama sometimes involving the theoretical use of mathematics to solve real world crime problems "Charlie" the Mathematician gave a public seminar demonstrating some math's puzzles. One of those demonstrated was the "Monty Hall Paradox" Wikipedia describes this puzzle as: "The Monty Hall problem is a puzzle involving probability loosely based on the American game show Let's Make a Deal. The name comes from the show's host, Monty Hall. A widely known, but problematic (see below) statement of the problem is from Craig F. Whitaker of Columbia, Maryland in a letter to Marilyn vos Savant's September 9, 1990, column in Parade Magazine (as quoted by Bohl, Liberatore, and Nyd question in a nutshell is, after the host removes one of the wrong doors, is there any advantage to changing your answer. Is it now a 50/50 chance? Or does staying or changing your selection increase OR decrease your odds of winning? I wrote a simple console application that run 1 million iterations of The Monty Hall gameshow segment. The proof carries out the following process: Did the contestant win after switching? Result when SWITCHING your choice after the host has removed one incorrect answer: Wins: 666576 Losses: 333424 Total: 1000000 If you initially think that there is no advantage in switching doors, or your probability is 50/50, or your probability remains at 1 out of 3 - you are in great company, but still INCORRECT. Switching your initial choice after the host has disclosed one incorrect door DOUBLES your chances of winning to 2 out of 3. I must admit, until I wrote this proof I wasn't exactly confident. It clearly demonstrates that approximate 66% of the time you would WIN if you SWITCHED your answer, and only win 33% of the time if you kept your initial choice. I was also amazed at the almost perfect ratio 66.6 / 33.3 which matched the theoretical 2 out of 3 win/loss prediction. Kudos to the authors of the built-in .NET Random class. I didn't initially believe the results because the numbers worked out so nice, but I've double-checked my code and made certain that I've used random decisions for all moving parts. I'd be interested if anyone can point-out anything wrong with my code though. using System; namespace MontyHall { class Program { static void Main(string[] args) { // local variables to hold result and random generator Random random = new Random(); int wins = 0; int losses = 0; // iterate our MontyHall routine for (int i = 0; i < 1000000; i++) { // changeDoor: // 0 = no, the contestant stays with their initial pick, // after the offer to switch after // the disclosure of a "Goat" door // 1 = yes, the contestant chose to switch doors after // the disclosure of a "Goat" door //int changeDoor = 0; int changeDoor = 1; // calculate whether or not the contestant wins the Car - // random pickedDoor: 0, 1 or 2 for the door // the contestant initially picked // changeDoor: 0 = no, 1 = yes. The contentment decides // to change their selection after disclosure of a "Goat" door // random carDoor: 0, 1 or 2 for the door containing the car // random goatDoorToRemove: 0 = leftmost Goat door, // 1 = rightmost Goat door. Monty discloses // one incorrect door, this value indicates which one. bool result = MontyHallPick(random.Next(3), changeDoor, random.Next(3), random.Next(1)); if (result) wins++; else losses++; } Console.WriteLine("Wins: {0} Losses: {1} Total: {2}", wins, losses, wins+losses); Console.ReadLine(); } public static bool MontyHallPick(int pickedDoor, int changeDoor, int carDoor, int goatDoorToRemove) { bool win = false; // randomly remove one of the *goat* doors, // but not the "contestants picked" ONE! int leftGoat = 0; int rightGoat = 2; switch (pickedDoor) { case 0: leftGoat = 1; rightGoat = 2; break; case 1: leftGoat = 0; rightGoat = 2; break; case 2: leftGoat = 0; rightGoat = 1; break; } int keepGoat = goatDoorToRemove == 0 ? rightGoat : leftGoat; // would the contestant win with the switch or the stay? if (changeDoor == 0) { // not changing the initially picked door win = carDoor == pickedDoor; } else { // changing picked door to the other door remaining win = carDoor != keepGoat; } return win; } } } General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/cs/montyhall.aspx
crawl-002
refinedweb
689
58.32
After: Deposit in a bank would require a change in different spreadsheets, the traditional way requires locking all these spreadsheets changing a value then unlocking them Whereas in functional programming, in the saga, you can break the transactions into small steps, and if there was any error along the way you can reverse all the steps that you’ve taken. Now, to manage all this, saga uses a process manager to keep track of what’s been done and what other actions need to be taken. What is Redux-Saga? Redux-saga is a redux middleware library, that is designed to handle side effects in your redux app nice and simple. It achieves this by leveraging an ES6 feature called Generators, allowing us to write asynchronous code that looks synchronous, and is very easy to test. Why use Redux-Saga? - Facilitates side-effects (API calls, database transactions) in your redux application. - Advanced tool (forking processes, yielding thread) covers almost all the real-world cases. - More sophisticated than redux-thunk and provides way more features than redux-thunk. Configure and installation of Redux saga Step 1: Initially, you have to install the redux-saga using the command given below: sudo npm install redux-saga –save Step 2: Let’s go to our getstore.js file. Here, first, we import redux-saga middleware utility from redux-saga. Import it we can say that saga middleware is equal to the invocation of the create saga middleware. So here in the given example, it shows how we can create a saga-middleware and add the saga-middleware to the middleware chain. You may add the log here to check everything is working fine or not. import createSagaMiddleware from ‘redux-saga’; export const getStore = () => { const sagaMiddleware = createSagaMiddleware(); const middlewares = [sagaMiddleware]; } Now, we will create a directory named sagas under the src folder, and create one file in it, you can name it anything you like but here in my example, I set it as current-directory-saga.js. In this file, we will create a function generator and in which we are pausing the code for a certain period of time. For that, we used delay which is another utility for redux-saga and we add yield here which takes care of the statement in a way that it will not jump to the next line of code until it completes, i.e, it can delay a subsequent code and can work only in generator functions. eg: import {delay} from ‘redux-saga’; export function* currentUserSaga() { while(true) { yield delay(1000); console.log(‘user saga loop’); } } Now, we will create another file name index.js file under the saga directory here in this file we will import saga file which we have created earlier like we saw before. import {currentUserSaga} from ‘./create-user-saga’; In the next step, we will create another file named initSaga.js in the root directory src. This file will contain a method that takes the saga middleware and runs all the saga through it. So here, we will import all the saga by importing index.js file in which we create a function name initSaga like shown below: import * as sagas from ‘./sagas’; export const initSagas = (sagaMiddleware) => { Object.values(sagas).map(sagaMiddleware.run.bind(sagaMiddleware)); } So it will take all the values of exported sagas and then each of them calls the saga middleware and making sure you keep the scope correct. In the last step, we will again go to getStore.js file and import initSaga and call at the bottom of this method. import {initSaga} from ‘./initiSagas’; export const getStore = ()=>{ const sagaMiddleware = createSagaMiddleware(); const middleWares = [sagaMiddleware, thunk]; if (getQuery()[‘logger’]) { middleWares.push(logger)} const composables = [applyMiddleware(…middleWares)] const enhancer = compose( … composables ); const store = createStore( reducer, defaultState, enhancer ); console.log(‘sagaMiddleware’); initSagas(sagaMiddleware); return store; }; NOTE: Saga can only be initialized when saga-middleware is placed inside the store. I would come up with another blog in the coming days and talk about some Redux-Saga side effects. I hope you were able to grasp something from this article. Thanks for Reading…
https://blog.knoldus.com/create-your-saga-with-redux-saga/
CC-MAIN-2020-40
refinedweb
682
54.63
Health Issue: The Debate on Vaccinations Info: 3074 words (12 pages) Essay Published: 29th Nov 2017 in Nursing Current Trend in Health Care: MMR Vaccines - Brittany Core Nothing is more heartbreaking than a young life that has been taken by the infection of a killer disease. Diseases kill children every year. Many diseases are bacteria, inhaled by the victim, infecting several areas of the body. The bacteria lives and grows while its victim dies. Other diseases are caused by viruses; a non-living infection that attacks the immune system and other living cells. Children are much more vulnerable to disease because of their weak immune systems. They’re weak because they have not lived life long enough to build immunities for such infections. However, in medicine, there are always risks. So, parents argue that vaccinations should not be mandatory for children. For many years, immunizations have continued to keep the spread of disease low. They have lowered the amount of deaths and saved lives. On the other hand, what if it was against families’ religion or they say their child is a “tough one” and they can handle the severe symptoms of disease? Those are the arguments made by people who believe that vaccines should not be mandatory for children. Are those arguments strong enough to counter all the children’s lives that have been saved by intelligent medicine? Unless America wants to unleash the beast of infectious killers, vaccinations for children should be mandatory to keep it from spreading and eventually killing. Research shows that the benefits of vaccination outweigh the risks because vaccines can prevent serious illness and disease in individuals, vaccinations can also prevent widespread outbreaks of diseases in populations and the side effect of vaccinations, though occasionally serious, are very rare. If you need assistance with writing your essay, our professional essay writing service is here to help!Find out more In 1912, measles became a nationally notifiable disease in the United States, requiring U.S. healthcare providers and laboratories to report all diagnosed cases (Measles History, 2014). In the first decade of reporting, an average of 6,000 measles-related deaths were reported each year (Measles History, 2014). In the decade before 1963 when a vaccine became available, nearly all children got measles by the time they were 15 years of age (Measles History, 2014). It is estimated 3 to 4 million people in the United States were infected each year. Also each year an estimated 400 to 500 people died, 48,000 were hospitalized, and 4,000 suffered encephalitis (swelling of the brain) from measles (Measles History, 2014). In 1954, John F. Enders and Dr. Thomas C. Peebles collected blood samples from several ill students during a measles outbreak in Boston, Massachusetts (Measles History, 2014). They wanted to isolate the measles virus in the student’s blood and create a measles vaccine. They succeeded in isolating measles in 13-year-old David Edmonston’s blood (Measles History, 2014). In 1963, John Enders and colleagues transformed their Edmonston-B strain of measles virus into a vaccine and licensed it in the United States (Measles History, 2014). In 1968, an improved and even weaker measles vaccine, developed by Maurice Hilleman and colleagues, began to be distributed (Measles History, 2014). This vaccine, called the Edmonston-Enders (formerly “Moraten”) strain has been the only measles vaccine used in the United States since 1968 (Measles History, 2014). The MMR shot protects your child from measles, a potentially serious disease (and also protects against mumps and rubella), prevents your child from getting an uncomfortable rash and high fever from measles, keeps your child from missing school or childcare and keeps you from missing work to care for your sick child (Vaccine and Immunizations, 2015). The measles, mumps, and rubella vaccine is recommended for children 12 months to 12 years old (MMR, 2013). Children should receive the first dose of mumps-containing vaccine at 12-15 months and the second dose at 4-6 years (Mumps Vaccination, 2012). All adults born during or after 1957 should have documentation of one dose (Mumps Vaccination, 2012). Adults at higher risk, such as university students, health care personnel, and international travelers, and persons with potential mumps outbreak exposure should have documentation of two doses of mumps vaccine or other proof of immunity to mumps (Mumps Vaccination, 2012). Pregnant women and persons with an impaired immune system should not receive the MMR vaccine (Mumps Vaccination, 2012). It is a single shot, often given at the same doctor visit as the varicella or chickenpox vaccine (MMR, 2013). Measles can be dangerous, especially for babies and young children (Vaccine and Immunizations, 2015). For some children, measles can lead to pneumonia, lifelong brain damage, deafness and death (Vaccine and Immunizations, 2015). Measles is a respiratory disease caused by a virus. The virus lives in the mucus in the nose and throat of an infected person (Measles, n.d). Measles remains a common disease in many countries throughout the world, including some developed countries in Europe and Asia (Measles, n.d). While the disease is almost gone from the United States, measles still kills nearly 200,000 people each year globally (Measles, n.d). However, children younger than 5 years of age and adults older than 20 years of age are more likely to suffer from measles complications (Measles, n.d). Measles virus causes rash, cough, runny nose, eye irritation, and fever (MMR Vaccine (Measles, Mumps, Rubella), 2015). It can lead to ear infection, pneumonia, seizures (jerking and staring), brain damage, and death (MMR Vaccine (Measles, Mumps, Rubella), 2015). Pregnant women can give birth prematurely or have a low-birth-weight baby (Measles, n.d). Mumps is a contagious disease that is caused by the mumps virus. The mumps virus affects the saliva glands, located between the ear and jaw, and may cause puffy cheeks and swollen glands (MMR, 2013). Mumps virus causes fever, headache, muscle pain, loss of appetite, and swollen glands (MMR, 2013). It can lead to deafness, meningitis (infection of the brain and spinal cord covering), painful swelling of the testicles or ovaries, and rarely sterility (MMR, 2013). Most people who have mumps will be protected (immune) from getting mumps again (Mumps Vaccine, 2006). There is a small percent of people though, who could get infected again with mumps and have a milder illness (Mumps Vaccine, 2006). Rubella, also known as German measles or three day measles is an infectious viral disease, but don’t confuse rubella with measles, which is sometimes called rubeola (MMR, 2013). The two illnesses share similar features, including a characteristic red rash, but they are caused by different viruses (MMR, 2013). Rubella virus lives in the mucus in the nose and throat of infected persons (MMR, 2013). Rubella is usually spread to others through sneezing or coughing. In young children, rubella is usually mild, with few symptoms. They may have a mild rash, whichusually starts on the face and then spreads to the neck, chest, arms, and legs, and it lasts for about three days (MMR, 2013). A child with rubella might also have a slight fever or other symptoms like a cold. Adults are more likely to experience headache, pink eye, and general discomfort one to five days before the rash appears (MMR, 2013). Adults also tend to have more complications, including sore, swollen joints, and, less commonly, arthritis, especially in women (MMR, 2013). A brain infection called encephalitis is a rare, but serious, complication affecting adults with rubella (MMR, 2013). However, the most serious consequence from rubella infection is the harm it can cause to a pregnant woman’s unborn baby (MMR, 2013). Measles spreads when a person infected with the measles virus breathes, coughs, or sneezes (Vaccine and Immunizations, 2015). It is very contagious. A person can catch measles just by being in a room where a person with measles has been, up to 2 hours after that person is gone, and you can catch measles from an infected person even before they have a measles rash (Vaccine and Immunizations, 2015). Almost everyone who has not had the MMR shot will get measles if they are exposed to the measles virus (Vaccine and Immunizations, 2015). Measles, mumps, and rubella (MMR) vaccine can protect children and adult from all three of these diseases. Thanks to successful vaccination programs these diseases are much less common in the U.S. than they used to be, but if we stopped vaccinating they would return (MMR, 2013). Between 2000 and 2007, the number of measles cases reached a record low, with only 37 cases being reported in 2004 (Medical News Today, 2015). Last year saw the highest number of reported measles cases in the US since the virus had been declared eliminated (Medical News Today, 2015). There were 23 measles outbreaks in 2014 causing 644 people to become infected (Medical News Today, 2015). According to the CDC, the majority of these cases were brought into the country by travelers from the Philippines (Medical News Today, 2015). Where a large outbreak of the virus was occurring at the time and most of the people who became infected in the US were part of unvaccinated Amish communities in Ohio, but while last year’s statistics seem bad, this years are set to be even worse (Medical News Today, 2015). Last month alone saw 102 measles cases reported over 14 US states, including California, Texas and Washington (Medical News Today, 2015). The majority of these cases are thought to have stemmed from Disneyland, CA, where a number of people reported developing the virus after visiting the amusement part in mid-December (Medical News Today, 2015). If you don’t have insurance or if your insurance does not cover vaccines for your child, the Vaccines for Children Program may be able to help (CDC, 2015). The Vaccines for Children (VFC) program provides vaccines for children who are uninsured, Medicaid-eligible, or American Indian/Alaska Native (CDC, 2015). No federal vaccination laws exist, but all 50 states require certain vaccinations for children entering public schools (State Laws: Vaccines and Requirements, 2014). Vaccination coverage in America has been historically high as a result of school requirements, caregiver intervention with vulnerable populations, and seasonal influenza-shot drives, but it still falls short (MMR, 2013). Physicians or other providers must provide the current Vaccine Information Statement (VIS) each time they administer a vaccine covered under the National Vaccine Injury or purchased through the Centers for Disease Control and Prevention grant (Kimmel & Wolfe, 2005). They must record in each patient’s medical record the date of administration, the vaccine manufacturer, the lot number, and the name and business address of the provider, along with the edition of the VIS that was given and the date on which the vaccine was administered (Kimmel & Wolfe, 2005). An effective interaction can address the concerns of vaccine supportive parents and motivate a hesitant parent towards vaccine acceptance (Leask, Kinnersley, Jackson, Cheater, Bedford & Rowles, 2012). Conversely, poor communication can contribute to rejection of vaccinations or dissatisfaction with care and health professionals have a central role in maintaining education (Leask et al., 2012). These concerns will likely increase as vaccination schedules inevitably become more complex, and parents have increased access to varied information through the internet and social media (Leask et al., 2012). In recognition of the need to support health professionals in this challenging communication task conducted in usually public trust in vaccination; this includes addressing parents’ vaccine concerns (Leask et al., 2012). There are several reasons why parents are choosing not to vaccinate their children. Parents who decided not to give their child MMR were concerned that the vaccine might cause a reaction in their child (Immunizations, n.d). Most children who have the MMR vaccine do not have any problems with it, or if reactions do occur they are usually mild (Immunizations, n.d). Parents were concerned that the long-term effects of the combined MMR vaccine were not known (Immunizations, n.d). Other reasons given for deciding not to go ahead with MMR were concern about the ingredients of the vaccines and that live vaccines were used and that these would be too much for a child’s body to cope with (Immunizations, n.d). A very small number of parents personally believed that immunity derived from actually having the disease was more effective than the immunity obtained from vaccines (Immunizations, n.d). There is no scientific evidence that MMR vaccine causes autism. The suggestion that MMR vaccine might lead to autism had its origins in research by Andrew Wakefield, a gastroenterologist, in the United Kingdom (DPH, 2013). In 1998, Wakefield and colleagues published an article in The Lancet claiming that the measles vaccine virus in MMR caused inflammatory bowel disease, allowing harmful proteins to enter the bloodstream and damage the brain (DPH, 2013). The validity of this finding was later called into question when it could not be reproduced by other researchers (DPH, 2013). In addition, the findings were further discredited when an investigation found that Wakefield did not disclose he was being funded for his research by lawyers seeking evidence to use against vaccine manufacturers (DPH, 2013). Wakefield was permanently barred from practicing medicine in the United Kingdom (DPH, 2013). There will always be some cases of measles in the US, as it can still be brought into the country by individuals from other countries who have not been vaccinated. The CDC says the MMR vaccine is safe, and one dose of the vaccine is around 93% effective at preventing measles, while two doses is approximately 97% effective (Medical News Today, 2015). Immunization is the only effective way of protection for children against these diseases because children’s immune systems are defenseless against them because they are not fully developed yet, and once infected in most cases there is no cure or at least a very low chance of one. References Center for Disease Control (2015, February 5). Retrieved March 18, 2015, from DPH: Infectious Diseases. (n.d.). Retrieved March 22, 2015 Immunization. (n.d.). Retrieved March 18, 2015, from Kimmel, S. R., & Wolfe, R. M. (2005). Communicating the benefits and risks of vaccines. The Journal of Family Practice, 54(1 Suppl), S51-S57 State Vaccines and requirements. (2014, December 12). Retrieved March 22, 2015, from Leask, J., Kinnersley, P., Jackson, C., Cheater, F., Bedford, H., & Rowles, G. (2012). Communication with parents about vaccination: a framework for health professionals. BMC Pediatrics, 12154. doi:10.1186/1471-2431-12-154 Measles History. (2014, November 3). Retrieved March 18, 2015, from Medical News Today (2015, February 5). Retrieved March 18, 2015, from MMR (Measles, Mumps, & Rubella) Vaccine. (2013, June 18). Retrieved March 18, 2015, from MMR Vaccine Does Not Cause Autism Examine the Evidence! Retrieved March 19, 2015, from Mumps Vaccine. (2006, October 16). Retrieved March 22, 2015, from Mumps Vaccination. (2012, July 2). Retrieved March 22, 2015, from Vaccine and Immunizations. (2015, February 5). Retrieved March 22, 2015, from Measles. (n.d.). Retrieved March 22, 2015, from MMR Vaccine (Measles, Mumps, and Rubella): MedlinePlus Drug Information. (n.d.). Retrieved March 22, 2015,:
https://www.ukessays.com/essays/nursing/health-issue-debate-vaccinations-1978.php
CC-MAIN-2021-25
refinedweb
2,519
60.55
Python Modules A module allows you to logically organize your Python code. Grouping related code into a module makes the code easier to understand and use. example of a simple module, support.py def print_func( par ): print "Hello : ", par return The import Statement You can use any Python source file as a module by executing an import statement in some other Python source file. The import has the following syntax: import module1[, module2[,... moduleN] When the interpreter encounters an import statement, it imports the module if the module is present in the search path. A search path is a list of directories that the interpreter searches before importing a module. For example, to import the module support.py, you need to put the following command at the top of the script − #!/usr/bin/python # Import module support import support # Now you can call defined function that module as follows support.print_func("Zara") When the above code is executed, it produces the following result − Hello : Zara A module is loaded only once, regardless of the number of times it is imported. This prevents the module execution from happening over and over again if multiple imports occur. The from...import Statement Python's from statement lets you import specific attributes from a module into the current namespace. The from...import has the following syntax − from modname import name1[, name2[, ... nameN]] For example, to import the function fibonacci from the module fib, use the following statement −/. The module search path is stored in the system module sys as the sys.path variable. The sys.path variable contains the current directory, PYTHONPATH, and the installation-dependent default. The PYTHONPATH Variable: The PYTHONPATH is an environment variable, consisting of a list of directories. The syntax of PYTHONPATH is the same as that of the shell variable PATH. Here is a typical PYTHONPATH from a Windows system: set PYTHONPATH=c:\python20\lib; And here is a typical PYTHONPATH from a UNIX system: set PYTHONPATH=/usr/local/lib/python Namespaces and Scoping Variables. #!. The list contains the names of all the modules, variables and functions that are defined in a module. Following is a simple example − #!/usr/bin/python # Import built-in module math import math content = dir(math) print content When the above code is executed, it produces the following result − ['__doc__', '__file__', '__name__', 'acos', 'asin', 'atan', 'atan2', 'ceil', 'cos', 'cosh', 'degrees', 'e', 'exp', 'fabs', 'floor', 'fmod', 'frexp', 'hypot', 'ldexp', 'log', 'log10', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh'] Here, the special string variable __name__ is the module's name, and __file__ is the filename from which the module was loaded. The globals() and locals() Functions − The globals() and locals() functions can be used to return the names in the global and local namespaces depending on the location from where they are called. If locals() is called from within a function, it will return all the names that can be accessed locally from that function. If globals() is called from within a function, it will return all the names that can be accessed globally from that function. The return type of both these functions is dictionary. Therefore, names can be extracted using the keys() function. The reload() Function When the module is imported into a script, the code in the top-level portion of a module is executed only once. Therefore, if you want to reexecute the top-level code in a module, you can use the reload() function. The reload() function imports a previously imported module again. The syntax of the reload() function is this − reload(module_name) Here, module_name is the name of the module you want to reload and not the string containing the module name. For example, to reload hello module, do the following − reload(hello) Packages in Python A package is a hierarchical file directory structure that defines a single Python application environment that consists of modules and subpackages and sub-subpackages, and so on. Consider a file Pots.py available in Phone directory. This file has imported Phone, you need to put explicit import statements in __init__.py as follows − from Pots import Pots from Isdn import Isdn from G3 import G3 After you add these lines to __init__.py, you have all of these classes available when you import the Phone package. #!/usr/bin/python # Now import your Phone Package. import Phone Phone.Pots() Phone.Isdn() Phone.G3() When the above code is executed, it produces the following result − I'm Pots Phone I'm 3G Phone I'm ISDN Phone In the above example, we have taken example of a single functions in each file, but you can keep multiple functions in your files. You can also define different Python classes in those files and then you can create your packages out of those classes.
http://www.tutorialspoint.com/python/python_modules.htm
CC-MAIN-2017-26
refinedweb
798
61.67
10 February 2011 23:07 [Source: ICIS news] HOUSTON (ICIS)--WR Grace is planning to raise capital expenditures in 2011 by 25-30% based on modest increases in demand, the US-based catalysts and building materials maker said on Thursday. The hike would raise the company’s expenditures to $140m-150m (€102m-110m), said WR Grace chief executive Fred Festa. Speaking on a fourth-quarter earnings conference call, Festa cited growth in construction spending in emerging markets as one of the leading factors in the company’s optimism. The construction products business is WR Grace's largest by sales at $223.8m, with fourth-quarter volumes rising 13% year on year in emerging markets. For the Asia-Pacific region, fourth-quarter construction sales of $44.8m were 25% higher year on year. However, optimism was tempered by lagging results in the developed economies, particularly ?xml:namespace> European fourth-quarter construction sales tumbled 16.8% as the construction industry was hit by bad winter weather, WR Grace said. “With the fourth quarter and January and all the weather events, I’m not sure yet,” Festa said. “That’s what we’ve forecasted - [European construction] flat to potentially down this year.” Meanwhile, North American construction sales were up 7.2% year on year. Festa said that despite the sales growth in On the whole, 2011 construction sales will likely be flat to up 2% for North America and However, Festa cited “early signs of stronger growth” in those markets and said new product development had Grace “well positioned for the eventual turnaround”. In the meantime, investment would be concentrated particularly in emerging regions such as Asia Pacific, the company said. That region's demand growth should continue to be robust, Festa added. WR Grace said earnings could be affected by higher costs for petroleum-derived raw materials, but that they could be partly offset by higher pricing and lower operating costs. WR Grace’s stock rose 73 cents, or 2%, to close at $36.79/share on Thursday at the New York Stock Exchange. ($1 = €0.73) For more on WR Grace
http://www.icis.com/Articles/2011/02/10/9434376/us-wr-grace-plans-25-30-spending-increase-on-growing-demand.html
CC-MAIN-2015-18
refinedweb
350
64.41
March 2007 Anders Hejlsberg, Mads Torgersen Applies to: Visual C# 3.0 Summary: Technical overview of C# 3.0 ("C# Orcas"), which introduces several language extensions that build on C# 2.0 to support the creation and use of higher order, functional style class libraries. (38 printed pages) Introduction 26.1 Implicitly Typed Local Variables 26.2 Extension Methods 26.2.1 Declaring Extension Methods 26.2.2 Available Extension Methods 26.2.3 Extension Method Invocations 26.3 Lambda Expressions 26.3.1 Anonymous Method and Lambda Expression Conversions 26.3.2 Delegate Creation Expressions 26.3.3 Type Inference 26.3.3.1 The first phase 26.3.3.2 The second phase 26.3.3.3 Input types 26.3.3.4 Output types 26.3.3.5 Dependence 26.3.3.6 Output type inferences 26.3.3.7 Explicit argument type inferences 26.3.3.8 Exact inferences 26.3.3.9 Lower-bound inferences 26.3.3.10 Fixing 26.3.3.11 Inferred return type 26.3.3.12 Type inference for conversion of method groups 26.3.3.13 Finding the best common type of a set of expressions 26.3.4 Overload Resolution 26.4 Object and Collection Initializers 26.4.1 Object Initializers 26.4.2 Collection Initializers 26.5 Anonymous Types 26.6 Implicitly Typed Arrays 26.7 Query Expressions 26.7.1 Query Expression Translation 26.7.1.1 Select and groupby clauses with continuations 26.7.1.2 Explicit range variable types 26.7.1.3 Degenerate query expressions 26.7.1.4 From, let, where, join and orderby clauses 26.7.1.5 Select clauses 26.7.1.6 Groupby clauses 26.7.1.7 Transparent identifiers 26.7.2 The Query Expression Pattern 26.8 Expression Trees 26.8.1 Overload Resolution 26.9 Automatically Implemented Properties This article contains some updates that apply to Visual C# 3.0. A comprehensive specification will accompany the release of the language. C#: This document is a technical overview of those features. The document makes reference to the C# Language Specification Version 1.2 (§1 through §18) and the C# Language Specification Version 2.0 (§19 through §25), both of which are available on the C# Language Home Page ().>(); A local variable declarator in an implicitly typed local variable declaration is subject to the following restrictions: The following are examples of incorrect implicitly typed local variable declarations: reasons of backward compatibility, when a local variable declaration specifies var as the type and a type named var is in scope, the declaration refers to that type.. Only local-variable-declaration, for-initializer, resource-acquisition and foreach-statement can contain implicitly typed local variable declarations. Extension methods are static methods that can be invoked using instance method syntax. In effect, extension methods make it possible to extend existing types and constructed types with additional methods. Note. Extension methods are declared by specifying the keyword this as a modifier on the first parameter of the methods. Extension methods can only be declared in non-generic, non-nested static classes. The following is an example of a static class that declares two extension methods: namespace Acme.Utilities { public static class Extensions { public static int ToInt32(this string s) { return Int32.Parse(s); } public static T[] Slice<T>(this T[] source, int index, int count) { if (index < 0 || count < 0 || source.Length – index < count) throw new ArgumentException(); T[] result = new T[count]; Array.Copy(source, index, result, 0, count); return result; } } } The first parameter of an extension method can have no modifiers other than this, and the parameter type cannot be a pointer type. Extension methods have all the capabilities of regular static methods. In addition, once imported, extension methods can be invoked using instance method syntax. Extension methods are available in a namespace if declared in a static class or imported through using-namespace-directives (§9.3.2) in that namespace. In addition to importing the types contained in an imported namespace, a using-namespace-directive thus imports all extension methods in all static classes in the imported namespace. In effect, available available and accessible extension methods in the namespace with the name given by identifier. From this set remove all the methods that are not applicable (§7.4.2.1) and the ones where no implicit identity, reference or boxing conversion exists from the first argument to the first parameter. The first method group that yields a non-empty such set of candidate methods is the one chosen for the rewritten method invocation, and normal overload resolution (§7.4.2) is applied to select the best extension method from the set of candidates. If all attempts yield empty sets of candidate methods, a compile-time error occurs. The preceding rules mean that instance methods take precedence over extension methods, and extension methods available in inner namespace declarations take precedence over extension methods available in outer namespace declarations. For example:, the B method takes precedence over the first extension method, and the C method takes precedence over both extension methods. => operator has the same precedence as assignment (=) and is right-associative. functionally similar to anonymous methods, except for the following points: Note This section replaces §21.3. An anonymous-method-expression and a lambda-expression is classified as a value with special conversion rules. The value does not have a type but can be implicitly converted to a compatible delegate type. Specifically, a delegate type D is compatible with an anonymous method or lambda-expression L provided: out. Note This section replaces §21.10. Delegate creation expressions (§7.5.10.3) are extended to permit the argument to be an expression classified as a method group, an expression classified as an anonymous method or lambda expression, or a value of a delegate type. The compile-time processing of a delegate-creation-expression of the form new D(E), where D is a delegate-type and E is an expression, consists of the following steps: E Note This section replaces §20.6.4. When a generic method is called without specifying type arguments, a type inference process attempts to infer type arguments for the call. The presence of type inference allows a more convenient syntax to be used for calling a generic method, and allows the programmer to avoid specifying redundant type information. For example, given the method declaration: class Chooser { static Random rand = new Random(); public static T Choose<T>(T first, T second) { return (rand.Next(2) == 0)? first: second; } } it is possible to invoke the Choose method without explicitly specifying a type argument: int i = Chooser.Choose(5, 213); // Calls Choose<int> string s = Chooser.Choose("foo", "bar"); // Calls Choose<string> Through type inference, the type arguments int and string are determined from the arguments to the method. Type inference occurs as part of the compile-time processing of a method invocation (§20.9.7) and takes place before the overload resolution step of the invocation. When a particular method group is specified in a method invocation, and no type arguments are specified as part of the method invocation, type inference is applied to each generic method in the method group. If type inference succeeds, then the inferred type arguments are used to determine the types of arguments for subsequent overload resolution. If overload resolution chooses a generic method as the one to invoke, then the inferred type arguments are used as the actual type arguments for the invocation. If type inference for a particular method fails, that method does not participate in overload resolution. The failure of type inference, in and of itself, does not cause a compile-time error. However, it often leads to a compile-time error when overload resolution then fails to find any applicable methods. If the supplied number of arguments is different than the number of parameters in the method, then inference immediately fails. Otherwise, assume that the generic method has the following signature: Tr M<X1...Xn>(T1 x1 ... Tm xm) With a method call of the form M(e1...em) the task of type inference is to find unique type arguments S1...Sn for each of the type parameters X1...Xn so that the call M<S1...Sn>(e1...em) becomes valid. During the process of inference each type parameter Xi is either fixed to a particular type Si or unfixed with an associated set of bounds. Each of the bounds is some type T. Initially each type variable Xi is unfixed with an empty set of bounds. Type inference takes place in phases. Each phase will try to infer type arguments for more type variables based on the findings of the previous phase. The first phase makes some initial inferences of bounds, whereas the second phase fixes type variables to specific types and infers further bounds. The second phase may have to be repeated a number of times. Note When we refer to delegate types throughout the following, this should be taken to include also types of the form Expression<D> where D is a delegate type. The argument and return types of Expression<D> are those of D. Note Type inference takes place not only when a generic method is called. Type inference for conversion of method groups is described in §26.3.3.12 and finding the best common type of a set of expressions is described in §26.3.3.13. For each of the method arguments ei: All unfixed type variables Xi that depend on (§26.3.3.5) no Xj are fixed (§26.3.3.10). If no such type variables exist, all unfixed type variables Xi are fixed for which all of the following hold: If no such type variables exist and there are still unfixed type variables, type inference fails. If no further unfixed type variables exist, type inference succeeds. Otherwise, for all arguments ei with corresponding argument type Ti where the output types (§26.3.3.4) contain unfixed type variables Xj but the input types (§26.3.3.3) do not, an output type inference (§26.3.3.6) is made for ei with type Ti. Then the second phase is repeated. If e is a method group or implicitly typed lambda expression and T is a delegate type then all the argument types of T are input types of e with type T. If e is a method group, an anonymous method, a statement lambda or an expression lambda and T is a delegate type then the return type of T is an output type of e with type T. An unfixed type variable Xi depends directly on an unfixed type variable Xj if for some argument ek with type Tk Xj occurs in an input type of ek with type Tk and Xi occurs in an output type of ek with type Tk. Xj depends on Xi if Xj depends directly on Xi or if Xi depends directly on Xk and Xk depends on Xj. Thus "depends on" is the transitive but not reflexive closure of "depends directly on". An output type inference is made from an expression e with type T in the following way: An explicit argument type inference is made from an expression e with type T in the following way: . An exact inference from a type U for a type V is made as follows: A lower-bound inference from a type U for a type V is made as follows: An unfixed type variable Xi with a set of bounds is fixed as follows. For purposes of type inference and overload resolution, the inferred return type of a lambda expression or anonymous method e is determined as follows: or As an example of type inference involving lambda expressions, consider the Select extension method declared in the System.Linq.Enumerable class: namespace System.Linq { public static class Enumerable { public static IEnumerable<TResult> Select<TSource,TResult>( this IEnumerable<TSource> source, Func<TSource,TResult> selector) { foreach (TSource element in source) yield return selector(element); } } } Assuming the System.Linq = Enumerable.Select(customers, c => c.Name);: c) {. string Similar to calls of generic methods, type inference must also be applied when a method group M containing a generic method is assigned to a given delegate type D. Given a method and the method group M being assigned to the delegate type D the task of type inference is to find type arguments S1...Sn so that the expression: M<S1...Sn> becomes assignable to D. Unlike the type inference algorithm for generic method calls, in this case there are only argument types, no argument expressions. In particular, there are no lambda expressions and hence no need for multiple phases of inference. Instead, all Xi are considered unfixed, and a lower-bound inference is made from each argument type Uj of D to the corresponding parameter type Tj of M. If for any of the Xi no bounds were found, type inference fails. Otherwise, all Xi are fixed to corresponding Si, which are the result of type inference. In some cases, a common type needs to be inferred for a set of expressions. In particular, the element types of implicitly typed arrays and the return types of anonymous methods and statement lambdas are found in this way. Intuitively, given a set of expressions e1...em this inference should be equivalent to calling a method Tr M<X>(X x1 ... X xm) with the ei as arguments. More precisely, the inference starts out with an unfixed type variable X. Output type inferences are then made from each ei with type X. Finally, X is fixed and the resulting type S is the resulting common type for the expressions. Lambda expressions in an argument list affect overload resolution in certain situations. Please refer to §7.4.2.3 for the exact rules. The following example illustrates the effect of lambdas on overload resolution. class ItemList<T>: List<T> { public int Sum(Func<T,int> selector) { int sum = 0; foreach (T item in this) sum += selector(item); return sum; } public double Sum. Sum The Sum methods could for example be used to compute sums from a list of detail lines in an order. class Detail { public int UnitCount; public double UnitPrice; ... } void ComputeSums() { ItemList<Detail> orderDetails = GetOrderDetails(...);.. In order to correctly parse object and collection initializers with generics, the disambiguating list of tokens in §20.6.5 must be augmented with the } token.. It is not possible for the object initializer to refer to the newly created object it is initializing. A member initializer that specifies an expression after the equals sign is processed in the same way as an assignment (§7.13.1) to the field or property. A member initializer that specifies an object initializer after the equals sign is a nested object initializer, i.e., an initialization of an embedded object. Instead of assigning a new value to the field or property, the assignments in the nested object initializer are treated as assignments to members of the field or property. Nested object initializers cannot be applied to properties with a value type, or to read-only fields with a value type. and initialized as follows: Point var a = new Point { X = 0, Y = 1 }; which has the same effect as var __a = new Point(); __a.X = 0; __a.Y = 1; var a = __a; where __a is an otherwise invisible and inaccessible temporary variable. } }; var __r = new Rectangle(); var __p1 = new Point(); __p1.X = 0; __p1.Y = 1; __r.P1 = __p1; var __p2 = new Point(); __p2.X = 2; __p2.Y = 3; __r.P2 = __p2; var r = __r; where __r, __p1 and __p2 are temporary variables that are otherwise invisible and inaccessible. If the Rectangle } }; var __r = new Rectangle(); __r.P1.X = 0; __r.P1.Y = 1; __r.P2.X = 2; __r.P2.Y = 3; var r = __r; A collection initializer specifies the elements of a collection. collection-initializer: { element-initializer-list } { element-initializer-list , } element-initializer-list: element-initializer element-initializer-list , element-initializer element-initializer: non-assignment-expression { expression-list } A collection initializer consists of a sequence of element initializers, enclosed by { and } tokens and separated by commas. Each element initializer specifies an element to be added to the collection object being initialized, and consists of a list of expressions enclosed by { and } tokens and separated by commas. A single-expression element initializer can be written without braces, but cannot then be an assignment expression, to avoid ambiguity with member initializers..IEnumerable or a compile-time error occurs. For each specified element in order, the collection initializer invokes the Add method on the target object with the expression list of the element initializer, applying normal overload resolution for each invocation.>(); object or an unsafe type. The name of an anonymous type is automatically generated by the compiler and cannot be referenced in program text. Within the same program, two anonymous object initializers that specify a sequence of properties of the same names and compile-time. The Equals and GetHashcode methods on anonymous types are defined in terms of the Equals and GetHashcode of the properties, so that two instances of the same anonymous type are equal if and only if all their properties are equal." } } }; Query expressions provide a language integrated syntax for queries that is similar to relational and hierarchical query languages such as SQL and XQuery. query-expression: from-clause query-body from-clause: from typeopt identifier in expression query-body: query-body-clausesopt select-or-group-clause query-continuationopt query-body-clauses: query-body-clause query-body-clauses query-body-clause query-body-clause: from-clause let-clause where-clause join-clause join-into-clause orderby-clause let-clause: let identifier = expression where-clause: where boolean-expression join-clause: join typeopt identifier in expression on expression equals expression join-into-clause: join typeopt identifier in expression on expression equals expression into identifier orderby-clause: orderby orderings orderings: ordering orderings , ordering ordering: expression ordering-directionopt ordering-direction: ascending descending select-or-group-clause: select-clause group-clause select-clause: select expression group-clause: group expression by expression query-continuation:, let, where or join clauses. Each from clause is a generator introducing a range variable ranging over a sequence. Each let clause computes a value and introduces an identifier representing that value. Each where clause is a filter that excludes items from the result. Each join clause compares specified keys of the source sequence with keys of another sequence, yielding matching pairs. Each orderby clause reorders items according to specified criteria. The final select or group clause specifies the shape of the result in terms of the range variable(s). Finally, an into clause can be used to "splice" queries by treating the results of one query as a generator in a subsequent query. Query expressions contain a number of new contextual keywords, i.e., identifiers that have special meaning in a given context. Specifically these are: from, join, on, equals, into, let, orderby, ascending, descending, select, group and by. In order to avoid ambiguities caused by mixed use of these identifiers as keywords or simple names in query expressions, they are considered keywords anywhere within a query expression. For this purpose, a query expression is any expression starting with "from identifier" followed by any token except ";", "=" or ",". In order to use these words as identifiers within a query expression, they can be prefixed with "@" (§2.4.2). The C# 3.0 language does not specify the exact execution semantics of query expressions. Rather, C# 3.0 translates query expressions into invocations of methods that adhere to the query expression pattern. Specifically, query expressions are translated into invocations of methods named Where, Select, SelectMany, Join, GroupJoin, OrderBy, OrderByDescending, ThenBy, ThenByDescending, GroupBy, and Cast. A query expression is processed by repeatedly applying the following translations until no further reductions are possible. The translations are listed in order of precedence: each section assumes that the translations in the preceding sections have been performed exhaustively. Certain translations inject range variables with transparent identifiers denoted by *. The special properties of transparent identifiers are discussed further in §26.7.1.7. A query expression with a continuation from ... into x ... is translated into from x in ( from ... ) ... The translations in the following sections assume that queries have no into continuations. The example from c in customers group c by c.Country into g select new { Country = g.Key, CustCount = g.Count() } from g in from c in customers group c by c.Country select new { Country = g.Key, CustCount = g.Count() } the final translation of which is customers. GroupBy(c => c.Country). Select(g => new { Country = g.Key, CustCount = g.Count() }) A from clause that explicitly specifies a range variable type from T x in e from x in ( e ) . Cast < T > ( ) A join clause that explicitly specifies a range variable type join T x in e on k1 equals k2 join x in ( e ) . Cast < T > ( ) on k1 equals k2 The translations in the following sections assume that queries have no explicit range variable types. from Customer c in customers where c.City == "London" select c from c in customers.Cast<Customer>() where c.City == "London" select c customers. Cast<Customer>(). Where(c => c.City == "London") Explicit range variable types are useful for querying collections that implement the non-generic IEnumerable interface, but not the generic IEnumerable<T> interface. In the example above, this would be the case if customers were of type ArrayList. A query expression of the form from x in e select x ( e ) . Select ( x => x ) from c in customers select c Is translated into customers.Select(c => c). A query expression with a second from clause followed by a select clause from x1 in e1 from x2 in e2 select v ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => v ) A query expression with a second from clause followed by something other than a select clause: from x1 in e1 from x2 in e2 ... from * in ( e1 ) . SelectMany( x1 => e2 , ( x1 , x2 ) => new { x1 , x2 } ) ... A query expression with a let clause from x in e let y = f ... from * in ( e ) . Select ( x => new { x , y = f } ) ... A query expression with a where clause from x in e where f ... from x in ( e ) . Where ( x => f ) ... A query expression with a join clause without an into followed by a select clause join from x1 in e1 join x2 in e2 on k1 equals k2 select v ( e1 ) . Join( e2 , x1 => k1 , x2 => k2 , ( x1 , x2 ) => v ) A query expression with a join clause without an into followed by something other than a select clause from x1 in e1 join x2 in e2 on k1 equals k2 ... from * in ( e1 ) . Join( e2 , x1 => k1 , x2 => k2 , ( x1 , x2 ) => new { x1 , x2 }) ... A query expression with a join clause with an into followed by a select clause from x1 in e1 join x2 in e2 on k1 equals k2 into g select v ( e1 ) . GroupJoin( e2 , x1 => k1 , x2 => k2 , ( x1 , g ) => v ) A query expression with a join clause with an into followed by something other than a select clause into from x1 in e1 join x2 in e2 on k1 equals k2 into g ... from * in ( e1 ) . GroupJoin( e2 , x1 => k1 , x2 => k2 , ( x1 , g ) => new { x1 , g }) ... A query expression with an orderby clause from x in e orderby k1 , k2 , ... , kn ... from x in ( e ) . OrderBy ( x => k1 ) . ThenBy ( x => k2 ) . ... . ThenBy ( x => kn ) ... If an ordering clause specifies a descending direction indicator, an invocation of OrderByDescending or ThenByDescending is produced instead. The following translations assume that there are no let, where, join or orderby clauses, and no more than the one initial from clause in each query expression. from c in customers from o in c.Orders select new { c.Name, o.OrderID, o.Total } customers. SelectMany(c => c.Orders, (c,o) => new { c.Name, o.OrderID, o.Total } ) from c in customers from o in c.Orders orderby o.Total descending select new { c.Name, o.OrderID, o.Total } from * in customers. SelectMany(c => c.Orders, (c,o) => new { c, o }) orderby o.Total descending select new { c.Name, o.OrderID, o.Total } customers. SelectMany(c => c.Orders, (c,o) => new { c, o }). OrderByDescending(x => x.o.Total). Select(x => new { x.c.Name, x.o.OrderID, x.o.Total }) where x is a compiler generated identifier that is otherwise invisible and inaccessible. from o in orders let t = o.Details.Sum(d => d.UnitPrice * d.Quantity) where t >= 1000 select new { o.OrderID, Total = t } from * in orders. Select(o => new { o, t = o.Details.Sum(d => d.UnitPrice * d.Quantity) }) where t >= 1000 select new { o.OrderID, Total = t } orders. Select(o => new { o, t = o.Details.Sum(d => d.UnitPrice * d.Quantity) }). Where(x => x.t >= 1000). Select(x => new { x.o.OrderID, Total = x.t }) from c in customers join o in orders on c.CustomerID equals o.CustomerID select new { c.Name, o.OrderDate, o.Total } customers.Join(orders, c => c.CustomerID, o => o.CustomerID, (c, o) => new { c.Name, o.OrderDate, o.Total }) from c in customers join o in orders on c.CustomerID equals o.CustomerID into co let n = co.Count() where n >= 10 select new { c.Name, OrderCount = n } from * in customers. GroupJoin(orders, c => c.CustomerID, o => o.CustomerID, (c, co) => new { c, co }) let n = co.Count() where n >= 10 select new { c.Name, OrderCount = n } customers. GroupJoin(orders, c => c.CustomerID, o => o.CustomerID, (c, co) => new { c, co }). Select(x => new { x, n = x.co.Count() }). Where(y => y.n >= 10). Select(y => new { y.x.c.Name, OrderCount = y.n) where x and y are compiler generated identifiers that are otherwise invisible and inaccessible. from o in orders orderby o.Customer.Name, o.Total descending select o has the final translation orders. OrderBy(o => o.Customer.Name). ThenByDescending(o => o.Total) from x in e select v ( e ) . Select ( x => v ) except when v is the identifier x, the translation is simply ( e ) For example from c in customers.Where(c => c.City == "London") select c is simply translated into customers.Where(c => c.City == "London") from x in e group v by k ( e ) . GroupBy ( x => k , x => v ) except when v is the identifier x, the translation is ( e ) . GroupBy ( x => k ) from c in customers group c.Name by c.Country customers. GroupBy(c => c.Country, c => c.Name) Certain translations inject range variables with transparent identifiers denoted by *. Transparent identifiers are not a proper language feature; they exist only as an intermediate step in the query expression translation process. When a query translation injects a transparent identifier, further translation steps propagate the transparent identifier into lambda expressions and anonymous object initializers. In those contexts, transparent identifiers have the following behavior: from c in customers from o in c.Orders orderby o.Total descending select new { c.Name, o.Total } from * in from c in customers from o in c.Orders select new { c, o } orderby o.Total descending select new { c.Name, o.Total } which is further translated into customers. SelectMany(c => c.Orders.Select(o => new { c, o })). OrderByDescending(* => o.Total). Select(* => new { c.Name, o.Total }) which, when transparent identifiers are erased, is equivalent to customers. SelectMany(c => c.Orders.Select(o => new { c, o })). OrderByDescending(x => x.o.Total). Select(x => new { x.c.Name, x.o.Total }) from c in customers join o in orders on c.CustomerID equals o.CustomerID join d in details on o.OrderID equals d.OrderID join p in products on d.ProductID equals p.ProductID select new { c.Name, o.OrderDate, p.ProductName } from * in from * in from * in from c in customers join o in orders o c.CustomerID equals o.CustomerID select new { c, o } join d in details on o.OrderID equals d.OrderID select new { *, d } join p in products on d.ProductID equals p.ProductID select new { *, p } select new { c.Name, o.OrderDate, p.ProductName } which is further reduced to customers. Join(orders, c => c.CustomerID, o => o.CustomerID, (c, o) => new { c, o }). Join(details, * => o.OrderID, d => d.OrderID, (*, d) => new { *, d }). Join(products, * => d.ProductID, p => p.ProductID, (*, p) => new { *, p }). Select(* => new { c.Name, o.OrderDate, p.ProductName }) customers. Join(orders, c => c.CustomerID, o => o.CustomerID, (c, o) => new { c, o }). Join(details, x => x.o.OrderID, d => d.OrderID, (x, d) => new { x, d }). Join(products, y => y.d.ProductID, p => p.ProductID, (y, p) => new { y, p }). Select(z => new { z.y.x.c.Name, z.y.x.o.OrderDate, z.p.ProductName }) where x, y, and z are compiler generated identifiers that are otherwise invisible and inaccessible.<T> that supports the query expression pattern is shown below. A generic type is used in order to illustrate the proper relationships between parameter and result types, but it is possible to implement the pattern for non-generic types as well. delegate R Func<T1,R>(T1 arg1); delegate R Func<T1,T2,R>(T1 arg1, T2 arg2); class C { public C<T> Cast<T>(); } class C<T> { public C<T> Where(Func<T,bool> predicate); public C<U> Select<U>(Func<T,U> selector); public C; } } The methods above use the generic delegate types Func<T1, R> and Func<T1, T2,—a sequence of sequences, where each inner sequence has an additional Key property. The Standard Query Operators (described in a separate specification) provide an implementation of the query operator pattern for any type that implements the System.Collections.Generic.IEnumerable<T> interface.. For the purpose of overload resolution there are special rules regarding the Expression<D> types. Specifically the following rule is added to the definition of betterness: Note that there is no betterness rule between Expression<D> and delegate types. Oftentimes properties are implemented by trivial use of a backing field, as in the following example: public Class Point { private int x; private int y; public int X { get { return x; } set { x = value; } } public int Y { get { return y; } set { y = value; } } } Automatically implemented (auto-implemented) properties automate this pattern. More specifically, non-abstract property declarations are allowed to have semicolon accessor bodies. Both accessors must be present and both must have semicolon bodies, but they can have different accessibility modifiers. When a property is specified like this, a backing field will automatically be generated for the property, and the accessors will be implemented to read from and write to that backing field. The name of the backing field is compiler generated and inaccessible to the user. The following declaration is equivalent to the example above: public Class Point { public int X { get; set; } public int Y { get; set; } } Because the backing field is inaccessible, it can be read and written only through the property accessors. This means that auto; } } This restriction also means that definite assignment of struct types with auto-implemented properties can only be achieved using the standard constructor of the struct, since assigning to the property itself requires the struct to be definitely assigned.
http://msdn.microsoft.com/en-us/library/bb308966.aspx
crawl-002
refinedweb
5,196
59.19
Wify AP mode not Working After firmware update Just did update my wipy 2.0 using the update script. After it finished i pulled the wire (now the LED lid up bright) and reset. It started and i verified firmware is 1.8 in serial terminal. default Wify was visible and i could connect . Unfortunetely neither Telnet nor FTP ist working and no device is shown in network discovery. I tried to setup a different ap manually w/o success. Now i setup connection to home wify and this worked, i could Telnet... All Cables removed, Power on - default AP Not working. Also no wify Information in serial during startup ( thought it was printed with 0.9 firmware). I checked wlan.ifconfig(id=0) Reports 4*0.0.0.0 ! And for id=1 the expected 192.168.4.1 ...strange. Is this a firmware issue or did i just killed the wify Transmitter somehow? Any other debugging possible? (Quick Checking Bluetooth Scan didnt find any devices either, will try again tomorrow.) @bastler said in Wify AP mode not Working After firmware update: BTW, the pyupgrade script (toolversion=67634306) I got from debian apt-get Has a bug in lines 177..183: How did you get pyupgrade woth apt-get? You should load it from here: During update it should be clear which type of device will be loaded. Loading the wrong formware is not expected to work. In the serial terminal REPL prompt, you should be able to tell. Just enter: import uos uos.uname() or push Ctrl-B Edit: if anything else fails, try to start over by erasing the flash, using esptool.py Edit 2: Indeed, lines 177..183 look strange, but they are only used for manual device selection. This looks like a candidate for a bug report here: @robert-hh Thanks for help. Can the antenna switched for already configured wify? I tried this: wlan=WLAN(WLAN.AP) wlan.init(mode=WLAN.AP, ssid='sensor1', auth=(WLAN.WPA2, 'password'),channel=6,antenna=WLAN.INT_ANT) wlan.ifconfig(id=0,config=('192.168.4.1','255.255.255.0','192.168.4.1','8.8.8.8')) Not working, also with ext antenna and other channels and id=1 set up. os.uname() Shows 1.8.0.b1 version=1.8.6-760-g90b72952. Is there a newer version? But wait - isn't there something different with wify in Germany/europe with the channels which requires different firmware? Maybe I got US configuration? BTW, the pyupgrade script (toolversion=67634306) I got from debian apt-get Has a bug in lines 177..183: Case $device uses recent_lopy if device is wipy and recent_wipy if device is lopy !!! Though that didn't change behavoir after reflashing today, unless this was a fault only to be made once... @bastler Maybe by bad luck it's switched to external antenna. Try to enable the internal antenna, even is it is the default.
https://forum.pycom.io/topic/1838/wify-ap-mode-not-working-after-firmware-update
CC-MAIN-2018-22
refinedweb
491
69.38
Odoo Help Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps: CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc. OpenERP open external file I'm with trouble trying to open a specific extensions (I want to open *xls) I have currently this code: import os os.system('C:\\file.bmp') # this works, and opens the file os.system('C:\\file.xls') # this doesn't work. In specific extensions (like xls) The OpenERP freezes completely on "loading...", and most of the times, it makes me reboot computer because the "localhost:8069" becomes totally unavailable. Not even restarting the service openERP works. Any ideas? About This Community Odoo Training Center Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
https://www.odoo.com/forum/help-1/question/openerp-open-external-file-31748
CC-MAIN-2018-13
refinedweb
138
59.19
This implementation is optimized for getting values while walking forward through a UTF-16 string. Therefore, the simplest and fastest access macros are the _FROM_LEAD() and _FROM_OFFSET_TRAIL() macros. The _FROM_BMP() macros are a little more complicated; they get values even for lead surrogate code _points_, while the _FROM_LEAD() macros get special "folded" values for lead surrogate code _units_ if there is relevant data associated with them. From such a folded value, an offset needs to be extracted to supply to the _FROM_OFFSET_TRAIL() macros. Most of the more complex (and more convenient) functions/macros call a callback function to get that offset from the folded value for a lead surrogate unit. Definition in file utrie.h. #include "unicode/utypes.h" #include "udataswp.h" Go to the source code of this file.
http://icu.sourcearchive.com/documentation/4.4.1-5/utrie_8h.html
CC-MAIN-2018-13
refinedweb
130
64.2
Subject: [Boost-announce] [Review] Type Traits Extension by Frederic Bron - Review summary and decision From: Joel Falcou (joel.falcou_at_[hidden]) Date: 2011-03-26 12:57:27 Hello all, Frederic Bron's type traits extension proposal review ended up last week. In so far, we received SEVEN reviews which are all voting YES for the inclusion of Type Traits extension in Boost. So the result is that the Type Traits Extension is ACCEPTED into Boost. Suggestions include: - some clean-up in the implementation like "Renam[ing] the detail classes inside the specific namespace without using number, i.e. with a clear meaning. The use of tag, tag 2 could be renamed check_pass, check_fail or something like that." - Sensibility on cv-ref qualifiers in traits call is important. This is being addressed by Frederic. - Frederic is OK to keep the detection of void as a valid return type by using some dont_check type as default. - small comprehensive example of some of the traits like a "maybe_print function, which prints a value if the appropriate operator<< ostream overload exists, else prints "<NOT PRINTABLE>". [...] a small example like this would show a practical use of the library, and how to make use of enable_if, or some other overload method." - The main recurring suggestions found was the choice of name for the operator traits with respect to the standard naming, naming in proto and other boost libraries. This topic is still hot and alive. An alternative to the naming scheme consensus was proposed in the form of alias of commonly used operator traits. This issue is I think beyond bikeshed-color argumentation and I think it is a good point if Frederic could step up and provide a definite and rationalized answer to this point. The current status is : * the namespace options is out casue the standard didnt choose this way; * Frederic and a few other seems to favor the proto naming scheme (more or less the negate issue and the pre/post operator) * the question of a common prefix is still open * the std:: like naming seems limited as not all operators are there. Thanks to all reviewers and people participating to the discussions. Thanks to Frederic for his work. Boost-announce list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
http://lists.boost.org/boost-announce/2011/03/0297.php
CC-MAIN-2016-26
refinedweb
395
55.74
I was looking into the routines of the C time library, since I needed a way to keep track of time on the log file of a program. I found the way to do it was to have a time_t time_t localtime(time_t* argument) tm tm asctime(strcut tm* argument) struct tm struct tm "The function also accesses and modifies a shared internal object, which may introduce data races on concurrent calls to gmtime and localtime. Some libraries provide an alternative function that avoids this data race: localtime_r (non-portable)." struct tm struct tm struct tm MyProgramStruct; localtime(&Rawtime, &MyProgramStruct); localtime /* localtime example */ #include <stdio.h> /* puts, printf */ #include <time.h> /* time_t, struct tm, time, localtime */ int main () { time_t rawtime; struct tm * timeinfo; time (&rawtime); timeinfo = localtime (&rawtime); printf ("Current local time and date: %s", asctime(timeinfo)); return 0; } So, who creates the struct tm object? Is it the C time library in the first time it is loaded? According to the docs for the localtime() function, it returns a pointer to a statically allocated struct. That struct belongs to the C library; the function provides a pointer to it. The fine details of exactly where it lives and exactly when and how storage for it is allocated don't matter and may vary from implementation to implementation. You just have to understand that different calls by the same process work with and provide pointers to the same struct. That would mean that the first process that loads the library would declare the object and all other processes would just share this library with the object already declared? No. You do not need to worry about the struct being shared between processes, only about it being shared across multiple calls and between multiple threads in the same process. Wouldn't it be better, in order to avoid those Data Races issues, to just create a new struct tm object for each call and return a pointer to it so each program has its own structure? Maybe having a struct tm MyProgramStruct; localtime(&Rawtime, &MyProgramStruct);instead of a single struct for every program using Ctime? Any reason it's done this way instead? Again, it's not a cross-process problem, only an intra-process problem, which is still bad enough. Yes, that problem would be addressed by not sharing the struct, and that's what distinguishes the function localtime_r(), where it is available. But the existing function cannot be changed to do the same, because that would introduce a new requirement on users to free the provided struct. The designers of localtime() wanted to make it easy to use, and indeed it is, as long as you don't run afoul of the shared data issue. If your program is single-threaded, then you can avoid it fairly easily. localtime() is not the only standard library function with an issue of this sort. Finally, is using C time library localtimeroutine a bad practice because of the possibility of the unsynchronization of programs leading to a wrong output? Not for that reason, no, because there is no cross-process problem. You do need to pay attention when using localtime() and other functions with similar static storage, such as strtok(). But you can judge on a program by program basis whether there is any data race -- you do not need to worry about interference from unspecified other programs.
https://codedump.io/share/BMr23zwZ8jt9/1/lttimehgt-shared-internal-object---struct-tm
CC-MAIN-2017-13
refinedweb
566
58.62
In GIMP there is a very simple way to do what I want. I only have the German dialog installed but I’ll try to translate it. I’m talking about going to Picture -> PrintingSize and then adjusting the Values X-Resolution and Y-Resolution which are known to me as so called DPI values. You can also choose the format which by default is Pixel/Inch. (In German the dialog is Bild -> Druckgröße and there X-Auflösung and Y-Auflösung) Picture -> PrintingSize X-Resolution Y-Resolution Pixel/Inch Bild -> Druckgröße X-Auflösung Y-Auflösung Ok, the values there are often 72 by default. When I change them to e.g. 300 this has the effect that the image stays the same on the computer, but if I print it, it will be smaller if you look at it, but all the details are still there, just smaller -> it has a higher resolution on the printed paper (but smaller size... which is fine for me). 72 300 I am often doing that when I am working with LaTeX, or to be exact with the command pdflatex on a recent Ubuntu-Machine. When I’m doing the above process with GIMP manually everything works just fine. The images will appear smaller in the resulting PDF but with high printing quality. pdflatex What I am trying to do is to automate the process of going into GIMP and adjusting the DPI values. Since ImageMagick is known to be superb and I used it for many other tasks I tried to achieve my goal with this tool. But it does just not do what I want. After trying a lot of things I think this actually is be the command that should be my friend: convert input.png -density 300 output.png This should set the DPI to 300, as I can read everywhere in the web. It seems to work. But when I check the file it stays the same (EDIT: which is what I expect, as explained above). file input.png output.png input.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced output.png: PNG image data, 611 x 453, 8-bit grayscale, non-interlaced When I use this command, it seems like it did what I wanted: identify -verbose output.png | grep 300 Resolution: 300x300 PNG:pHYs : x_res=300, y_res=300, units=0 Funny enough, the same output comes for input.png which confuses me... so this might be the wrong parameters to watch? input.png But when I now render my TeX with pdflatex the image is still big and blurry. Also when I open the image with GIMP again the DPI values are set to 72 instead of 300. So there actually was no effect at all. Now what is the problem here. Am I getting something completely wrong? I can’t be that wrong since everything works just fine with GIMP. Thanks for any help in this. I am also open to other automated solutions which are easily done on a Linux system. This question came from our site for professional and enthusiast programmers. ^ Specify the units - I seem to remember having a problem when I omitted this option (although DPI should be the default), for example: convert -units PixelsPerInch input.png -density 300 output.png Do you know which embedded data fields GIMP uses to read the resolution - does it have its own that override the standard ones used by ImageMagick? For example, Photoshop uses Photoshop:XResolution and Photoshop:YResolution so you have to set these for Photoshop to recognise a density setting (ImageMagick can’t do this - we use ExifTool). Photoshop:XResolution Photoshop:YResolution -density 300 Note that you can use Exiftool to read out resolutions. For example, Exiftool '-*resolution*' c.jpg might show Exiftool '-*resolution*' c.jpg Resolution Unit : inches X Resolution : 300 Y Resolution : 300 Exiftool also is able to set parameters, but as noted in man page Image::ExifTool::TagNames, the Extra Tags XResolution and YResolution are not writable by Exiftool. Image::ExifTool::TagNames I don't know whether ImageMagick has resolution-changing options, but would be surprised if it doesn't. Also, it is straightforward to write GIMP scripts to automate tasks like this, and also it is possible to change resolutions with small programs. For example, following is a C program (compilable via gcc setRes.c -O3 -Wall -o setRes) that reads the first few bytes of a jpeg file, changes resolutions to 300, and rewrites them. The program as shown uses constants for little-endian machines, like x86. If run on a big-endian machine it should terminate with a message like Error: xyz may be not a .jpg file, even if xyz is a jpeg file. Note, I haven't tested the resulting pictures via pdflatex; you probably would find it worthwhile to post a question in the tex SE. gcc setRes.c -O3 -Wall -o setRes Error: xyz may be not a .jpg file /* jiw -- 24 Sep 2012 -- Re: set resolution in a jpg -- Offered without warranty under GPL v3 terms as at */ #include <stdlib.h> #include <stdio.h> void errorExit(char *msg, char *par, int fe) { fprintf (stderr, "\n%3d Error: %s %s\n", fe, msg, par); exit (1); } // Note, hex constants are byte-reversed on little vs big endian machines enum { JF=0x464a, IF=0x4649, L300=0x2c01, B300=0x012c, NEWRES=L300}; int main(int argc, char *argv[]) { FILE *fi; short int buf[9]; int r, L=sizeof buf; if (argc<2) errorExit(argv[0], "requires a .jpg file name", 0); fi = fopen(argv[1], "r+b"); if(!fi) errorExit("open failed for", argv[1], ferror(fi)); r = fread(buf, 1, L, fi); if (r != L) errorExit("read failed for", argv[1], ferror(fi)); if (buf[3] != JF || buf[4] != IF) // Check JFIF signature errorExit(argv[1], "may be not a .jpg file", 0); buf[7] = buf[8] = NEWRES; fseek(fi, 0, SEEK_SET); r = fwrite(buf, 1, L, fi); if (r != L) errorExit("write failed for", argv[1], ferror(fi)); return 0; } "I want to change DPI with Imagemagick without changing the actual byte-size of the image data." This is completely impossible! Because: more "Dots per Inch" <==> more pixels per area <==> more total pixels per image <==> more total bytes per image Also you don't seem to understand what DPI in reality is: 72dpi 288dpi If you want to print the original 72x72 pixels image as a 1 inch wide square, but at 288dpi, then you'll have to rescale the image (in this case scaling it up). For every 1 pixel in the original you'll need 4 pixels of the new, upscaled image. Now there are different algorithms which can be used to compute what color values these 4 pixels (3 of them new pixels) should have: In any case you are creating a bigger image consisting of 288 rows of pixels which are each 288 pixels high (288x288 pixels). What Gimp does for you when you go through "Picture -> Printing Size": it simplifies the process of re-calculating the required changes in absolute pixel sizes, making it more user-friendly. For this purpose... cm mm inch According to these two pieces of info Gimp then calculates the total number of pixels it has to use (extrapolate from the original number of pixels) to fill the requested space at the requested resolution. However, scaling up a raster image by making it contain more pixels does not add real info to it, and it does only add 'quality' to it which is fictitious. It may look nicer to the human eye if your scale up algorithm is a 'good' one. And it will look ugly, if you just double, treble or quadruple existing pixels, like some simple algorithms do. For raster images, the DPI setting is only relevant in the context of printing or displaying it. Because printers or monitors have given, fixed resolutions. Therefor it is info that only... need to know. And ImageMagick's documentation is in full agreement with me: -density width -density widthxheight Set the horizontal and vertical resolution of an image for rendering to devices. -density width -density widthxheight Set the horizontal and vertical resolution of an image for rendering to devices. -density width -density widthxheight For vector images or file formats (such as PDF or PostScript) the DPI setting however is extremely important in the context of rasterizing them. A higher DPI will transfer more picture information into the raster format and hence preserve more details from the real original quality. When converting a vector image of a given size in mm, cm or inch into raster with a higher DPI will directly translate into a higher number of total pixels in the image. Also, ImageMagick does not support 'printing' as such. Instead, ImageMagick only... ...but to print the manipulated images, you need to use a different program. Some image formats (TIFF, PNG,...) do support storing a DPI setting internally in their meta data. But this is no more than a 'hint' attribute that does not alter the underlying raster image. That's the reason why you made this discovery: "When I check the file it stays the same." "When I check the file it stays the same." This 'hint' can possibly be automatically evaluated by printer drivers or by page creation programs such as LaTeX. In the absence of such DPI 'hints' (or if they somehow don't present themselves in the way LaTeX expects them to do), LaTeX should still be able to be commanded to render any given image on a page the way one expects it to -- it needs just some more explicit LaTeX code around the image! Some other image formats (JPEG(?), BMP,...) do not even support storing a DPI hint at their internal meta data. So Gimp only does support what you see it's doing with "Picture -> Printing Size" because it wants to print an image. With ImageMagick you cannot print. Keep doing what you want to do with Gimp when you print. It doesn't make sense with ImageMagick. See also this additional IM documentation snippet, which explains the very same topic in different words. So what remains is this: Please provide the following to resolve the above issue: convert -version convert -list configure This way we can help to solve the problem. But note: this is a different problem from what your current subject/headline asks: "I want to change DPI with Imagemagick without changing the actual byte-size of the image data" Since it is still not clear to some readers what I noted above, here is one more attempt... Whatever is noted as 'Resolution' or 'Density' inside an image file, is a metadata attribute. It has no influence upon the number of actual pixels described by the file and is completely irrelevant in this respect. It is just a hint which a printing or rendering device or an application may or may not follow when printing, rendering or displaying the image. For this purpose, it is just a a few number stored within the image file. These numbers tell output devices such as printers and displays how many dots (or pixels) per inch the image should be displayed at. For vector formats like PostScript, PDF, MWF and SVG it tells the pixel scale to draw any real world coordinates used by the image. One example, where the resolution value noted by ImageMagick inside the image metadata is NOT honored by an application is Adobe Photoshop. Photoshop stores its hints about a desired print or display resolution in a proprietary profile named 8bim. ImageMagick does not touch this profile, even when asked to write a resolution change into the metadata of an image file. Photoshop on the other hand, will ignore all resolution hints stored by ImageMagick in the otherwise standard metadata field that is defined for this purpose as soon as it sees its own 8bim profile. The OP should have chosen the heading: in order to avoid all misunderstandings... I could not figure out how to convince convert to only add the metadata and not re-encode my [monochrome] bitmap; it was expanding the file >50%. I discovered that pngcrush (not an ImageMagick tool) can also add the density metadata. This command line marks it 600dpi and allows other optimizations, which reduced the file size by ~10%: pngcrush -res 600 in.png out.png By posting your answer, you agree to the privacy policy and terms of service. asked 2 years ago viewed 22088 times active 1 month ago
http://superuser.com/questions/479197/i-want-to-change-dpi-with-imagemagick-without-changing-the-actual-byte-size-of-t/479198
CC-MAIN-2015-35
refinedweb
2,097
62.07
On Wed, Oct 12, 2016 at 11:52:02AM +0200, Bernd Schmidt wrote: > On 10/12/2016 11:31 AM, Markus Trippelsdorf wrote: > >On 2016.10.12 at 00:34 +0200, Bernd Schmidt wrote: > >>It's a discussion we should have, but I agree it should be done > >>incrementally. I would argue for =1 as the default. > > > >Here are some numbers for an allmodconfig Linux kernel on pcc64le: > > > >-Wimplicit-fallthrough=1 : 951 warnings > >-Wimplicit-fallthrough=2 : 1087 warnings > >-Wimplicit-fallthrough=3 : 1209 warnings > > > >I randomly looked at the differences and almost all additional > >-Wimplicit-fallthrough=2 warnings are bogus (~5% are genuine). > >And _all_ additional -Wimplicit-fallthrough=3 warnings appear > >to be bogus. > > And that's for a codebase that was written in English to begin with. Would > you mind posting one or two examples if you saw interesting ones, for > reference? > > This result suggests that we should probably collapse levels 3-5 into a > single strict one that doesn't try to be clever, and definitely make at most > level 1 the default. What do you mean at most? level 0 is the warning disabled, that is the default except for -Wextra. The difference between =1 and =2 is very small amount of warnings, one will need to annotate or add break; to those 951 spots anyway to make it -Wextra clean (those that don't have any kind of comment at all), so just handling the additional 136 ones as well is not that big deal. It would be interesting to see those 136 comments though, whether anything in them is about the intentional fall through or if they are just unrelated. Collapsing 3-5 levels is a bad idea, various projects will want to only rely on attributes, especially if other compilers only support attributes and nothing else, which is why there is level 5. Levels 4 and 3 give choice on how much free form the comments are. > Another thing, was it ever resolved what this warning does to tools like > ccache which like to operate on preprocessed files? We already have 2 real-world PRs about cases like: case 2: bar (); /* FALLTHRU */ #undef X case 3: #ifdef Y bar (); /* FALLTHRU */ #endif case 4: bar (); not being handled, which are extremely difficult to handle the current way in libcpp, there can be many tokens in between the fallthrough_comment_p comments and the case/default/label tokens. It should work fine with -C or -CC. So I'm wondering if a better approach wouldn't be that for -Wimplicit-fallthrough={1,2,3,4,5} we'd let the fallthrough_comment_p comments get through (perhaps normalized if not -C/-CC) as CPP_COMMENT with the PREV_FALLTHROUGH flag and perhaps also another one that would indicate it is really whitespace for other purposes. The -C/-CC description talks about significant differences though, e.g. /*FALLTHROUGH*/#define A 1 A is preprocessed differently with -C/-CC and without, so if we want to go that way, we'd also need to special case the non-C/CC added CPP_COMMENT. I have no idea what else it would affect though :(, I'm worried about token pasting and lots of other cases. If that is resolved, we could just emit the normalized /*FALLTHROUGH*/ comments if {-Wextra,-W,-Wimplicit-fallthrough{,=1,=2,=3,=4}} into -E output too and just document that we do that. Of course, for ccache I keep suggesting that they use -fdirectives-only preprocessing instead, because anything else breaks miserably tons of other stuff we have added into GCC over the last decade (-Wmisleading-indentation, -Wsystem-headers vs. macro contexts, etc.), but the maintainers disagree. Jakub
https://www.mail-archive.com/[email protected]/msg149768.html
CC-MAIN-2017-17
refinedweb
605
57.61
Hello Friends of Integrated Planning, thank you very much for all the feedback I received on the File Upload/Download how-to over the past years. I have great news: Basically every development request has been implemented! Yes, this means that there is a big load of new features available with version 3. Upload and download of CSV files and a new user interface that allows the preview of the file and plan data before you save it are just two of the highlights. The new version is also compatible with SAP BW 7.4. Prerequisites Minimum release to use the new version is SAP BW 7.30. Download You can download the complete how-to guide as well as the solution from SAP Note 2053696. Enhancements The following list shows the changes and enhancements in version 3 compared to the previously published version 2.4: - Enabled conversion exit in variable screen - Removed context info from message output (can be enabled again with show_messages parameter) - Added search help for all selection fields including special characteristics like fiscal period - Added check that 0INFOPROV must be filled for uploads on MultiProviders - Support for CSV format for upload and download (new parameters for data separator and escape character) - Improved auto-detection of file format - Added info messages to display version and detected file format - New parameter for checking for duplicate records - New parameter to define display of +/- sign for download - New and improved alternative user interface - Function to generate the required master data for ZIP_* InfoObjects - Added BADI for performing custom transformations during upload and download - Integrated File Upload/Download with Report-Report-Interface - New parameter setting for download to select field description instead of technical name in header line - Automatic recognition of UTF byte-order-mark during upload - Added ready-for-input variables for all parameters - v3.1- Added support for XLS format for upload with SAPGUI- Added load from application server which enables upload from Analysis Office - v3.2- Minimum release increased to SAP BW 7.30- File Upload is now supported for SAP BW 7.40 – Updated screen shots to show GUI-based planning modeler and web-based file upload/download application - v3.3- Added enhancement to support F4-BADI- Added option to sort variables on selection screen – New option to display instructions and support information to end-users - v3.4- Added support for formula variables- New option to set maximum number of visible messages in log – Added support for exit variables which are ready for input - v3.5- Download of key figures other than “Amount” or “Quantity” will take “Decimal” setting in to consideration when using “Convert Fields” option - v3.6- Improved error message in case of character format/encoding issues- Added detection of incorrect field separator (semicolon instead of comma and vice versa) – New Standard File Upload functionality – Support for Advanced DataStore Objects for SAP BW powered by SAP HANA - v3.7 – Added link to documentation of BW workspace solution as standard alternative to this how-to solution- Included chapter in documentation about required authorizations - v4.0 – Support for upload and downloading comments (characteristics as key figures)- Improved error messages when uploading files in CSV and TXT format - v4.1 – Added option to sort columns of data preview (file preview is always sorted the same as the file format)- Added optional URL parameters for defining maximum number of visible rows for file and data previews - v4.2 – Added optional URL parameter for displaying “logoff” button and redirecting to a target URL after logoff Preview Here are a few screen shots of the version 3 user interface (Note: The old UI is still available in the version 3 transport). For more details, please refer to the how-to guide (see “Download” section above). File Upload Selection Screen: File Upload Preview Screen: File Download Selection Screen: File Download Preview Screen: Your Feedback As always, I appreciate your feedback. It’s as simple as adding a comment to this blog. Enjoy the new File Upload and Download for BW-Integrated Planning! Hello Marc, thank you for your quick and helpful answer! Best regards, Uwe Hi Marc, Thanks for such a wonderful blog. We have a little different requirement in terms of options for loading flat file in the system: 1. Can we perform load with multiple Amount/Volume column, each representing a time period? 2. Can we use the characteristic relationship that we have maintained for infocube to derive the fields while performing the writeback of data to database? Your inputs will help us big way. Thanks Again for your valuable blog. Regards, Mukesh Hello Mukesh, yes, you can load files like you describe. However, you will have to use the BADI to transform your file structure to the structure of the aggregation level. There’s an example provided in the appendix. The upload automatically uses all characteristic relationships – just like any other planning function. Best, Marc, Product Management SAP HANA DW Hi Marc, thanks again for your very useful solution. I integrated the file upload web dynpro into my web planning application. Now i have probles with the different prefixes of the webdynpro path when transporting from dev to test and prod. I implement a custom web item and compute the prefix as described by you. The cusom web item returns the prefix of my webdynpro path. But If I want to work with tis result running a JavaScript within <Body onload=JavaScript:”function();”> Can you provide a small example webtemplate code for showin how to use the result of the custom web item to generate the URL to the planning sequence. I think this would be very helpful to all. Thank you very much. Best regards Timo Hi Timo, I’m sorry, but I’m too busy with SAP BW/4HANA and won’t have time to develop this. I’m sure that others have coded it already. Please search in the community or ask for help there. Best, Marc Product Management SAP HANA DW Hi Timo, hi Marc, at my current project we also would like to integrate the file upload functionality into a web template, expecting that with the upload we would not encounter locking errors and the user can edit the uploaded data right after upload in the planning mask. What would be the best way to integrate the functionality into a web template? Is it as Timo write to create a custom web item that call the web dynpro? Would it also work, if we just integrate the planing sequence (that includes the file upload function) into a button in WAD? thanks and best regards Cornelia Dear Marc, Unfortunaly, Hierarchy node variables are not supported yet in the how to solution but have you planned to develop it? Many thanks, Best regards, Jonathan PS. We are in BW 7.5 SP6, but we still use your solution because new standard file upload solution does not work properly yet. Hi Jonathan, Nice to hear that you are still using it (well, I know many customers are 😉 ). I’m sorry, but I’m too busy with SAP BW/4HANA and won’t have time to develop the enhancement for hierarchy node variables. If you come across a working WebDynpro for these variables, I’m happy to take a look and try to integrate it (if allowed to reuse the code). Best, Marc Product Management SAP HANA DW Standard function 0RSPL_FILE_UPLOAD_AO (file upload for analysis for office) doesn’t work and looks like it will not work for along time. See note 2461345 for more details. Hi Marc, we encountered the same problem as Aleksandrs that planning function type 0RSPL_FILE_UPLOAD_AO is currently not supported within Analysis for Office. The note 2471638 suggests to wait. Do you have an idea in which version of Analysis it may be expected? It’s not included in the latest roadmap of Analysis for Office. Thank you, Alexey Hello Aleksandrs, Alexey, I don’t know any details about the AO roadmap or when the upload function is planned for. Either continue to use the File Upload I provide or contact the AO team for more information. Best, Marc Product Management SAP HANA DW Hi Marc. I’ve been using your utility for good few years now with good results. However we have recently upgraded to 7.5 and consequently imported new version of the tool. As a result it stopped working. It seems that new version performs file format checks in a different way. In version we used previously, one of the KF to upload was with currency but there was no currency in the file. Instead, variable exit of the very first variable in the filter was reading first record of the file and determined currency based on its content. This doesn’t work anymore, apparently new function is checking file consistency (here-missing currency) before variable exit. When I set currency as fixed in planning filter, upload works, even though I stil don’t see exit variables setting locks in rsplse. I was wondering where is a proper place to apply code to continue using application as we did before i.e. being able to set variables based on the first record of uploaded file, instead of setting them upfront. Thanks, Marcin Hi Marcin, In general, the filter is set before the file is processed by the planning function (upload). This means defining variables has to happen before the actual upload function. Something like this is not foreseen. The file content is stored as an attribute in the assistant class (by the WebDynpro). You can use this in your exit. However, it’s in raw format and you will need a bit of ABAP to get to the actual key figure or currency key value. Best, Marc Product Management SAP HANA DW Hi Marc, We have tested and retested behaviour in BW 7.3 SP8 and BW 7.5 SP5 (with only change being new version of upload function applied from the note). I am 100% certain that variables in 7.3 are called again after data is imported in its raw format (i.e. after click on upload button) to BW and we did all the ABAP to read it and use for exit variables. First variable has also logic to validate file / data format, then other are used for derivation of values (e.g. is sales org in first record is S001 then Company Code determined by variable is DE01). In 7.5 exit variables run only once, even before user click on browse. this means we cannot set variables depending on loaded data. I wonder if there is any way to restore behavior from 7.3? Regards, Marcin Hi Marcin, chapter 5.13 lists a few SAP Notes related to this issue. Please make sure you have SAP Note 2200044 implemented with the proper RSADMIN switch. Best, Marc Product Management SAP HANA DW Hi Marc, I want to upload full backup of InfoCube to the system, I am getting this error message when running a plan sequence: I am uploading a csv file downloaded from system, picked one line a changed a key figure to test the upload. Regarding the error I tried to enter in excel 10,010 change it to textto number, but there is no way forward. File Upload: Data conversion error Check if character format (encoding) is correct: DEFAULT I have set up the filter like this (I was forced by the system): Answer much appreciated, Tomas. Hi Tomas, you have to be careful which program you use to edit a file. Excel probably saved it with a different character encoding (for example “Western European Windows” format). You can change the format under File > Save As > More options > Tools > Web options. Either use a Text Editor that keeps the character encoding of the original file or change the character format in the upload function. Best, Marc Product Management SAP HANA DW You are absolutely right, thanks for your remark. Hi what is the character encoding that works? In the web options, I saw that my xls is set on “western european windows”. I changed to unicode, utf8, us-ascii, no luck. Regards Yann Found it: there was a “scandinavian” character in my data set. ø is not accepted in the upload… Changed it to o and it worked. Hi Marc, i am having issues with values for two fields after Uploading files and refreshed the data, System transposes the the values of the two fields to each other. e.g before the download lets say field A- 1.000 and field B- 2.000, After the upload and refresh the Data the value Transposes to A- 2.000 while B- 1.000. please what might cause the problem and possible way to fix this. My email is [email protected]. Thanks Sam Hi Marc, I am just wondering if the upload would work also for new master data together with values? Thanks a lot for a short information. Kind regards Sabine Hi Sabine, no. Master data must exist before plan data can be loaded (unless of course an InfoObject does not have a master data table). It’s the same with any planning function. Best, Marc Product Management SAP HANA DW Hi Marc, we are trying to import the transports into our HANA 7.5 SP 6 and receive the following error message Does this ring a bell? thanks for any pointers Regards Manuela Hello Manuela, the transport is compatible with 7.3 and higher. Please import with “ignore component versions” turned on. Best, Marc Product Management SAP HANA DW Hi Marc We are migrating BW to v7.4 so upgrading from v2.4 to the v3 version of this file upload. v3 works on new system for uploading actuals, target etc. but I have a commentary upload (ie no key figures in upload file) which doesn’t work. Running the planning sequence and selecting te file through gui or web just says ‘0 records read’. If I run the same file using 2.4, it reads and updates. (I would just use 2.4 but it doesn’t wok via web, only through gui) any advice? Thanks for your help! Hi Iksit, I’m not aware of this issue. Could you please send me an email? Best, Marc Product Management SAP HANA DW Hi Marc, is it possible to upload or download characteristics as key figures with your solution? I stil post this question even though I fear I already know the answer … kind regards & keep up the good work Thomas Hello, Marc! Thanks for great updates for the solution! I have a question about authorizations concept. using authorizations, provided in appendix, leads to errors while save process. web dynpro is getting started, but it’s unable to save data, system generates error message “you have no authorization for info-cube Z* with 02 ACTVT”. is it necessary to set 02 actvt for this operation? or may it be local error in customized roles? please advice Hello Elena, yes, the user needs change authorization. Please adjust the role accordingly. Best, Marc Product Management SAP HANA DW Hi Marc, I have configured the file upload functionality and it works fine. The user is presented with a web layout where they enter varialbles, browse and upload their files. However now the users want to to upload large files (around 2 million records). The Web layout takes about 25 mins for the upload. However the ask is if this can be pushed to a “background” job. Which means the user uploads a file to a shared folder and the Planning Sequence is called through a process chain passing the File Name as a Variant. Will this work? Regards, Prasanth. Hi Prasanth, the upload was never intended for mass data. That’s also documented in the how-to guide (PDF). It’s much better to use the standard file upload feature for BW. There is an option to run the file upload planning function in batch but it’s not integrated with the web UI. You would specify the file name on the application server (has to be accessible from AS ABAP) in the planning function parameters. Then schedule the corresponding planning sequence as a process chain. Two challenges: Setting the right variable values and concurrent planning sessions/locking. For a single user case, these are easy (hard code the variables in the process chain, tell the user not to change data at the same time) but for multiple concurrent planning users this won’t be easy especially if they need to plan on the data at the same time. Let me know if you make this work in a productive environment 🙂 Best, Marc Product Management SAP HANA DW Hi, Marc. I’ve seen that not only me has the issue with characteristic as key figure. I tried a simple scenario. A direct update DSO with planning mode – ON is included into aggregation level. In DSO I have an object for comments which is set to characteristic as key figure. And I can plan it for example in RSRT. I checked this. But when I use File upload tool I have no errors, but any text placed in “comments” column is being converted to 0. Is there any possibilities to work with characteristic as key figure. Or may be you can give a cue how(in which method) it can be solved? Thank you. Hello Yuri, I have a solution ready for testing. Please send me an email. Best, Marc Product Management SAP HANA DW Hi Marc, Is this fixed with Version 4.0. I am have an agg level on a DSO with comments in a column. When I load the data it says 1 records read and 0 generated. Although the file has 12 records in it with comments. Regards, Prasanth. Never Mind. The comments upload only seems to work only for Comment Characeteristics with Master data, if not it does not do anything. Regards, Prasanth. Hi Marc, I am testing the function in BW on Hana 7.5 SP6. It works well initially, but despite having set the planning fucntion to X -Overwrite – once I have loaded the data one time and activated it (ADSO is set like planning cube – all char are key) if I try loading the file again, it gives me Error while reading data; navigation is possible (BRAIN289) Upload xxx ended with errors Message no. RSPLF114 Any Idea what I am doing wrong? regards Manuela Sorry, Manuela. I don’t know why this is happening. Please open an incident so SAP Support can take a look. Best, Marc Product Management SAP HANA DW Hello Marc, I am using BPC 11 system BW4HANA version. I have implemented the note you have suggested (Version 3). Unfortunately it was not working. Will you please help us the latest note no for flat file upload via planning ? Version 4 or something for BW4 HANA? BR, Arup. Hello Arup, the solution also works for SAP BPC 11, version for SAP BW/4HANA. Please use version 4 of the solution which you can find attached to SAP Note 2053696. Best, Marc Product Management SAP HANA DW Thanks Marc. It is working. Hi Marc, the import for small files works well, but now I have a problem with a lagre file with 200 columns. Is there a limitation in the planning function? The field “File Format” only displays 250 chararcters. regards Klaus Hello Klaus, The display of the format will just be concatenated on the parameter screen (which you show). The file format can be as wide as you like. If you send me an email, I can provide a program to upload the file format from a txt file. Best, Marc Product Management SAP HANA DW Hi Marc, we use your solution with great success in our excisting planning solution on BW 7.3. Now we wanna create a new planning tool on BW/4HANA with BPC 11.0. Will there be a future solution for your upload Tool? Or is it an standard function for BPC 11 in the future? We really need such a flexible upload possibility also for then new solution 🙂 Tnx for your response Christian Hello Christian, the solution also works for SAP BPC 11, version for SAP BW/4HANA. Please use version 4 of the solution which you can find attached to SAP Note 2053696. Best, Marc Product Management SAP HANA DW Perfect! Many tnx for the quick reply Hello Marc, now that our customers start using your valuable upload function within the embedded BW of our simple finance implementation, we came across a strange issue: F4 help does not work properly in some cases, the “Search” button seems to be with no function. Are you able to have look into our issue 580950 / 2017? I think you will need to look into our system to believe it. 😉 As expected SAP does not give standard support on this. Thank you and best regards Uwe Hello Uwe, The solution is in the customer namespace but still build using standard SAP components. In your case, it’s ABAP WebDynpro with an SAP Search Help and this should work of course just like in other standard SAP applications. I will take a look at the issue and get back to you via your customer message. Best, Marc Product Management SAP HANA DW Hello, We are currently using your enhanced version of File Upload in our project. Unfotrunatelly we noticed one problem. The order of columns that are displayed in the File Preview is correct and consistent with the fields order definition in the FILEFORMAT parameter of the planning function. We would expect that also the Data Preview will show the columns with the correct order. But here the order is different and seems random. Is it possible to correct/change the order somehow? I tried to create a new view with the changed layout of the Data Preview. The problem is that this view is then available only for the specific user. Is it possible to create a view that will be available and initial for all users? Is it possible to transport this setting to other systems? I raised an Incident for this issue but SAP Administrator asked me to contact directly with you. Hello Wojciech, good observation and questions. No one has complained about the field order yet… I get the point and understand that it’s not optimal. The order is given by the internal data structures used by BW-IP. The data is displayed using a standard Web Dynpro ALV control. The initial view is configurable. Here is the documentation: Personalizing SAP List Viewer Configuring Standard ALV Functions Specifying the Initial View of Applications I have not had much time to play with it but there is one limitation: The fields are defined dynamically based on the aggregation level. If you are using the upload for several different levels, then I believe you won’t be able to have different initial views. :-(. So… I enhanced the solution to allow you to maintain the initial order of columns separately per planning function. If you are able to update to a new version, send me an email and I will provide it to you for testing :-). Best, Marc Product Management SAP HANA DW Hello, I’m trying to use the BADI’s method transform_file to transform a keyfigure-based file to an account based format. To my surprise, the method transform_file is called in step 6 of the planning function logic while method _convert_string_to_tab is called in step 3, so I can’t transform the data in the file to match the planning function. Do I misunderstand the use of this BADI? Kind regards, Jorg Hi Jorg, transform_file is the correct place do change the format. For example, the file contains KF1, KF2, KF3. You also need to include ACCOUNT and AMOUNT (as empty fields) if it’s not included yet. Then you can transform the records in c_t_file and move the data from each KF to a new record with the corresponding ACCOUNT and AMOUNT. The file upload will then ignore the KFx fields and take the data from ACCOUNT and AMOUNT (which obviously need to be in your aggregation level). Best, Marc Product Management SAP HANA DW Hi Marc, great piece of code! I have a feature request: I would like to run the execution with packaging (described here: – in the execution of a planning sequence in process chains you can select it ). Currently it seems that it runs without packages but I have the need to write the planning buffer in-between. Is it possible to integrate this as a flag maybe in the URL as a parameter? Best regards, Alexander Hi Alexander, The complete upload function works on the premise of a single file that is processed in one package. So, unfortunately, this is not feasible to allow for automated packaging. However, you should be able to create your own set of filters and either add them as separate steps to the sequence or use several sequences within a process chain. Best, Marc Product Management SAP HANA DW Hi Marc, thanks for this great development. Unfortunately we were struggling with our secured namespace because of the ‘/’. If we use the comment function we need use characteristics as a key in the direct-update DSO. The generated InfoObject name will contain ‘&’ and this InfoObject name is a problem when calling function module RSD_IOBJ_INCL_ATR_NAV_GET. This problem could be handled by replacing this ‘&’ and the generation namespace prefix by the original namespace. Do you think you can add this in the next version? Best regards Andreas Hallo Andreas, It’s best to check this case in your system. Please create an incident report and ask support to forward it to my name. Best, Marc Product Management SAP DW Hello Marc, incident 204833 / 2018 is created. Not sure if it is already forwarded to you. Thanks and regards Andreas Hi Andreas, the solution is now available via SAP Note 2053696 as well. Best, Marc Product Management SAP DW Hello all, is it possible to create / delete columns in the download file planning function type? Best regards Hello Oliver, the file format can contain InfoObjects that are not included in the aggregation level. You can fill them using a BAdI. If you do not want an InfoObject in the output, just don’t include it in the file format. Best, Marc Product Management SAP DW Hi Marc bernard, i tried to follow sap note step by step guide(2053696) but i tried to test the planning sequence i encountered an error File Upload: Unable to open file: C:\Users\Raytejana\Documents\SAP\SAP GUI\SAP_BEX_FI thanks in advance, Ray Tejana Hi Ray, when you execute the file upload in transaction RSPLAN, then as a default, a fixed file name and path are used (defined by SAPworkdir environment variable). Change the file name in the parameters of your upload planning function to “Prompt”. Then the system will ask you for the file name in RSPLAN. Note: This is usually only for testing. Your planning users will use the WebDynpro and select the file that way. Best, Marc Product Management SAP DW Hi Bernard, Thank you for your fast reply, i tried uploading this sample excel data. but encountered errors.but it seems more of master data issue. should i load the info cube first to recognize the data? apologies this is first time implementing IP and also about WebDynPro as mentioned in the guide to access it via webdynpro i need to http://(customer server):(port)/sap/bc/webdynpro/sap/zrsplf_file_download_v3? planning_sequence=FILE_SEQ2 FILE_SEQ2 is the name of my sequence i already activated the service but when i tried to access the link but its unreachable. should be seing the interface of the upload program? is there a step i missed or is this something that basis team can help me? thanks, Joseph hi bernard , i successfully accessed the UI the problem is more of the security issue. i tried to upload but i still recieve the same error messages Hi Joseph, you have to maintain the master data for the characteristics Version and Customer first. Best, Marc Product Management SAP DW Hi Bernard, We extensively and successfully implemented your application for our client, this is really powerful thanks ! We are only struggling with one open point: in the data selection header we need to restrict the F4 value help to the list of authorized value. We can do it using the F4 restriction BADI ; we have created a variable, put it in the filter, and also created associated BADI implementation ; this is working fine if we use the variable in a BEX query for instance. However, if we open the Webdynpro application, the variable is present in the data selection, but in any case the value help is restricting the values as defined in the BADI. The BADI is not even triggered actually, when putting an external break-point this is not going inside. Any insight ? Do you think we can open OSS message for that ? I am not 100% how to what extend your solution is supported by SAP officially i mean. Many thanks. Best regards, Vincent. Hi Vincent, the F4-Badi is supposed to work also with the WebDynpro. Please check that the enhancement implementation ZRSPLF_F4_ENHANCEMENT is active in transaction SE19 and the code is present in the active version of function RSD_CHA_HELP_VALUES_EXIT. The start of the code should look as follows: I suggest to put a break-point into this RSD function. It should call the F4-Badi from there. Best, Marc SAP HANA Competency Center Thanks Marc for your quick and clear response ! I am not sure to get you 100% still… : >> However in debug this is still not going into my implementation of enhancement spot RSR_VARIABLE_F4_RESTRICT where my custom code to restrict the list is actually defined in an implementation class calling method IF_RSR_VARIABLE_F4_RESTRICT~GET_RESTRICTION_FLAT. Am i missing something obvious in here ? Thanks again. Best regards, Vincent Check that the enhancement spot implementation has the correct filter i.e. proper name of the InfoObject that is used behind the variable. Best, Marc SAP HANA Competency Center Hi Marc, Thanks, actually i am concerned because yes the enhancement implementation is properly configured, and actually this works smoothly when i try to execute the planning sequence in RSPLAN. But when opening the WebDynpro then there is no restriction anymore and if i put an external break-point inside the enhancement spot it does not go inside. Do you think we can open OSS message to get support on this ? (except if you have other ideas to suggest). Thanks again ! Regards, Vincent I tested it again and the F4-BAdI is certainly called by the WebDynpro as well. Please create an incident with open GUI and HTTP connection (have it forwarded to me). Best, Marc Hi ! Thanks a lot. I have created ticket, it should be forwarded to you: 357702 / 2018 F4 value help BADI not triggered in Webdynpro Upload File Meanwhile i have asked for system opening. It may take a few days, there is no emergency on this topic for the time being. Best, Vincent Hello Marc, I am trying to upload a CSV , after some transformation which results in a huge data set ( C_th_Data has finally more than 30 million records ). The upload takes a lot of time in the process EXECUTE_SERVICE (CL_RSPLFR_CONTROLLER) and fails due to internal memory issue. Can we skip the above process and as I do not need any validation. Please note the transforamtion works fine for a small set of data. Thanks, Sharad Hi Sharad, please see the note in chapter 2 of the how-to guide: There’s no way to improve performance or reduce memory consumption of the upload for such high data volume. Sorry, but planning functions – not just the file upload – are not designed for this. Best, Marc SAP HANA Competency Center Thanks Marc for the quick response. If I reduce the volume to 5 million would it still be considered as a mass upload ? Thanks, Sharad The file upload will have C_TH_DATA and at least one copy (sometimes several copies) in memory. Several million records probably still lead to memory issues. You will need some trial and error to find the maximum for your system (also with other concurrent users). But please, no complains about performance. If you want a fast load (that also requires less memory), use ETL. Best, Marc SAP HANA Competency Center Thanks Marc .. 🙂 Hello, is it possible to integrate a button for a logout? Only closing the web browser is in some cases not enough to cancel a lock. Thank You and best regards Ricarda Seyb Hello Ricarda, interesting request. No one ever asked for it… but it was easy to implemented. Please send me an email, so I can send you a version for testing. Best, Marc SAP HANA Competency Center Hi Marc, could you please tell how what is the expected behavior for the field conversion marked with ‘X’ for the field value that has already been provided the user in the system format? e.g. user inputs YYYYMMDD and not DD.MM.YYYY for 0calday Asking this as it currently generates an error without description – “File Upload: Table conversion error.”. If it is how it is supposed to work, would you be so kind and propose some workaround so both formats are supported? “Field conversion: If the “Convert Fields” setting is turned on, field values are interpreted according to user settings (see below).” Would very much appreciate your reply. Thanks in advance & Kind regards, Sebastian Hi Sebastian, the file upload is using the same logic as the standard GUI_UPLOAD to parse a line of the file and separate it into fields. With field conversion turned on, this logic will validate data types, which includes the user’s date format settings. So the user must supply dates in the format they see everywhere else in SAP. If field conversion is off, there’s no such check. Since this is not coding of the file upload, I won’t be able to change or adjust it. The only way around it, is to upload in XML (which is unlikely to be accepted). The remaining alternative is to keep field conversion off meaning all values must be in internal format. However, what you can do is program your own conversion logic in the provided BADI and method transform file from user entries to internal values. You will have to do it for all fields and obviously cover the cases that any user might come up with (or you define). A bit of programming but doable. Best, Marc SAP HANA Competency Center Thank you! Appreciate your prompt reply as UAT’s are live. Have a good weekend! Best, Sebastian Hi Marc, could you clarify the situation about “New Standard File Upload functionality” (I mean planning function type 0RSPL_FILE_UPLOAD_AO): is it possible to upload an .xls (not .csv) file using this standard function? Or the only way to achieve this is to use your Z-development? We are on BWonHANA 7.50 SP12, AO2.7. Thanks, Alex Hello Alex, the standard solution is documented here: It does not support XLS, just CSV format. Best, Marc SAP HANA Competency Center
https://blogs.sap.com/2014/08/13/how-to-load-a-file-into-bw-integrated-planning-version-3/
CC-MAIN-2018-39
refinedweb
5,903
63.49
#include <hallo.h> * Marco d'Itri [Sat, Nov 12 2005, 12:42:07PM]: > Package: sl-modem-source > Version: 2.9.9d-7 > Severity: serious > > See policy 10.6: packages must use MAKEDEV instead of calling mknod. I suggest changing the policy to reflect the reality. Using a wrapper like MAKEDEV to maintain device nodes which use arbitrary choosen major/minor numbers is just not very useful. In this case, there is even more code to fix the major/minor numbers because of a transition, you cannot seriosly tell me to "do that with MAKEDEV". I remember having waited _months_ for addition of DVB devices into makedev. Eduard. PS, commenting the other complaint: > (Please remember that there is no need to check for udev or devfs in the > script, because MAKEDEV does it internally.) Fine. However, I have not seen any d-d-a announcement or private mail describing the change in that behaviour. Further, I expect from you as a carefull maintainer to write at least a simple HOWTO for your fellows about how (exactly) to deal with changes/transition to udev. You simply throw people into cold water. Feel free to tell me that I am wrong, this is just a subjective impression. -- * ij hat gestern seine Segelnummer gesehen: G 386 <ij> 386!! und das mir! *grummel* -- ij - Amiga seit 1989
https://lists.debian.org/debian-devel/2005/11/msg00902.html
CC-MAIN-2017-09
refinedweb
222
65.93
Visualizations in Qubole Notebooks are not limited to the graphing functions available out of the box. In order to use third-party visualizations, we’ll need to go back into the Spark Qubole Notebook and select the relevant libraries we want to import from our packages. In this section, we will: The example libraries used are all contained in this Earthquake Visualization Notebook (MatPlotLib, Plotly, and Folium Maps). In order to use some of these more advanced visualizations, we’ll need to import our Pandas library by converting our Spark DataFrame into a Pandas DataFrame*, which has more features than just Spark alone. First we’ll create and visualize the year_count table. Given we’re using Python here we’ll also need to initialize it with %PySpark %pyspark year_count = eq.groupby(eq.year).count().sort(eq.year) z.show(year_count) Next, we’ll convert the year_count table into a Pandas Dataframe: yc = year_count.toPandas() *Important note: this method collects the data from Spark executors and brings it all to the Spark driver, which can cause the Notebook to not run if using too large of a data set. Now that we have our Pandas DataFrame defined, we can start to use some of our plotting libraries. We’ll start by importing MatPlotLib, followed by Plotly. Using PySpark’s import command we can load in MatPlotLib and use our Pandas DataFrame create our visualization. import matplotlib.pyplot as plt plt.plot(yc['year'], yc['count']) z.showplot(plt) As you can see below, we are now able to get a very clean visual of the number of earthquakes that are happening each year. Similar to the prior example, we are going to use the import command to use Plotly, and we’re also going to import Plotly’s graph_objs function to be able to interact with the visualization. import plotly import plotly.graph_objs as go Once we’ve imported Plotly, we need to define our plot function and create our axis def plot(plot_dic, width="100%", height="100%", **kwargs): kwargs['output_type'] = 'div' plot_str = plotly.offline.plot(plot_dic, **kwargs) print('%%angular ‘ % (height, width, plot_str)) trace = go.Scatter( x = yc[‘year’], y = yc[‘count’] ) data = [trace] plot(data) As you can see in the visualization below, all Plotly graphs in Qubole notebooks offer interactivity for further exploration. We can also click down into specific dates and dig into the data further. The final library we will use in this example is Folium, a Python visualization library, which allows you to create and plot in a variety of Leaflet maps. We’ll start again by importing the library – import folium We also need to define another Pandas DataFrame on our EarthQuake (eq) table; we’ll name this local_eq. We also need to define our Folium map, which we’ll call eq_map. local_eq = eq.toPandas() eq_map = folium.Map() From there we can use PySpark generate the Folium Map with a For loop. for i in range(1000): folium.Circle(location = [local_eq.at[i, 'lat'], local_eq.at[i, 'lon']], radius = local_eq.at[i, 'depth'] ** 2, color='blue', fill_color='blue', fill_opacity=.3, fill=True).add_to(eq_map) In the next paragraph, we’ll run Folium and use the plots that we just generated to populate a map visualization in the Notebook. Now that we’ve concluded the basics of data processing with Spark and using Notebooks, proceed to the next section on the machine learning workflow with Qubole.
https://www.qubole.com/developers/spark-getting-started-guide/advanced-analytics-and-dashboarding/
CC-MAIN-2021-31
refinedweb
570
54.93
thpy - ruby's pry like runtime developer consolethpy - ruby's pry like runtime developer console What's thisWhat's this Debugging tool like Ruby's pry. Insert one themod call break cause interuption of program and setup scala interactive tool (Repl) with bound values. User can show the value, call methods and change internal state of it. How to installHow to install build.sbt libraryDependencies += "com.github.bigwheel" %% "thpy" % "0.1.0" How to useHow to use Write code as sample Main.scala, import com.github.bigwheel.thpy.break import com.github.bigwheel.thpy.Macro.anyToTyped // for human interactive test purpose Main. object Main { def main(args: Array[String]): Unit = { val a = 10 break(a) } } then sbt run and interact with thpy console $ sbt run .... Welcome to Thpy at <empty>.Main.main(/home/kbigwheel/code/thpy-sample/src/main/scala/Main.scala:8) 4: 5: def main(args: Array[String]): Unit = { 6: val a = 10 => 7: break(a) 8: } 9: 10: } bound names: a Type in expressions for evaluation. Or try :help. thpy> a res0: Int = 10 thpy> 1 + 1 res1: Int = 2 thpy> a + 2 res2: Int = 12 thpy> :quit Support scala versionSupport scala version Tested only in 2.12. And this must not work well in other version because repl code has no compatibility. Known problemKnown problem Type of binded value must be visible in public scope. There private class, enclosing class or annonymous class cannot be used in thpy (public class with private constructor is no problem).
https://index.scala-lang.org/bigwheel/thpy/thpy/0.1.0?target=_2.12
CC-MAIN-2022-05
refinedweb
250
59.6
FOR insurers as for Floridians, the recent pounding from four back-to-back hurricanes, costing $20 billion or so, has been highly unusual, as well as unwelcome. Not since Texas in 1886 have so many hurricanes struck one American state in a single season. And Mother Nature seems to be fairly bursting with surprises. In Europe, last summer was the hottest on record and severe windstorms have been on the rise. On a ten-year view, the frequency of weather disasters has tripled since the 1960s and insured losses have risen ten-fold, according to Munich Re, the world's largest reinsurer. Some might ascribe all this to global warming. In fact, this is far from being established—and hurricanes are especially hard to assess. Whatever the cause, the world's insurers are counting the cost of more volatile weather. “Higher variability means more uncertainty means normally a higher price,” says David Bresch of Swiss Re, Munich Re's biggest rival. However, predicting climate change and its effects in the future is harder than simply reacting to volatility today. The possibility that temperatures might rise and, for example, cause more flooding in Europe has not yet led most insurers to increase premiums and deductibles. Since reinsurance contracts are renewed every year, adjustments can be made speedily if necessary. And insurers are constantly updating their disaster models. Swiss Re has already plotted out half a million possible storms in North America and the Caribbean alone. On the upside, climate change could make some areas less stormy. Munich Re has been studying warming patterns for more than 20 years; it expects warmer weather in Europe (more storms and flooding in winters, more heatwaves, severe storms and wildfires in summer) and some of the same in North America. Gerhard Berz, a meteorologist who heads the company's geo-risk research group, says that more study of the oceans' role in climate change would be especially helpful to scientists and insurers. If insurers and reinsurers ever feel confident enough of climate change to act, premiums and deductibles are likely to rise. Reinsurance prices are normally calculated on the basis of losses in the past five to ten years, according to Mr Berz at Munich Re, so if warming trends accelerate, price increases could follow with a slight lag. Reinsurers would also be likely to insist on stricter limits on their coverage. Climate change could also increase demand for catastrophe (“cat”) bonds. These are a form of securitised risk, offered by insurers or reinsurers to limit their exposures. The investor receives a high rate of return (often above 10%) in exchange for the risk of losing his principal if losses from a hurricane, windstorm or terrorist attack exceed a certain level. (So severe must the disaster be that even the recent storms have not been strong enough to trigger payouts, though prices on the bonds have wobbled.) Such instruments are getting more popular, with hedge funds among the most eager investors: cat-bond issues totalled $1.7 billion in 2003, up 42% from 2002, according to Guy Carpenter, a reinsurance brokerage firm. Cat bonds for other big risks such as flooding could be introduced if climate change creates the need, according to Mark Hvidsten of Willis, an insurance broker. Some of the worst risks, though, are likely to be borne by governments. American states such as Florida (for hurricanes) and California (for earthquakes) are already involved in coverage for high-risk areas. Governments are even more instrumental in prevention: insurers rely on them to curb carbon emissions, fund climate-change research and build protections such as flood defences. Perhaps governments and insurers together could gently suggest to citizens that building new houses in flood- or hurricane-watch areas is not always the best idea. This article appeared in the Finance & economics section of the print edition under the headline "Awful weather we're having"
https://www.economist.com/finance-and-economics/2004/09/30/awful-weather-were-having
CC-MAIN-2021-39
refinedweb
649
51.38
In my last article AutoCompleteBox in Silverlight 4.0 , Data Source for Autocomplete Box was an IEnumerable list. In this post we will work with a column value of table from SQL Server as Data Source for Autocomplete Box. Our approach would be 1. Create WCF Service 2. Exposed data 3. Consume in Silverlight 4. Put data exposed by service as data source of Autocomplete Box. I have a table called Course in School database. I am going to bind value of the Title column of Course table as source for AutoCompletebox. Create WCF Service Right click on web project hosting Silverlight application and add a new item by selecting WCF Service application from Web tab. 1. Very first let us create a Service Contract returning titles as list of string. Create Data Access layer Now let us create a Data Access layer using LINQ to SQL class. a. Right click on web application project and add a new item. Select LINQ to SQL class from Data tab. Note: I have written many posts on LINQ. You can refer them for better and detail understanding of LINQ. In this post I am going bit faster J b. Select server explorer option. c. Either create a new data connection or choose [if listed for your Database server] from server explorer. d. Drag and drop Course table on dbml file. Now our Data access layer is in place. Implement Service Service contract and Data access layer is in place. Now let us implement the service. I have done a very simple stuff in above code. I created instance of Data class context and fetching all the titles from courses table. Test WCF Service Right click on .SVC class and select show in browser. Consume WCF Service in Silverlight Right click on Silverlight application and add service reference. Since WCF Service is in same solution with Silverlight click on Discover to select service. Now Service is added. We need to make a client side class to bind as data source of Autocomplete box. Add below class in Silverlight project. Call WCF Service We know we need to make asynchronous call to WCF from Silverlight. In completed event we can bind Autocompletebox as below. In above snippet atcTextBox is name of the Autocompletebox. Drag and Drop AutoCompleteBox on XAML 1. From the tool box select AutoCompleteBox and drop on the XAML page 2. Set the Binding, ValueMemberPath and FilterMode. 3. Set the DataTemplate and bind the TextBlock where user will type the text. For reference, <Grid x: <sdk:AutoCompleteBox x: <sdk:AutoCompleteBox.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding CourseName}" /> </DataTemplate> </sdk:AutoCompleteBox.ItemTemplate> </sdk:AutoCompleteBox> </Grid> using System; using System.Collections.Generic; using System.Windows.Controls; using SilverlightApplication4.ServiceReference1; namespace SilverlightApplication4 { public partial class MainPage : UserControl { List<Courses> lstCourses = null; public MainPage() { InitializeComponent(); Service1Client proxy = new Service1Client(); proxy.GetTitleToBindCompleted += new EventHandler<GetTitleToBindCompletedEventArgs>(proxy_GetTitleToBindCompleted); proxy.GetTitleToBindAsync(); } void proxy_GetTitleToBindCompleted(object sender, GetTitleToBindCompletedEventArgs e) { lstCourses = new List<Courses>(); var res = e.Result; foreach (var r in res) { lstCourses.Add(new Courses { CourseName = r.ToString() }); } atcTextBox.DataContext = lstCourses; } } } 3 thoughts on “AutoCompleteBox in Silverlight 4.0 with DataSource from SQL Server” Dear Dhananjay Can’t i do the same by using domain service class means the whole procedure you explained in the Tutorials on WCF RIA services. Do i have to create a service class along with Domainservice class? i mean i am confused that what is the role of domain data class. can i do the same Autocomplete box binding by using domain service class?
https://debugmode.net/2011/05/08/autocompletebox-in-silverlight-4-0-with-datasource-from-sql-server/
CC-MAIN-2022-05
refinedweb
587
52.66
This is the story of a Java debugging journey that started with a question I couldn’t answer about Java stack traces. As a long time Java programmer, I am approached for help by developers who encounter unusual problems in the language. Diving in and getting acquainted with all the dark corners of Java is something I really enjoy, mainly because I emerge with a better understanding of the language, and also because it equips our team with better tools to solve the everyday problems … as well as the unusual ones. Our trip takes us through a deeper look at Java lambdas and method references, and ends up with a quick foray into the JVM code. We’ll use a couple of debugging tools and techniques to figure it all out, and learn a little about implementation details and diagnostic JVM options. It’s a good example of how, with the source in hand, you can demystify a general phenomenon, such as missing frames. It all started with a NullPointerException… A co-worker approached me with the following stack trace: java.lang.NullPointerException.jira.jql.util.ParentEpicFinalTerminalClauseFactory.getFinalTerminalQuery(ParentEpicFinalTerminalClauseFactory.java:47) This was accompanied by another 364 less-interesting frames. What’s interesting, and what brought him to me, is the code of the first frame. It’s from the JDK (build 1.8.0_172-b11), and it is as follows: ReferencePipeline.java public final Stream map(Function<? super P_OUT, ? extends R> mapper) { Objects.requireNonNull(mapper); return new StatelessOp<P_OUT, R>(this, StreamShape.REFERENCE, StreamOpFlag.NOT_SORTED | StreamOpFlag.NOT_DISTINCT) { @Override Sink opWrapSink(int flags, Sink sink) { return new Sink.ChainedReference<P_OUT, R>(sink) { @Override public void accept(P_OUT u) { downstream.accept(mapper.apply(u)); } }; } }; } The question at hand was, “What could be null on line 193 (highlighted above in red)?” It can’t be mapper because: - It is a local, so it can’t be updated from outside this code - There’s a requireNonNull before the usage, so it started not null, and - It is effectively final and used in an inner class, so it can’t have changed. For full details on effectively final you can read the Java Language Specification, version 8, section 4.12.4, but TL;DR: it could be final and so is treated as final since the inner class reference needs it. It can’t be downstream, because if you hunt down its declaration you find: Sink.java static abstract class ChainedReference<T, E_OUT> implements Sink { protected final Sink<? super E_OUT> downstream; public ChainedReference(Sink<? super E_OUT> downstream) { this.downstream = Objects.requireNonNull(downstream); } As you can see: - The field downstream is final, - There’s a requireNonNull when it is initialized, and - There’s no usage of this or super in the constructor, so we can’t be in code executing on a partially constructed object. My next thought was that maybe P_OUT was a boxed type, like Integer or Long, and that to invoke mapper.apply, the compiler had inserted an unboxing of u.Unboxing throws a NullPointerException if the reference is null (detailed in JLS 8, 5.1.8). For example, code like int value = getValue(); can throw an exception, if getValue() is declared to return Integer and actually returns null because the compiler inserts a call to Integer.intValue to convert the returned Integer to an int. However, looking at the code for getFinalTerminalQuery in this case revealed that P_OUT was com.atlassian.jira.issue.Issue, which isn’t boxed – it’s the main issue interface in Jira. Where’s the null? Although these questions didn’t find the source of the NullPointerException, they did reveal that mapper was Issue::getId – that is – a method reference (as per JLS 8, 15.13), and this seemed unusual enough to be interesting and worth looking at further. At this point I was somewhat bemused, but also quite excited, because I couldn’t explain what I was seeing … which meant that I was about to learn something new! So I broke out a bit of test code by boiling down the case above to the following code: Main.java public class Main { static class Thing { Long getId() { return 163L; } } public static void main(String[] args) { Thing thing = null; System.out.println( Collections.singleton(thing).stream().map(Thing::getId).collect(toList()) ); } } Sure enough, running this code got me a NullPointerException with a stack similar enough to the above, at least at the pointy end. By the way, you can clone that repository using git clone [email protected]:atlassian/missing-frame.git if you want to see it for yourself. There are some handy canned command lines in the README. In other words, you’ll get a NullPointerException if you apply a valid method reference (like Thing::getId in the example) to a null reference. This is not surprising. What is surprising, until you know this, is that you won’t see a frame for the invocation of that function. You can’t see a frame for getId of course, because that would mean you’d be in a member function where this was null, which won’t happen while you’re looking. This has nothing to do with the use of the Stream by the way, you can get the same effect from Thing thing = null; Function<Thing, Long> function = Thing::getId; function.apply(thing); But I didn’t know that at the time I was writing the test code. This is one of the nice things about writing test code, you can freely edit it and play around with what exactly is causing the effect you are investigating. But where is the exception thrown? In IntelliJ, you can use View/Show Bytecode on these examples to see the JVM bytecode. Other IDEs have similar functionality. You could also use javap from the command line. In any case, you will see something like: INVOKEDYNAMIC apply()Ljava/util/function/Function; [ // handle kind 0x6 :; // arguments: (Ljava/lang/Object;)Ljava/lang/Object;, // handle kind 0x5 : INVOKEVIRTUAL com/atlassian/rgates/missingframe/Main$Thing.getId()Ljava/lang/Long;, (Lcom/atlassian/rgates/missingframe/Main$Thing;)Ljava/lang/Long; ] What is invokedynamic, you say? In this case, we need to break out the Java Virtual Machine Specification, version 8, section 6.5, which tells us that some information is looked up and then resolved (as per JVMS 8, section 5.4.3.6) to produce a MethodHandle object. This is then invoke()d to get a CallSite which is bound to that instruction, effectively caching the costly lookup. Then, we invokeExact the CallSite.target to get a Function object. Whoah. It’s worth noting this is not what is throwing – it hasn’t even looked at the null reference yet, and the spec is quite clear that an “invokedynamic instruction which is bound to a call site object never throws a NullPointerException or …”. So what is the Function we get? If you look at it in IntelliJ, for example by breakpointing the constructor of NullPointerException and inspecting the mapper, you see an object which claims to be a com.atlassian.rgates.missingframe.Main$$Lambda$1/6738746, but you won’t find that in your source. These classes are generated on the fly by the LambdaMetafactory referenced in the INVOKEDYNAMIC opcode. The code that writes them is in the delightfully named java.lang.invoke.InnerClassLambdaMetafactory#spinInnerClass. It’s worth noting they don’t go through the normal ClassLoader machinery, but use the wonderful sun.misc.Unsafe#defineAnonymousClass to get themselves loaded. Anyway, now that I was this far down, I had to keep digging. So I dumped the class spun up by spinInnerClass to a file by evaluating new FileOutputStream(“lambdaClass.class”).write(classBytes) in the debugger. This is a common trick I use when debugging – evaluating an expression which intentionally has side effects. There is some data we want to inspect – classBytes in this example. To inspect it, we want to pass it off to another tool, and so we need to get it in a file. The expression has the side effect of writing classBytes to the file lambdaClass.class, and I can use this for further processing. Once I had the bytes from the built-on-the-fly class in a file, I could use the javap utility to dump it. 0: aload_1 1: checkcast #15 // class com/atlassian/rgates/missingframe/Main$Thing 4: invokevirtual #19 // Method com/atlassian/rgates/missingframe/Main$Thing.getId:()Ljava/lang/Long; 7: areturn Voila! It’s the invokevirtual that is throwing NullPointerException, as per JVMS 8, section 6.5: if objectref is null, the invokevirtual instruction throws a NullPointerException. So that’s kind of what one might expect – a runtime generated wrapper which calls the function named by the method reference. But the frame is still missing! There totally should be a frame for apply. In fact, if you’re playing at home and have breakpointed the NullPointerException constructor like me, you can see the apply frame on the stack. However, if you step through the code until after the call to java.lang.Throwable#fillInStackTrace(int), you won’t find the frame in the output of Throwable.getStackTrace. At this point, the only place left to inspect was fillInStackTrace, and of course that is native. So you need to hop over to the OpenJdk source, and look at void java_lang_Throwable::fill_in_stack_trace(Handle throwable, methodHandle method, TRAPS) in hotspot/src/share/vm/classfile/javaClasses.cpp. I didn’t grok the whole method, but I did skip through it enough to find: javaClasses.cpp if (method->is_hidden()) { if (skip_hidden) continue; } Methods can be hidden? TIL. I wonder who calls set_hidden? As it turns out, void ClassFileParser::MethodAnnotationCollector::apply_to(methodHandle m) says: classFileParser.cpp if (has_annotation(_method_LambdaForm_Hidden)) m->set_hidden(true); A bit more digging reveals that the value of _method_LambdaForm_Hidden is the string Ljava/lang/invoke/LambdaForm$Hidden;. To tie this back to the Java code, I again used javap to decompile the class file we dumped above. This time, however, I added an extra flag to javap – the -v flag, which makes javap verbose. In particular, this makes it dump the strings in the class constant pool – our string is at index #13, and we can see the runtime annotation #13 is present on the apply method: #13 = Utf8 Ljava/lang/invoke/LambdaForm$Hidden; ... public java.lang.Object apply(java.lang.Object); ... RuntimeVisibleAnnotations: 0: #13() So, you can’t see the frame because it’s hidden. In fact, a bit further up in spinInnerClass, it calls: mv.visitAnnotation("Ljava/lang/invoke/LambdaForm$Hidden;", true); to add the annotation, but I missed this on the first read through. Reading this backwards to understand the flow: When generating the runtime wrappers used to invoke the method references, the generation code annotates the apply method with a java.lang.invoke.LambdaForm$Hidden annotation, and the JVM code (which fills in stack traces to implement the Java level fillInStackTrace function) checks for this annotation, and skips over frames for methods with this annotation. One more thing … The sharp eyed will have noticed that the JVM code in javaClasses.cpp above has a skip_hidden check also, which turns out to be set from ShowHiddenFrames, which is mentioned in: diagnostic(bool, ShowHiddenFrames, false, \ "show method handle implementation frames (usually hidden)") \ Reading some documentation in this file led me to java -XX:+UnlockDiagnosticVMOptions -XX:+PrintFlagsFinal which shows a wealth of JVM diagnostic and internal stuff, including: bool ShowHiddenFrames = false {diagnostic} and thus, finally to: :; java -XX:+UnlockDiagnosticVMOptions -XX:+ShowHiddenFrames -cp build/libs/missing-frame.jar com.atlassian.rgates.missingframe.Main Exception in thread "main" java.lang.NullPointerException at com.atlassian.rgates.missingframe.Main$Lambda$1/295530567.apply(:1000004).rgates.missingframe.Main.main(Main.java:17) There’s the frame! The error is at <Unknown>:1000004 – the JVM doesn’t know the source name (hence <Unknown>), and uses 1000000 + bci for line numbers if it has no line numbers, where bci is the byte code index of the invokevirtual we identified as the cause above. And my co-worker? I got back to my co-worker with my findings, and they responded Curiosity: Satisfied. It’s nice to go beyond just “use this workaround” and instead gain a full understanding of exactly why the code is behaving in the way we observe. It’s especially nice when I get to learn a few things about Java implementation details and JVM internals. P.S. Our engineering teams are hiring. A lot. Just sayin’. Get stories like this delivered to your inbox
https://www.atlassian.com/blog/software-teams/java-debugging-example
CC-MAIN-2019-22
refinedweb
2,086
55.44
Removing a string in Python without removing repeating characters python remove duplicate words from string remove duplicate characters in a string java remove duplicate characters in a string c python remove repeated characters from string write a program in python to remove duplicate characters from a string remove duplicate characters in a string c# how to remove duplicate strings in python I am printing a folder name to a text file containing data, and want to remove the external folders from the string. For example, it is printing C:\A3200\201808101040, but I want to print 201808101040. When I use str(os.getcwd().strip('C:\\A3200\\')) to remove the external folders from being printed, the program returns 180810104, which is weird because some of the zeros are removed but some aren't, etc. (it removed the beginning 20 and the ending 0.) I know that this could be done by getting the folder name a different way than os.getcwd(), but I am interested in this method of string manipulation for the future. How do I remove a certain string of characters within a full string without affecting the characters that are repeated later in the full string? That may work, but I would like to know for future reference how to just do it string-wise, in case I need to remove something else like "pear" from "pear tree", etc. where the "e" is in both words You could do 'pear tree'.replace('pear', '', 1).strip() Remove all duplicates from a given string in Python, We are given a string and we need to remove all duplicates from it ? What will be the output if order of character matters ? and has no duplicate elements. Python string translate() function replace each character in a string using the given translation table. We have to specify a Unicode code point for a character and ‘None’ as the replacement to remove it froma result string. We can use the ord() function to get the Unicode code point of the character. Strip takes a set of characters and removes from both sides until it encounters a character not in the set. This is why it eats your 2 and 0 but not the 1. You will probably have better luck with os.getcwd().split(os.sep)[-1] How to remove duplicate characters from String in Java? [Solved , How do I remove spaces from a string in Python? How can I remove duplicate characters from a string using Python? For example, let's say I have a string: How can I make the string: I'm new to python and What I have tired and it's working. I knew there is smart and best way to do this.. and only experience can show this.. NOTE: Order is important and this question is not similar to this one. The answer to this specific question is employing os.path.basename(). In regards to your more broad question: "How do I remove a certain string of characters within a full string without affecting the characters that are repeated later in the full string?" I would consider using a regular expression (regex). This allows you to specify positive and negative look-aheads / look-behinds, and many other useful tricks. In your case here, I would consider searching the string instead of actually replacing any characters in the string. Here is a regex example for your question: import re s = r'C:\A3200\201808101040' matches = re.findall(r'[0-9]+', s) print(matches) Yields: ['3200', '201808101040'] Obviously, in this case, you are interested in the final match returned in matches, therefore you can access this via matches[-1], which gives 201808101040. Python Remove Spaces from String, Python Exercises, Practice and Solution: Write a Python program to remove duplicate characters of a given string.. a=r"C:\A3200\201808101040" # make sure you read it raw a[a.rindex("\\")+1:] #'201808101040' OR In case you just need 'C:\A3200' and '201808101040' seperated a=r"C:\A3200\201808101040" a.rsplit("\\",1)[1] #'201808101040' a.rsplit("\\",1)[0] #'C:\A3200' Python: Remove duplicate characters of a given string, A quick one in Python - [code] def removeDuplicates(string): uniqs = '' for x in string: if Originally Answered: How do I remove duplicate characters in a string? I am trying to find the position of a substring within a string without using any The string class has a method replace that can be used to replace substrings in a string. We can use this method to replace characters we want to remove with an empty string. For example: >>> "Hello people".replace("e", "") "Hllo popl" If you want to remove multiple characters from a string in a single line, it's better to use regular expressions. How to remove duplicates from a string (in-place), To remove all duplicates from a string in python, we need to first split the string by spaces so that we have each word in an array. Remove duplicates from a given string; Remove three consecutive duplicates from string; Remove all consecutive duplicates from the string; Remove duplicates from a string in O(1) extra space; Remove duplicates from string keeping the order according to last occurrences; Python Remove Duplicates from a List; Python | Remove duplicates in Matrix Remove all duplicates from a given string in Python, java python python python by durgasoft python videos by durgasoft Program to remove Duration: 12:20 Posted: Apr 12, 2019. Python || Q14. Program to remove duplicate characters from the , Remove duplicate characters in a given string keeping only the first can even do without O(N) additional storage as is done here in Python. But if you want to remove the specific character from the starting and end of the string then you have to set the character as argument value for the method. It returns the main string value after removing the particular characters from the string. Different uses of the strip method in python are shown in the following example. os.path.basename ("C:\A3200\201808101040")? - That may work, but I would like to know for future reference how to just do it string-wise, in case I need to remove something else like "pear" from "pear tree", etc. where the "e" is in both words - my-string.replace('pear', '') will take "pear" out of pear tree - Can this question please be clarified? I feel like multiple questions are being posed in the comments of various answers. What are you trying to acheive? You can cross the 'pear tree' bridge when you need to remove the word 'pear' from the phrase 'pear tree' - I think there is one question being asked: How do I remove a certain string of characters within a full string without affecting the characters that are repeated later in the full string? There may be different ways of doing it for the file directory example, I want to do it in the exact same method as I would for the pear tree example - Is OP concerned about 3200? If so shouldn't it be A3200 after all it is a part of the path
https://thetopsites.net/article/51789115.shtml
CC-MAIN-2021-25
refinedweb
1,185
68.2
Running Linux GPU Applications on Windows# Introduction top# Some of the GPU-requiring tools commonly used with SVL Simulator, like Apollo or Autoware.Auto, might require the Linux operating system to run. If you don't have Linux available or prefer to use Windows, it's now possible to run Linux programs using Windows Subsystem for Linux (WSL). Features available in WSL are dependent on the version of Windows 10. Windows build 21362 contains the version of WSL required to run Linux GPU applications on Windows. As an alternative, you can set up a desktop environment). Installation top# NOTE: At the time of writing, most of the drivers and software required to use GPU-PV in WSL are still in preview. This might require using their pre-release versions. Details can be found in the specific sub-sections. Verify Windows 10 version top# In the Windows command line, enter: winver The reported OS Build should be 21362 or higher. If the build number is lower, and your system is up to date, the required Windows version is not yet a part of public release. To use it, you will have to join Windows Insider Program on the Dev Channel. Doing so will let you use preview builds of Windows 10. If you're interested, please follow official instructions and update your Windows version. Install WSL 2 top# To install WSL 2, please refer to the official documentation. We recommend using Ubuntu 18.04 or Ubuntu 20.04 Linux distribution. This tutorial will assume Ubuntu 20.04 is installed. If you have installed WSL previously, make sure you're using WSL version 2. To check for version, enter: wsl -l -v The distribution you're planning to use should report 2 under VERSION. If you're using WSL 1, update it to WSL 2. Make sure your Linux kernel version is up to date. You can update it through an elevated Windows command line by entering: wsl --update Install NVIDIA drivers top# As of the end of September 2021, the NVIDIA drivers for CUDA on WSL are still in public preview. Version 470.14 (with CUDA 11.3) or higher is required. You can check your current driver version through Windows command line, by entering: nvidia-smi If your driver version is older, you can download required version from official NVIDIA web page. To verify that your GPU is working inside WSL, you can build and run one of default CUDA examples using WSL terminal: cd /usr/local/cuda/samples/4_Finance/BlackScholes make ./BlackScholes Your console should provide you with output similar to the one shown below. This means that your GPU is accessible inside of WSL and can be used to run CUDA programs. [./BlackScholes] - Starting... GPU Device 0: "Pascal" with compute capability 6.1.363584 msec Effective memory bandwidth: 220.031695 GB/s Gigaoptions per second : 22.003170 BlackScholes, Throughput = 22.0032 GOptions/s, Time = 0.00036 Install Docker Desktop for Windows top# If you plan to use Docker inside your WSL 2 distro, we suggest to install Docker Desktop by following official documentation. Alternatively, you can decide to skip Docker Desktop and use Docker and NVIDIA Container Toolkit installed directly from your WSL 2. We recommend the first option - this tutorial will assume Docker Desktop for Windows is used. If you insist on using the second option, you can find instructions in official NVIDIA documentation. If you're already using Docker Desktop for Windows, make sure version 3.1 or higher is installed. After installing and launching Docker Desktop, navigate to Settings -> General and make sure that the option Use the WSL 2 based engine is enabled. After that, navigate to Settings -> Resources -> WSL integration and make sure that integration with your WSL 2 distro is enabled. Whenever you're using Docker from WSL, the Docker Desktop application must be running on your Windows machine - otherwise WSL won't be able to recognize service docker. To verify that Docker environment is running properly, enter your WSL terminal and launch a sample CUDA docker image from NVIDIA: docker run --rm -it --gpus=all --env NVIDIA_DISABLE_REQUIRE=1 nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark NOTE: Parameter --env NVIDIA_DISABLE_REQUIRE=1disables the CUDA version check. This is required as of the end of September 2021 due to bug in NVIDIA drivers (CUDA version reported in WSL is lower than installed). Your console should provide you with output similar to the one shown below. This means that your GPU is accessible inside of Docker containers and can be used to run CUDA programs. GPU Device 0: "Pascal" with compute capability 6.1 > Compute 6.1 CUDA device: [NVIDIA GeForce GTX 1080] 20480 bodies, total time for 10 iterations: 16.788 ms = 249.832 billion interactions per second = 4996.645 single-precision GFLOP/s at 20 flops per interaction Set up desktop environment (optional) top# If you want to run any GUI-based applications inside your WSL or Docker environment, you have configure the X Window System. By default, display options inside of WSL are not configured and no valid output device is registered. This means not only that GUI will not be displayed, but also that any program attempting to output something to screen might not work correctly or fail to launch. If you have Windows 10 build 21362 or higher, you don't have to do anything - along your WSL instance, WSLg should have been automatically installed. WSLg pipes X11 and Wayland (used to run Linux GUI applications) directly into Windows graphical user interface. This means that any GUI application launched inside WSL will simply open on your Windows desktop. To verify that's the case, run any GUI-based application from your WSL instance. As an example, you can use one of OpenGL test applications. In your WSL terminal, enter: glxgears This should open new window with three spinning gears. If your Linux distro does not have this installed by default, you can get glxgears application from mesa-utils package: sudo apt install mesa-utils Networking considerations top# Compared to using Docker on Linux, using it on Windows through Docker Desktop with the WSL 2 backend has some significant differences in networking. Whenever you want to use any kind of networking functionality in a container running on Docker Desktop, make sure to take the points below into account: - The --net=hostoption for docker runwill not behave as expected. The Docker daemon runs inside an isolated network namespace, which, from the perspective of the Docker container, is a host network. You won't be able to access containers started with this option through the usual means. - Docker Desktop provides special host name ( host.docker.internal) that resolves to your host machine. It always resolves to an IP reachable from the container, and resolves to 127.0.0.1on the host. If you can't connect to your container through localhost, try using host.docker.internalinstead. - Since you can't use the --net=hostoption, all of the ports that will be used to communicate with the container have to be explicitly exposed using -pflags (see official documentation for details). They will be tunneled both to the WSL 2 network namespace and the Windows host.
https://www.svlsimulator.com/docs/archive/2021.3.1/installation-guide/running-linux-gpu-applications-on-windows/
CC-MAIN-2022-21
refinedweb
1,204
56.15
Difference between revisions of "Session:The Kernel Report ELC 2012" Revision as of 14:34, 5 March 2013 Session Details - Event - ELC 2012 - Date - February 15, 2012 - Presenter - Jonathan Corbet - Organization - LWN.net - Slides - The Kernel Report ELC 2012 - Video - here (linux foundation) and here (free-electrons) - Duration - 58 minutes Abstract. Biography. Notes Transcript - Transcribed by - Chris Dudding 0:00 - 1:00: [ELC Slide - Thank you to our sponsors] >> INTRODUCER: So, I'd like to welcome you all out to this year's Embedded Linux conference. Very happy to see you all here and I hope you're as anxious as I am. We're.. well erm. anxious is not the right word but excited as I am about the great sessions we've got planned for this week. Um.. This is always erm.. the best part of getting ready for a conference is when you actually get the thing under way and there's been a lot of prep work behind the scenes and we're very excited to have you here and excited about the programme we've got. [ELC Slide - Mobile App] I've got just a couple of quick announcements I'd like to make: First, about the mobile application so in the guide it talks about a mobile app um.. the actual name you look for in the marketplace is wrong in the guide 1:00 - 2:00: You need to look under Linux Foundation conference. And that's available on the Android Market. There's actually a kind of a funny story why its not available for iPhone and that was because when it was first submitted to the iPhone erm. I don't know what they call their.. >> AUDIENCE: App Store [Laughter] Well, so. Yes, thank you. It actually it was for.. It had Android listed in the description because it covers both conferences: Android Developers.. [rephrases] Android Builders Submit and ELC and they rejected it because of the word Android. So, go figure! So, not for lack of trying. I'm sorry we don't have an iPhone app for you. Get yourself a more open phone! [Laughter] [ELC Slide - Intel Atom Processor Giveaway] Anyway, also I'd like to talk about this. So this um.. [holds Intel Atom development board] Ignore the antennas. Inside is a little development board 2:00 - 3:00: that is being manufactured by Intel and it contains an Atom processor. Its actually called the.. I love Intel code names.. the board is called a Fish River Island 2 and its erm. E6XX Tunnel Creek Processor and they will be manufacturing these.. these are not available yet. They will be manufacturing 'em and sending them out to attendees. If you are interested in getting one.. we have.. unfortunately we don't have enough for everyone but what they are doing is they are taking proposals you can register at the Intel desk just outside in the lobby. And kind of just give an idea what you would plan to do with the board and the first 120 good ideas will get one shipped to you sometime in April or May. So, that's pretty nice. Let's thank Intel for that. [Claps] So that's pretty nice. Um.. So that's pretty cool 3:00 - 4:00: Nice little development board for you [inaudible comment from audience] [laughs] No, too low. and let's see I want to mention the YOCTO reception, hosted by YOCTO and Intel is reception tonight. Should be a lot of fun. Its over at the hiller aviation museum and there's a little.. cute little boarding pass thing you got talking about that also we've got little stickers done by jambe that you can get if you want to proudly proclaim that you came to ELC and the last thing to do is to introduce our keynote speaker for this morning. So, I've known Jon Corbet for a number of years. He's kind of.. I don't know if this is the right term because he has no beard.. he's one of the grey beards in the Linux industry and an incredible incredible asset 4:00 - 5:00: At these events it is very customary for us to put in a plug for LWN.net and this event is no exception. In my opinion one of the premier sources for information about Linux, the Linux kernel and the industry and open source. If you are not a subscribing member of LWN.net, you should be one.. shame on you.. because this is an asset. Really a community asset that we should support and Jon very graciously accepted to give our kernel report and he'll tell us all about what's going on with the kernel in the last little bit and maybe a little bit about what's coming up in the future So without further ado, let me introduce Jon Corbet. 5:00 - 6:00: [Slide 1 - The kernel report] >> JON CORBET: Hi. Thanks a lot. Good morning everybody. How many of you have seen me give one of these talks before? [Laughter] A fair number. [Slide 2 - The plan] Well you'll be glad to hear I've reorganised it. The plan remains the same, which is to look back over a years worth of kernel developments with an eye towards what's going on in the future. I've changed the way things are done. Hopefully it will work out well. [Slide 3 - Starting off 2011] We're actually going to start just over a year ago with.. at the beginning of 2011 we saw the release of the 2.6.37 kernel. What better to start a new year than with a new kernel? This was a fairly big release with well over 11,000 changesets in it. This kernel brought in the first of a set of scalability patches for the virtual filesystem layer which added a fair amount to the complexity of that layer but also if you had the right kind of workload brought about something of the order of 30% performance improvement if you were doing lots of opens and closes that sort of thing. So that was good to have. Block I/O bandwidth controller.. its actually a second bandwidth controller working at a higher level in the I/O scheduler stack allowing 6:00 - 7:00: the placement of absolute limits on block I/O bandwidth Finally got some support for the point to point tunnelling protocol in the mainline kernel. Basic support for the parallel NFS protocol and Wakeup sources which is an interesting.. on its own wakeup sources is just an accounting mechanism for tracking devices in the system that can wake the system from a sleep state but its part of a bigger effort to replicate the android opportunistic suspend mechanism and provide an implementation of that mechanism that works well within the mainline kernel. So we are still seeing pieces of that going in and pieces of it under discussion but at some point we may have a solution for that So that was 2.6.37 - a lot went in there. [Slide 4 - What have we done since then?] And a lot has happened since then. So what has gone on since 2.6.37? We've made 5 more kernel releases. I can call them out as i come to them going on the year. We've merged almost 60,000 change sets it was over the course of the last year. These have come from over 3000 developers and at this point we have over 400 companies that we can identify that have contributed to the kernel 7:00 - 8:00: So, I've put up numbers like this before. We've seen them before. We know at this point that the kernel is a very active, very fast moving project. Perhaps the biggest on the planet, hard to say and it continues to move on and it shows no real signs of slowing down [Slide 5- February] So February of 2011 [Slide 6 - Greg K-H quote] One of the things that happened early on in february was a little note from Greg Kroah-Hartman congratulating ralink saying as you can see ralink has stopped dumping drivers on us and instead is now working on patching the driver that we already have in the upstream kernel and trying to make that driver support their new hardware this he said shows a huge willingness to learn how to work with the kernel community and they need to be praised for this change in attitude This is..i mean its a nice note but there is nothing all that special in a way its something that we go through with a lot of companies they take a little while to figure out how to work with us and then they do 8:00 - 9:00: and so we see a lot of progress as companies figure out how does mainline kernel process work, how can we get our code in there, why is it in our interest to do so and then they figure how to do it and they become part of the machine. And we see this happening over and over again [Slide 7 - Employer Contributions] So this seems a good as time as any to put up this slide. This is a variant of a slide that i've been putting up for a while showing the top contributors to the kernel over the course of the last year from the beginning of the 2.6.38 development cycle through the 3.2 release So, we see as always, volunteers top of the list at just under 14%. The percentage of changes coming in from people working on their own time has actually slowly fallen over the years and why that is is hard to say. One could take a pessimistic view and say that the kernel is getting too big and too complex, the easy projects are done and so we are putting off our volunteers that way. On the other hand, one could look at this and one could say well anybody who has shown any ability to actually get code into the kernel in any kind of reliable way 9:00 - 10:00: tends not to stay a volunteer for very long unless they really really want to because they tend to get buried in job offers. so that of course is not going to be a bad thing. other than that we see a lot of the same companies that we've been seeing for quite a long time we can see companies that are not only competing fiercely in other areas of the market but in fact at this point they all seem to be suing each other elsewhere [Laughter] but they are still working together quite well at this level and um.. the situation hasn't changed a whole lot but here's one change I want to call out this is.. was.. alright.. we'll do it the old fashioned way. no we won't.. [Slide 9 - Kernel changeset contributions by employer] speak to me.. come on.. um.. alright. well. this is the one i was going to get to eventually. um.. what is going on here.. alright.. we'll get there 10:00 - 11:00:. just in time built into the kernel for the Berkeley packet filter subsystem for filtering packets that you are trying to inspect coming off the net sort of thing. send multiple message system call. scalability for applications that are sending lots of little messages. ICMP sockets so you can write an unprivledged ping client at last. namespace file descriptors another piece of the containers problem. cleancache.. which is some of that transcedent memory stuff i was talking about before. a different way of caching pages.]:
http://elinux.org/index.php?title=Session:The_Kernel_Report_ELC_2012&diff=226898&oldid=226754
CC-MAIN-2015-11
refinedweb
1,990
78.99
This is a simple guide on how to best build your documentation in Julia v1.0 if you are using Documenter. The old way If you are using Documenter you probably have something like the following in your .travis.yml file for building your documentation: after_success: - julia -e 'Pkg.add("Documenter")' - julia -e 'cd(Pkg.dir("PACKAGE_NAME")); include(joinpath("docs", "make.jl"))' Why is this bad? Here are some reasons: - There is no good way to add doc-build dependencies, you have to add them manually (like we Pkg.add("Documenter")above). - Code in the after_success:part of the build does not fail the build. This makes it difficult to verify that (i) the doc build works and (ii) that doctest etc. still passes. For this reason some package have chosen to build the docs as part of the test suite instead. - Doc building runs in the global environment, and is thus affected by the surroundings. So, maybe we can use this new Pkg thing, that apparently should solve all our problems. Absolutely! The new way In Julia v1.0 we can instead use a designated doc-build project with a Project.toml in the docs/ directory. The project includes Documenter and all your other doc-build dependencies. If Documenter is the only dependency, then the Project.toml file should include the following: [deps] Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4" [compat] Documenter = "~0.19" Here we also added a compatibility constraint for Documenter, since I happen to know that Documenter 0.20 will break everything. Instead of creating the Project.toml file manually we could have started Julia with julia --project=docs/ (alternatively just start Julia and then using Pkg; Pkg.activate("docs")) and then used the Pkg REPL to add our doc-build dependencies. Next we need to configure the .travis.yml to use this project. For this we will utilize Travis Build Stages that have just been released (see). Add the following to your .travis.yml file: jobs: include: - stage: "Documentation" julia: 1.0 os: linux script: - julia --project=docs/ -e 'using Pkg; Pkg.instantiate(); Pkg.add(PackageSpec(path=pwd()))' - julia --project=docs/ docs/make.jl after_success: skip This adds a new build stage, called Documentation, that will run after the default test stage. It will look something like this. The julia: and os: entries control from which worker we build the docs, for example Julia 1.0 and linux. What happens on the three lines in the script: part? - The first line Pkg.instantiate()s the project, meaning that Documenterand the other dependencies will be installed. - The second line adds our package to the doc-build environment. - The third line builds the docs. Lastly, commit the files (the Manifest.toml can be .gitignored) and push! Why is this better? - Using a custom docs projects gives full control over the environment where the docs build. - We can have documentation dependencies. - A failed doc build now fails CI. In my opinion this is a pretty nice example of how to utilize the new Pkg environments. If you need some inspiration, have a look at the following packages that already uses projects and build stages for doc-building: Some gotchas Make sure that the juliaand osarguments to Documenter.deploydocsmatch the configuration you define for the build stage in .travis.yml. Note that this is something that will change in the next Documenter release, where the juliaand osarguments are removed and the deployment is essentially governed by the .travis.ymlconfiguration. So the annoyance of keeping those in sync is temporary. You might run into which is a bug in Julia v0.7, v1.0(but fixed on nightlyand the upcoming Julia v1.0.1). You can add the following lines at the top of docs/make.jlto workaround it: # Workaround for JuliaLang/julia/pull/28625 if Base.HOME_PROJECT[] !== nothing Base.HOME_PROJECT[] = abspath(Base.HOME_PROJECT[]) end Feel free to ping me ( @fredrikekre) for any questions from your PRs. Happy upgrading!
https://discourse.julialang.org/t/psa-use-a-project-for-building-your-docs/14974
CC-MAIN-2018-39
refinedweb
658
60.92
Am 13.02.2010 10:50, schrieb Florian Ludwig: > Hi, > >. If you are talking about code sharing you can move the common code out of your applications in a seperate namespace. If you follow the model trac is using you would install a module/package/egg with the basic functionality of the pluginsystem (i.e. what's in core.py and env.py + logging and whatever you think is necessary). All shared code like your auth-plugins would go in a common plugin directory to which you can refer via PYTHONPATH. Another common technique is chaining of WSGI middleware..., check out pythonpaste.org. Then there is SOA where functionality is shared via RPC/webservices and wsdl/uddi. But my feeling is this is mostly used in "Enterprise" applications and is best used in Java/.NET where you already have libraries doing all the XML stuff. hth Paul
https://mail.python.org/pipermail/python-list/2010-February/568068.html
CC-MAIN-2016-50
refinedweb
147
60.21
You can cast an object to another class type, but only if the current object type and the new class type are in the same hierarchy of derived classes, and one is a superclass of the other. For example, earlier in this chapter we defined the classes Animal, Dog, Spaniel, Cat and Duck, and these classes are related in the hierarchy shown below: You can cast an object of a class upwards through its direct and indirect superclasses. For example, you could cast an object of type Spaniel directly to type Dog, type Animal or type Object. You could write: Spaniel aPet = new Spaniel("Fang"); Animal theAnimal = (Animal)aPet; // Cast the Spaniel to Animal When you are assigning an object to a variable of a superclass type, you do not have to include the cast. You could write the assignment as: Animal theAnimal = aPet; // Cast the Spaniel to Animal and it would work just as well. The compiler is always prepared to insert a cast to a superclass type when necessary. When you cast an object to a superclass type, Java retains full knowledge of the actual class to which the object belongs. If this were not the case, polymorphism would not be possible. Since information about the original type of an object is retained, you can cast down a hierarchy as well. However, you must always write the cast explicitly since the compiler is not prepared to insert it, and the object must be a legitimate instance of the class you are casting to – that is, the class you are casting to must be the original class of the object, or must be a superclass of the object. For example, you could cast a reference stored in the variable theAnimal above to type Dog or type Spaniel, since the object was originally a Spaniel, but you could not cast it to Cat or Duck, since an object of type Spaniel does not have Cat or Duck as a superclass. To cast theAnimal to type Dog, you would write: Dog aDog = (Dog)theAnimal; // Cast from Animal to Dog Now the variable aDog refers to an object of type Spaniel that also happens to be a Dog. Remember, you can only use the variable aDog to call the polymorphic methods from the class Spaniel that override methods that exist in Dog. You can't call methods that are not defined in the Dog class. If you want to call a method that is in the class Spaniel and not in the class Dog, you must first cast aDog to type Spaniel. Although you cannot cast between unrelated objects, from Spaniel to Duck for instance, you can achieve a conversion by writing a suitable constructor but obviously, only where it makes sense to do so. You just write a constructor in the class to which you want to convert, and make it accept an object of the class you are converting from as an argument. If you really thought Spaniel to Duck was a reasonable conversion, you could add the constructor to the Duck class: public Duck(Spaniel aSpaniel) { // Back legs off, and staple on a beak of your choice... super("Duck"); // Call the base constructor name = aSpaniel.getName(); breed = "Barking Coot"; // Set the duck breed for a converted Spaniel } This assumes you have added a method, getName(), in the class Dog which will be inherited in the class Spaniel, and which returns the value of name for an object. This constructor accepts a Spaniel and turns out a Duck. This is quite different from a cast though. This creates a completely new object that is separate from the original, whereas a cast presents the same object as a different type. You will have cause to cast objects in both directions through a class hierarchy. For example, whenever you execute methods polymorphically, you will be storing objects in a variable of a base class type, and calling methods in a derived class. This will generally involve casting the derived class objects to the base class. Another reason you might want to cast up through a hierarchy is to pass an object of several possible subclasses to a method. By specifying a parameter as base class type, you have the flexibility to pass an object of any derived class to it. You could pass a Dog, Duck or Cat to a method as an argument for a parameter of type Animal, for instance. The reason you might want to cast down through a class hierarchy is to execute a method unique to a particular class. If the Duck class has a method layEgg(), for example, you can't call this using a variable of type Animal, even though it references a Duck object. As we have already said, casting downwards through a class hierarchy always requires an explicit cast. We'll amend the Duck class and use it along with the Animal class in an example. Add layEgg() to the Duck class as: public class Duck extends Animal { public void layEgg() { System.out.println("Egg laid"); } // Rest of the class as before... } If you now try to use this with the code: public class LayEggs { public static void main(String[] args) { Duck aDuck = new Duck("Donald", "Eider"); Animal aPet = aDuck; // Cast the Duck to Animal aPet.layEgg(); // This won't compile! } } you will get a compiler message to the effect that layEgg() is not found in the class Animal. Since you know this object is really a Duck, you can make it work by writing the call to layEgg() in the code above as: ((Duck)aPet).layEgg(); // This works fine The object pointed to by aPet is first cast to type Duck. The result of the cast is then used to call the method layEgg(). If the object were not of type Duck, the cast would cause an exception to be thrown. There are circumstances when you may not know exactly what sort of object you are dealing with. This can arise if a derived class object is passed to a method as an argument for a parameter of a base class type, for example, in the way we discussed in the previous section. In some situations you may need to cast it to its actual class type, perhaps to call a class specific method. If you try to make an illegal cast, an exception will be thrown, and your program will end, unless you have made provision for catching it. One way to obviate this situation is to test that the object is the type you expect before you make the cast. We saw earlier in this chapter how we could use the getClass() method to obtain the Class object corresponding to the class type, and how we could compare it to a Class instance for the class we are looking for. You can also do this using the operator instanceof. For example, suppose you have a variable, pet, of type Animal, and you want to cast it to type Duck. You could code this as: Duck aDuck; // Declare a duck if(pet instanceof Duck) { aDuck = (Duck)pet; // It is a duck so the cast is OK aDuck.layEgg(); // and we can have an egg for tea } If pet does not refer to a Duck object, an attempt to cast the object referenced by pet to Duck would cause an exception to be thrown. This code fragment will only execute the cast and lay an egg if pet does point to a Duck object. The code fragment above could have been written much more concisely as: if(pet instanceof Duck) ((Duck)pet).layEgg(); // It is a duck so we can have an egg for tea So what is the difference between this and using getClass()? Well, it's quite subtle. The instanceof operator checks whether a cast of the object referenced by the left operand to the type specified by the right operand is legal. The result will be true if the object is the same type as the right operand, or of any subclass type. We can illustrate the difference by choosing a slightly different example. Suppose pet stores a reference to an object of type Spaniel. We want to call a method defined in the Dog class so we need to check that pet does really reference a Dog object. We can check for whether or not we have a Dog object with the statements: if(pet instanceof Dog) System.out.println("We have a dog!"); else System.out.println("It's definitely not a dog!"); We will get confirmation that we have a Dog object here even though it is actually a Spaniel object. This is fine though for casting purposes. As long as the Dog class is in the class hierarchy for the object, the cast will work OK, so the operator is telling us what we need to know. However, suppose we write: if(pet.getClass() == Dog.class) System.out.println("We have a dog!"); else System.out.println("It's definitely not a dog!"); Here the if expression will be false because the class type of the object is Spaniel, so its Class object is different from that of Dog.class – we would have to write Spaniel.class to get the value true from the if expression. We can conclude from this that for casting purposes you should always use the instanceof operator to check the type of a reference. You only need to resort to checking the Class object corresponding to a reference when you need to confirm the exact type of the reference.
http://www.yaldex.com/java_tutorial/0855485157.htm
CC-MAIN-2016-44
refinedweb
1,599
67.38
Make Custom Pictures For Gridworld Actors << AntFarm | GridworldTrailIndex | Custom ActorWorld that Reports the Furthest Bug >> If you save a gif file with the same name as your Actor, it will will be shown in your instance of ActorWorld. For best results: - Make it square ( like 48 x 48 pixels, 96 x 96, or 480 x 480) - Make it shades of grey, black and white (so it can be tinted when you change colors - Put the head at the top (so it moves head first) You can make the grid larger or smaller with Ctrl-PageUp or Ctrl-PageDown (Cmd-PgUp and Cmd-PgDwn on a Mac) Watch a Video Tutorial on how to have your images appear in GridWorld. import info.gridworld.actor.Bug; public class Earwig extends Bug { public Earwig(){ super(); } } I made this by tracing a photo of an Earwig using a transparent layer in GIMP (an open source application like PhotoShop). I found a photo of an earwig, made a new file in GIMP that was 480 x 480, imported the photo into a layer, and scaled it down. Then I made a transparent layer over the photo layer with a grey brush and traced. I turned off the visibility of the photo and saved it as Earwig.gif . I made this by taking a photo, and erasing the edges. You may wish to make the image greyscale so it takes the color better when you setColor() of your Actor. Gridworld will only pay attention to black, so you will notice that the tires will always be black, but the red tail light will change color if the Actor changes its color. If instead you wish to have the colors to always be displayed as the original, then you should set the Actor's color to null with the line setColor(null).
https://mathorama.com/apcs/pmwiki.php?n=Main.MakeCustomPicturesForGridworldActors
CC-MAIN-2021-04
refinedweb
305
58.45
Tutorial: Use Azure Key Vault with an Azure web app in .NET Azure Key Vault helps you protect secrets such as API keys and database connection strings. It provides you with access to your applications, services, and IT resources. In this tutorial, you learn how to create an Azure web application that can read information from an Azure key vault. The process uses managed identities for Azure resources. For more information about Azure web applications, see Azure App Service. The tutorial shows you how to: - Create a key vault. - Add a secret to the key vault. - Retrieve a secret from the key vault. - Create an Azure web app. - Enable a managed identity for the web app. - Assign permission for the web app. - Run the web app on Azure. Before you begin, read Key Vault basic concepts. If you don’t have an Azure subscription, create a free account. Prerequisites - For Windows: .NET Core 2.1 SDK or later - For Mac: Visual Studio for Mac - For Windows, Mac, and Linux: - Git - This tutorial requires that you run the Azure CLI locally. You must have the Azure CLI version 2.0.4 or later installed. Run az --versionto find the version. If you need to install or upgrade the CLI, see Install Azure CLI 2.0. - .NET Core About Managed Service Identity Azure Key Vault stores credentials securely, so they're not displayed in your code. However, you need to authenticate to Azure Key Vault to retrieve your keys. To authenticate to Key Vault, you need a credential. It's a classic bootstrap dilemma. Managed Service Identity (MSI) solves this issue by providing a bootstrap identity that simplifies the process. When you enable MSI for an Azure service, such as Azure Virtual Machines, Azure App Service, or Azure Functions, Azure creates a service principal. MSI does this for the instance of the service in Azure Active Directory (Azure AD) and injects the service principal credentials into that instance. Next, to get an access token, your code calls a local metadata service that's available on the Azure resource. Your code uses the access token that it gets from the local MSI endpoint to authenticate to an Azure Key Vault service. Log in to Azure To log in to Azure by using the Azure CLI, enter: az login Create a resource group An Azure resource group is a logical container into which Azure resources are deployed and managed. Create a resource group by using the az group create command. Then, select a resource group name and fill in the placeholder. The following example creates a resource group in the West US location: # To list locations: az account list-locations --output table az group create --name "<YourResourceGroupName>" --location "West US" You use this resource group throughout this tutorial. Create a key vault To create a key vault in your resource group, provide the following information: - Key vault name: a string of 3 to 24 characters that can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-) - Resource group name - Location: West US In the Azure CLI, enter the following command: az keyvault create --name "<YourKeyVaultName>" --resource-group "<YourResourceGroupName>" --location "West US" At this point, your Azure account is the only one that's authorized to perform operations on this new vault. Add a secret to the key vault Now you can add a secret. It might be a SQL connection string or any other information that you need to keep both secure and available to your application. To create a secret in the key vault called AppSecret, enter the following command: az keyvault secret set --vault-name "<YourKeyVaultName>" --name "AppSecret" --value "MySecret" This secret stores the value MySecret. To view the value that's contained in the secret as plain text, enter the following command: az keyvault secret show --name "AppSecret" --vault-name "<YourKeyVaultName>" This command displays the secret information, including the URI. After you complete these steps, you should have a URI to a secret in a key vault. Make note of this information for later use in this tutorial. Create a .NET Core web app To create a .NET Core web app and publish it to Azure, follow the instructions in Create an ASP.NET Core web app in Azure. You can also watch this video: Open and edit the solution Go to the Pages > About.cshtml.cs file. Install these NuGet packages: Import the following code to the About.cshtml.cs file: using Microsoft.Azure.KeyVault; using Microsoft.Azure.KeyVault.Models; using Microsoft.Azure.Services.AppAuthentication; Your code in the AboutModel class should look like this: public class AboutModel : PageModel { public string Message { get; set; } public async Task OnGetAsync() { /// Thrown when the operation returned an invalid status code /// </exception> catch (KeyVaultErrorException keyVaultException) { Message = keyVaultException.Message; if((int)keyVaultException.Response.StatusCode == 429) retry = true; } } // This method implements exponential backoff if there are 429 errors from Azure Key Vault private static long getWaitTime(int retryCount) { long waitTime = ((long)Math.Pow(2, retryCount) * 100L); return waitTime; } // This method fetches a token from Azure Active Directory, which can then be provided to Azure Key Vault to authenticate public async Task<string> GetAccessTokenAsync() { var azureServiceTokenProvider = new AzureServiceTokenProvider(); string accessToken = await azureServiceTokenProvider.GetAccessTokenAsync(""); return accessToken; } } Run the web app - On the main menu of Visual Studio 2017, select Debug > Start, with or without debugging. - In the browser, go to the About page. The value for AppSecret is displayed. Enable a managed identity Azure Key Vault provides a way to securely store credentials and other secrets, but your code needs to authenticate to Key Vault to retrieve them. Managed identities for Azure resources overview helps to solve this problem by giving Azure services an automatically managed identity in Azure AD. You can use this identity to authenticate to any service that supports Azure AD authentication, including Key Vault, without having to display credentials in your code. In the Azure CLI, to create the identity for this application, run the assign-identity command: az webapp identity assign --name "<YourAppName>" --resource-group "<YourResourceGroupName>" Replace <YourAppName> with the name of the published app on Azure. For example, if your published app name was MyAwesomeapp.azurewebsites.net, replace <YourAppName> with MyAwesomeapp. Make a note of the PrincipalId when you publish the application to Azure. The output of the command in step 1 should be in the following format: { "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", "type": "SystemAssigned" } Note The command in this procedure is the equivalent of going to the Azure portal and switching the Identity / System assigned setting to On in the web application properties. Assign permissions to your app Replace <YourKeyVaultName> with the name of your key vault, and replace <PrincipalId> with the value of the PrincipalId in the following command: az keyvault set-policy --name '<YourKeyVaultName>' --object-id <PrincipalId> --secret-permissions get list This command gives the identity (MSI) of the app service permission to do get and list operations on your key vault. Publish the web app to Azure Publish your web app to Azure once again to verify that your live web app can fetch the secret value. - In Visual Studio, select the key-vault-dotnet-core-quickstart project. - Select Publish > Start. - Select Create. When you run the application, you should see that it can retrieve your secret value. Now, you've successfully created a web app in .NET that stores and fetches its secrets from your key vault. Clean up resources When they are no longer needed, you can delete the virtual machine and your key vault. Next steps Feedback We'd love to hear your thoughts. Choose the type you'd like to provide: Our feedback system is built on GitHub Issues. Read more on our blog.
https://docs.microsoft.com/en-us/azure/key-vault/tutorial-net-create-vault-azure-web-app
CC-MAIN-2019-13
refinedweb
1,290
54.83
Package: unixcw Version: 2.3-13 Severity: important Tags: patch ----- [N7DR]. ----- [KA6MAL]). Attached patch fixes the problem by not trying invalid ioctl. # # * Fix invalid ioctl which breaks CW output on some devices (LP: #511676). # - src/cwlib/cwlib.c: do not try to use mixer volume ioctl on audio device. # # -- Kamal Mostafa <[email protected]> Wed, 27 Jan 2010 19:40:14 -0800 # === modified file 'src/cwlib/cwlib.c' --- a/src/cwlib/cwlib.c 2006-12-18 09:45:19 +0000 +++ b/src/cwlib/cwlib.c 2010-01-28 03:43:46 +0000 @@ -2138,6 +2138,18 @@ { int read_volume, mixer, device_mask; +/* + * [Kamal Mostafa <[email protected]>] + * This attempt to use a mixer ioctl on the audio device is invalid! I do not + * think that this ioctl could ever have worked. Most audio devices will + * just return an error, allowing this routine to proceed to try the volume + * control methods on the (proper) mixer device. However, some audio devices + * actually *do* have an ioctl (unrelated to volume control) corresponding to + * this ioctl number -- if they return 0, then this routine is tricked into + * thinking that it has set the volume when in fact it has not. + * + */ +#if 0 /* Try to use the main /dev/audio device for ioctls first. */ if (ioctl (cw_sound_descriptor, MIXER_READ (SOUND_MIXER_PCM), &read_volume) == 0) @@ -2145,6 +2157,7 @@ *volume = read_volume; return RC_SUCCESS; } +#endif /* Volume not found; try the mixer PCM channel volume instead. */ mixer = open (cw_mixer_device, O_RDONLY | O_NONBLOCK); @@ -2174,6 +2187,8 @@ *volume = read_volume; close (mixer); + if (cw_is_debugging_internal (CW_DEBUG_KEYING)) + fprintf(stderr, "cw: volume control is SOUND_MIXER_PCM\n"); return RC_SUCCESS; } else @@ -2191,6 +2206,8 @@ *volume = read_volume; close (mixer); + if (cw_is_debugging_internal (CW_DEBUG_KEYING)) + fprintf(stderr, "cw: volume control is SOUND_MIXER_VOLUME\n"); return RC_SUCCESS; } } @@ -2216,11 +2233,19 @@ { int mixer, device_mask; +/* + * [Kamal Mostafa <[email protected]>] + * This attempt to use a mixer ioctl on the audio device is invalid! + * (see above). + */ +#if 0 /* Try to use the main /dev/audio device for ioctls first. */ if (ioctl (cw_sound_descriptor, MIXER_WRITE (SOUND_MIXER_PCM), &volume) == 0) return RC_SUCCESS; +#endif + /* Try the mixer PCM channel volume instead. */ mixer = open (cw_mixer_device, O_RDWR | O_NONBLOCK); if (mixer == -1)
https://lists.debian.org/debian-qa-packages/2010/01/msg00394.html
CC-MAIN-2017-22
refinedweb
346
56.35
In this section you will learn how to declare array in java. As we know an array is collection of single type or similar data type. When an array is created its length is determined. An array can hold fixed number of value and can be accessed by a square bracket called index. Syntax : data-type[] array-name; int[] arr ; (declares an array of integer type) Like declaration of variable of other type, array is declared and it has two parts, one is type and other is array name. By type we mean the data type of element and index means that this variable holds an array. The index will hold the integer value that indicate the size of array. The array name can be anything as you want. While declaring an array is just to inform compiler that this variable will hold an array. Similarly like above declaration you can declare array of any type as follows: String[] arr; boolean[] arr; float[] arr; double[] arr; char[] arr; You can also put the square bracket after the array-name: int arr[]; For allocating array to memory then declare the array with new keyword. int[] a=new int[5]; These declaration indicate, it will allocate space in the memory with array size 5 which hold five element. To declare two-dimensional array place two square bracket with type or array-name: int[][] array-name; These are two index indicating rows and column of array. It is called an array of array. Example : Code to display the element array : public class ArrayDeclaton { public static void main(String args[]) { int a[]={10,20,30,40,50}; System.out.println("First element of array = "+a[0]); System.out.println("Second element of array = "+a[1]); System.out.println("Third element of array = "+a[2]); System.out.println("Fourth element of array = "+a[3]); System.out.println("Fifth element of array = "+a[4]); } } Output from this program is : If you enjoyed this post then why not add us on Google+? Add us to your Circles Liked it! Share this Tutorial Discuss: Java Array declaration Post your Comment
http://roseindia.net/java/beginners/arrayexamples/array-declaration-in-java.shtml
CC-MAIN-2014-42
refinedweb
352
56.15
Chances are you might have heard of io_uring. It first appeared in Linux 5.1, back in 2019, and was advertised as the new API for asynchronous I/O. Its goal was to be an alternative to the deemed-to-be-broken-beyond-repair AIO, the “old” asynchronous I/O API. Calling io_uring just an asynchronous I/O API doesn’t do it justice, though. Underneath the API calls, io_uring is a full-blown runtime for processing I/O requests. One that spawns threads, sets up work queues, and dispatches requests for processing. All this happens “in the background” so that the user space process doesn’t have to, but can, block while waiting for its I/O requests to complete. A runtime that spawns threads and manages the worker pool for the developer makes life easier, but using it in a project begs the questions: 1. How many threads will be created for my workload by default? 2. How can I monitor and control the thread pool size? I could not find the answers to these questions in either the Efficient I/O with io_uring article, or the Lord of the io_uring guide – two well-known pieces of available documentation. And while a recent enough io_uring man page touches on the topic: By default, io_uringlimits the unbounded workers created to the maximum processor count set by RLIMIT_NPROCand the bounded workers is a function of the SQ ring size and the number of CPUs in the system. … it also leads to more questions: 3. What is an unbounded worker? 4. How does it differ from a bounded worker? Things seem a bit under-documented as is, hence this blog post. Hopefully, it will provide the clarity needed to put io_uring to work in your project when the time comes. Before we dig in, a word of warning. This post is not meant to be an introduction to io_uring. The existing documentation does a much better job at showing you the ropes than I ever could. Please give it a read first, if you are not familiar yet with the io_uring API. Not all I/O requests are created equal io_uring can perform I/O on any kind of file descriptor; be it a regular file or a special file, like a socket. However, the kind of file descriptor that it operates on makes a difference when it comes to the size of the worker pool. You see, I/O requests get classified into two categories by io_uring: io-wqdivides work into two categories: 1. Work that completes in a bounded time, like reading from a regular file or a block device. This type of work is limited based on the size of the SQ ring. 2. Work that may never complete, we call this unbounded work. The amount of workers here is limited by RLIMIT_NPROC. This answers the latter two of our open questions. Unbounded workers handle I/O requests that operate on neither regular files ( S_IFREG) nor block devices ( S_ISBLK). This is the case for network I/O, where we work with sockets ( S_IFSOCK), and other special files like character devices (e.g. /dev/null). We now also know that there are different limits in place for how many bounded vs unbounded workers there can be running. So we have to pick one before we dig further. Capping the unbounded worker pool size Pushing data through sockets is Cloudflare’s bread and butter, so this is what we are going to base our test workload around. To put it in io_uring lingo – we will be submitting unbounded work requests. While doing that, we will observe how io_uring goes about creating workers. To observe how io_uring goes about creating workers we will ask it to read from a UDP socket multiple times. No packets will arrive on the socket, so we will have full control over when the requests complete. Here is our test workload - udp_read.rs. $ ./target/debug/udp-read -h udp-read 0.1.0 read from UDP socket with io_uring USAGE: udp-read [FLAGS] [OPTIONS] FLAGS: -a, --async Set IOSQE_ASYNC flag on submitted SQEs -h, --help Prints help information -V, --version Prints version information OPTIONS: -c, --cpu <cpu>... CPU to run on when invoking io_uring_enter for Nth ring (specify multiple times) [default: 0] -w, --workers <max-unbound-workers> Maximum number of unbound workers per NUMA node (0 - default, that is RLIMIT_NPROC) [default: 0] -r, --rings <num-rings> Number io_ring instances to create per thread [default: 1] -t, --threads <num-threads> Number of threads creating io_uring instances [default: 1] -s, --sqes <sqes> Number of read requests to submit per io_uring (0 - fill the whole queue) [default: 0] While it is parametrized for easy experimentation, at its core it doesn’t do much. We fill the submission queue with read requests from a UDP socket and then wait for them to complete. But because data doesn’t arrive on the socket out of nowhere, and there are no timeouts set up, nothing happens. As a bonus, we have complete control over when requests complete, which will come in handy later. Let’s run the test workload to convince ourselves that things are working as expected. strace won’t be very helpful when using io_uring. We won’t be able to tie I/O requests to system calls. Instead, we will have to turn to in-kernel tracing. Thankfully, io_uring comes with a set of ready to use static tracepoints, which save us the trouble of digging through the source code to decide where to hook up dynamic tracepoints, known as kprobes. We can discover the tracepoints with perf list or bpftrace -l, or by browsing the events/ directory on the tracefs filesystem, usually mounted under /sys/kernel/tracing. $ sudo perf list 'io_uring:*' List of pre-defined events (to be used in -e): io_uring:io_uring_complete [Tracepoint event] io_uring:io_uring_cqring_wait [Tracepoint event] io_uring:io_uring_create [Tracepoint event] io_uring:io_uring_defer [Tracepoint event] io_uring:io_uring_fail_link [Tracepoint event] io_uring:io_uring_file_get [Tracepoint event] io_uring:io_uring_link [Tracepoint event] io_uring:io_uring_poll_arm [Tracepoint event] io_uring:io_uring_poll_wake [Tracepoint event] io_uring:io_uring_queue_async_work [Tracepoint event] io_uring:io_uring_register [Tracepoint event] io_uring:io_uring_submit_sqe [Tracepoint event] io_uring:io_uring_task_add [Tracepoint event] io_uring:io_uring_task_run [Tracepoint event] Judging by the number of tracepoints to choose from, io_uring takes visibility seriously. To help us get our bearings, here is a diagram that maps out paths an I/O request can take inside io_uring code annotated with tracepoint names – not all of them, just those which will be useful to us. Starting on the left, we expect our toy workload to push entries onto the submission queue. When we publish submitted entries by calling io_uring_enter(), the kernel consumes the submission queue and constructs internal request objects. A side effect we can observe is a hit on the io_uring:io_uring_submit_sqe tracepoint. $ sudo perf stat -e io_uring:io_uring_submit_sqe -- timeout 1 ./udp-read Performance counter stats for 'timeout 1 ./udp-read': 4096 io_uring:io_uring_submit_sqe 1.049016083 seconds time elapsed 0.003747000 seconds user 0.013720000 seconds sys But, as it turns out, submitting entries is not enough to make io_uring spawn worker threads. Our process remains single-threaded: $ ./udp-read & p=$!; sleep 1; ps -o thcount $p; kill $p; wait $p [1] 25229 THCNT 1 [1]+ Terminated ./udp-read This shows that io_uring is smart. It knows that sockets support non-blocking I/O, and they can be polled for readiness to read. So, by default, io_uring performs a non-blocking read on sockets. This is bound to fail with -EAGAIN in our case. What follows is that io_uring registers a wake-up call ( io_async_wake()) for when the socket becomes readable. There is no need to perform a blocking read, when we can wait to be notified. This resembles polling the socket with select() or [e]poll() from user space. There is no timeout, if we didn’t ask for it explicitly by submitting an IORING_OP_LINK_TIMEOUT request. io_uring will simply wait indefinitely. We can observe io_uring when it calls vfs_poll, the machinery behind non-blocking I/O, to monitor the sockets. If that happens, we will be hitting the io_uring:io_uring_poll_arm tracepoint. Meanwhile, the wake-ups that follow, if the polled file becomes ready for I/O, can be recorded with the io_uring:io_uring_poll_wake tracepoint embedded in io_async_wake() wake-up call. This is what we are experiencing. io_uring is polling the socket for read-readiness: $ sudo bpftrace -lv t:io_uring:io_uring_poll_arm tracepoint:io_uring:io_uring_poll_arm void * ctx void * req u8 opcode u64 user_data int mask int events $ sudo bpftrace -e 't:io_uring:io_uring_poll_arm { @[probe, args->opcode] = count(); } i:s:1 { exit(); }' -c ./udp-read Attaching 2 probes... @[tracepoint:io_uring:io_uring_poll_arm, 22]: 4096 $ sudo bpftool btf dump id 1 format c | grep 'IORING_OP_.*22' IORING_OP_READ = 22, $ To make io_uring spawn worker threads, we have to force the read requests to be processed concurrently in a blocking fashion. We can do this by marking the I/O requests as asynchronous. As io_uring_enter(2) man-page says: IOSQE_ASYNC Normal operation for io_uring is to try and issue an sqe as non-blocking first, and if that fails, execute it in an async manner. To support more efficient over‐ lapped operation of requests that the application knows/assumes will always (or most of the time) block, the application can ask for an sqe to be issued async from the start. Available since 5.6. This will trigger a call to io_queue_sqe() → io_queue_async_work(), which deep down invokes create_io_worker() → create_io_thread() to spawn a new task to process work. Remember that last function, create_io_thread() – it will come up again later. Our toy program sets the IOSQE_ASYNC flag on requests when we pass the --async command line option to it. Let’s give it a try: $ ./udp-read --async & pid=$!; sleep 1; ps -o pid,thcount $pid; kill $pid; wait $pid [2] 3457597 PID THCNT 3457597 4097 [2]+ Terminated ./udp-read --async $ The thread count went up by the number of submitted I/O requests (4,096). And there is one extra thread - the main thread. io_uring has spawned workers. If we trace it again, we see that requests are now taking the blocking-read path, and we are hitting the io_uring:io_uring_queue_async_work tracepoint on the way. $ sudo perf stat -a -e io_uring:io_uring_poll_arm,io_uring:io_uring_queue_async_work -- ./udp-read --async ^C./udp-read: Interrupt Performance counter stats for 'system wide': 0 io_uring:io_uring_poll_arm 4096 io_uring:io_uring_queue_async_work 1.335559294 seconds time elapsed $ In the code, the fork happens in the io_queue_sqe() function, where we are now branching off to io_queue_async_work(), which contains the corresponding tracepoint. We got what we wanted. We are now using the worker thread pool. However, having 4,096 threads just for reading one socket sounds like overkill. If we were to limit the number of worker threads, how would we go about that? There are four ways I know of. Method 1 - Limit the number of in-flight requests If we take care to never have more than some number of in-flight blocking I/O requests, then we will have more or less the same number of workers. This is because: io_uringspawns workers only when there is work to process. We control how many requests we submit and can throttle new submissions based on completion notifications. io_uringretires workers when there is no more pending work in the queue. Although, there is a grace period before a worker dies. The downside of this approach is that by throttling submissions, we reduce batching. We will have to drain the completion queue, refill the submission queue, and switch context with io_uring_enter() syscall more often. We can convince ourselves that this method works by tweaking the number of submitted requests, and observing the thread count as the requests complete. The --sqes <n> option (submission queue entries) controls how many read requests get queued by our workload. If we want a request to complete, we simply need to send a packet toward the UDP socket we are reading from. The workload does not refill the submission queue. $ ./udp-read --async --sqes 8 & pid=$! [1] 7264 $ ss -ulnp | fgrep pid=$pid UNCONN 0 0 127.0.0.1:52763 0.0.0.0:* users:(("udp-read",pid=7264,fd=3)) $ ps -o thcount $pid; nc -zu 127.0.0.1 52763; echo -e '\U1F634'; sleep 5; ps -o thcount $pid THCNT 9 😴 THCNT 8 $ After sending one packet, the run queue length shrinks by one, and the thread count soon follows. This works, but we can do better. Method 2 - Configure IORING_REGISTER_IOWQ_MAX_WORKERS In 5.15 the io_uring_register() syscall gained a new command for setting the maximum number of bound and unbound workers. IORING_REGISTER_IOWQ_MAX_WORKERS By default, io_uring limits the unbounded workers cre‐ ated to the maximum processor count set by RLIMIT_NPROC and the bounded workers is a function of the SQ ring size and the number of CPUs in the system. Sometimes this can be excessive (or too little, for bounded), and this command provides a way to change the count per ring (per NUMA node) instead. arg must be set to an unsigned int pointer to an array of two values, with the values in the array being set to the maximum count of workers per NUMA node. Index 0 holds the bounded worker count, and index 1 holds the unbounded worker count. On successful return, the passed in array will contain the previous maximum va‐ lyes for each type. If the count being passed in is 0, then this command returns the current maximum values and doesn't modify the current setting. nr_args must be set to 2, as the command takes two values. Available since 5.15. By the way, if you would like to grep through the io_uring man pages, they live in the liburing repo maintained by Jens Axboe – not the go-to repo for Linux API man-pages maintained by Michael Kerrisk. Since it is a fresh addition to the io_uring API, the io-uring Rust library we are using has not caught up yet. But with a bit of patching, we can make it work. We can tell our toy program to set IORING_REGISTER_IOWQ_MAX_WORKERS (= 19 = 0x13) by running it with the --workers <N> option: $ strace -o strace.out -e io_uring_register ./udp-read --async --workers 8 & [1] 3555377 $ pstree -pt $! strace(3555377)───udp-read(3555380)─┬─{iou-wrk-3555380}(3555381) ├─{iou-wrk-3555380}(3555382) ├─{iou-wrk-3555380}(3555383) ├─{iou-wrk-3555380}(3555384) ├─{iou-wrk-3555380}(3555385) ├─{iou-wrk-3555380}(3555386) ├─{iou-wrk-3555380}(3555387) └─{iou-wrk-3555380}(3555388) $ cat strace.out io_uring_register(4, 0x13 /* IORING_REGISTER_??? */, 0x7ffd9b2e3048, 2) = 0 $ This works perfectly. We have spawned just eight io_uring worker threads to handle 4k of submitted read requests. Question remains - is the set limit per io_uring instance? Per thread? Per process? Per UID? Read on to find out. Method 3 - Set RLIMIT_NPROC resource limit A resource limit for the maximum number of new processes is another way to cap the worker pool size. The documentation for the IORING_REGISTER_IOWQ_MAX_WORKERS command mentions this. This resource limit overrides the IORING_REGISTER_IOWQ_MAX_WORKERS setting, which makes sense because bumping RLIMIT_NPROC above the configured hard maximum requires CAP_SYS_RESOURCE capability. The catch is that the limit is tracked per UID within a user namespace. Setting the new process limit without using a dedicated UID or outside a dedicated user namespace, where other processes are running under the same UID, can have surprising effects. Why? io_uring will try over and over again to scale up the worker pool, only to generate a bunch of -EAGAIN errors from create_io_worker() if it can’t reach the configured RLIMIT_NPROC limit: $ prlimit --nproc=8 ./udp-read --async & [1] 26348 $ ps -o thcount $! THCNT 3 $ sudo bpftrace --btf -e 'kr:create_io_thread { @[retval] = count(); } i:s:1 { print(@); clear(@); } END { clear(@); }' -c '/usr/bin/sleep 3' | cat -s Attaching 3 probes... @[-11]: 293631 @[-11]: 306150 @[-11]: 311959 $ mpstat 1 3 Linux 5.15.9-cloudflare-2021.12.8 (bullseye) 01/04/22 _x86_64_ (4 CPU) 🔥🔥🔥 02:52:46 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle 02:52:47 all 0.00 0.00 25.00 0.00 0.00 0.00 0.00 0.00 0.00 75.00 02:52:48 all 0.00 0.00 25.13 0.00 0.00 0.00 0.00 0.00 0.00 74.87 02:52:49 all 0.00 0.00 25.30 0.00 0.00 0.00 0.00 0.00 0.00 74.70 Average: all 0.00 0.00 25.14 0.00 0.00 0.00 0.00 0.00 0.00 74.86 $ We are hogging one core trying to spawn new workers. This is not the best use of CPU time. So, if you want to use RLIMIT_NPROC as a safety cap over the IORING_REGISTER_IOWQ_MAX_WORKERS limit, you better use a “fresh” UID or a throw-away user namespace: $ unshare -U prlimit --nproc=8 ./udp-read --async --workers 16 & [1] 3555870 $ ps -o thcount $! THCNT 9 Anti-Method 4 - cgroup process limit - pids.max file There is also one other way to cap the worker pool size – limit the number of tasks (that is, processes and their threads) in a control group. It is an anti-example and a potential misconfiguration to watch out for, because just like with RLIMIT_NPROC, we can fall into the same trap where io_uring will burn CPU: $ systemd-run --user -p TasksMax=128 --same-dir --collect --service-type=exec ./udp-read --async Running as unit: run-ra0336ff405f54ad29726f1e48d6a3237.service $ systemd-cgls --user-unit run-ra0336ff405f54ad29726f1e48d6a3237.service Unit run-ra0336ff405f54ad29726f1e48d6a3237.service (/user.slice/user-1000.slice/[email protected]/app.slice/run-ra0336ff405f54ad29726f1e48d6a3237.service): └─823727 /blog/io-uring-worker-pool/./udp-read --async $ cat /sys/fs/cgroup/user.slice/user-1000.slice/[email protected]/app.slice/run-ra0336ff405f54ad29726f1e48d6a3237.service/pids.max 128 $ ps -o thcount 823727 THCNT 128 $ sudo bpftrace --btf -e 'kr:create_io_thread { @[retval] = count(); } i:s:1 { print(@); clear(@); }' Attaching 2 probes... @[-11]: 163494 @[-11]: 173134 @[-11]: 184887 ^C @[-11]: 76680 $ systemctl --user stop run-ra0336ff405f54ad29726f1e48d6a3237.service $ Here, we again see io_uring wasting time trying to spawn more workers without success. The kernel does not let the number of tasks within the service’s control group go over the limit. Okay, so we know what is the best and the worst way to put a limit on the number of io_uring workers. But is the limit per io_uring instance? Per user? Or something else? One ring, two ring, three ring, four … Your process is not limited to one instance of io_uring, naturally. In the case of a network proxy, where we push data from one socket to another, we could have one instance of io_uring servicing each half of the proxy. How many worker threads will be created in the presence of multiple io_urings? That depends on whether your program is single- or multithreaded. In the single-threaded case, if the main thread creates two io_urings, and configures each io_uring to have a maximum of two unbound workers, then: $ unshare -U ./udp-read --async --threads 1 --rings 2 --workers 2 & [3] 3838456 $ pstree -pt $! udp-read(3838456)─┬─{iou-wrk-3838456}(3838457) └─{iou-wrk-3838456}(3838458) $ ls -l /proc/3838456/fd total 0 lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 0 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 1 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 2 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 3 -> 'socket:[279241]' lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 4 -> 'anon_inode:[io_uring]' lrwx------ 1 vagrant vagrant 64 Dec 26 03:32 5 -> 'anon_inode:[io_uring]' … a total of two worker threads will be spawned. While in the case of a multithreaded program, where two threads create one io_uring each, with a maximum of two unbound workers per ring: $ unshare -U ./udp-read --async --threads 2 --rings 1 --workers 2 & [2] 3838223 $ pstree -pt $! udp-read(3838223)─┬─{iou-wrk-3838224}(3838227) ├─{iou-wrk-3838224}(3838228) ├─{iou-wrk-3838225}(3838226) ├─{iou-wrk-3838225}(3838229) ├─{udp-read}(3838224) └─{udp-read}(3838225) $ ls -l /proc/3838223/fd total 0 lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 0 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 1 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 2 -> /dev/pts/0 lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 3 -> 'socket:[279160]' lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 4 -> 'socket:[279819]' lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 5 -> 'anon_inode:[io_uring]' lrwx------ 1 vagrant vagrant 64 Dec 26 02:53 6 -> 'anon_inode:[io_uring]' … four workers will be spawned in total – two for each of the program threads. This is reflected by the owner thread ID present in the worker’s name ( iou-wrk-<tid>). So you might think - “It makes sense! Each thread has their own dedicated pool of I/O workers, which service all the io_uring instances operated by that thread.” And you would be right1. If we follow the code – task_struct has an instance of io_uring_task, aka io_uring context for the task2. Inside the context, we have a reference to the io_uring work queue ( struct io_wq), which is actually an array of work queue entries ( struct io_wqe). More on why that is an array soon. Moving down to the work queue entry, we arrive at the work queue accounting table ( struct io_wqe_acct [2]), with one record for each type of work – bounded and unbounded. This is where io_uring keeps track of the worker pool limit ( max_workers) the number of existing workers ( nr_workers). The perhaps not-so-obvious consequence of this arrangement is that setting just the RLIMIT_NPROC limit, without touching IORING_REGISTER_IOWQ_MAX_WORKERS, can backfire for multi-threaded programs. See, when the maximum number of workers for an io_uring instance is not configured, it defaults to RLIMIT_NPROC. This means that io_uring will try to scale the unbounded worker pool to RLIMIT_NPROC for each thread that operates on an io_uring instance. A multi-threaded process, by definition, creates threads. Now recall that the process management in the kernel tracks the number of tasks per UID within the user namespace. Each spawned thread depletes the quota set by RLIMIT_NPROC. As a consequence, io_uring will never be able to fully scale up the worker pool, and will burn the CPU trying to do so. $ unshare -U prlimit --nproc=4 ./udp-read --async --threads 2 --rings 1 & [1] 26249 [email protected]:/blog/io-uring-worker-pool$ pstree -pt $! udp-read(26249)─┬─{iou-wrk-26251}(26252) ├─{iou-wrk-26251}(26253) ├─{udp-read}(26250) └─{udp-read}(26251) $ sudo bpftrace --btf -e 'kretprobe:create_io_thread { @[retval] = count(); } interval:s:1 { print(@); clear(@); } END { clear(@); }' -c '/usr/bin/sleep 3' | cat -s Attaching 3 probes... @[-11]: 517270 @[-11]: 509508 @[-11]: 461403 $ mpstat 1 3 Linux 5.15.9-cloudflare-2021.12.8 (bullseye) 01/04/22 _x86_64_ (4 CPU) 🔥🔥🔥 02:23:23 CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle 02:23:24 all 0.00 0.00 50.13 0.00 0.00 0.00 0.00 0.00 0.00 49.87 02:23:25 all 0.00 0.00 50.25 0.00 0.00 0.00 0.00 0.00 0.00 49.75 02:23:26 all 0.00 0.00 49.87 0.00 0.00 0.50 0.00 0.00 0.00 49.62 Average: all 0.00 0.00 50.08 0.00 0.00 0.17 0.00 0.00 0.00 49.75 $ NUMA, NUMA, yay 🎶 Lastly, there’s the case of NUMA systems with more than one memory node. io_uring documentation clearly says that IORING_REGISTER_IOWQ_MAX_WORKERS configures the maximum number of workers per NUMA node. That is why, as we have seen, io_wq.wqes is an array. It contains one entry, struct io_wqe, for each NUMA node. If your servers are NUMA systems like Cloudflare, that is something to take into account. Luckily, we don’t need a NUMA machine to experiment. QEMU happily emulates NUMA architectures. If you are hardcore enough, you can configure the NUMA layout with the right combination of -smp and -numa options. But why bother when the libvirt provider for Vagrant makes it so simple to configure a 2 node / 4 CPU layout: libvirt.numa_nodes = [ {:cpus => "0-1", :memory => "2048"}, {:cpus => "2-3", :memory => "2048"} ] Let’s confirm how io_uring behaves on a NUMA system. Here’s our NUMA layout with two vCPUs per node ready for experimentation: $ numactl -H available: 2 nodes (0-1) node 0 cpus: 0 1 node 0 size: 1980 MB node 0 free: 1802 MB node 1 cpus: 2 3 node 1 size: 1950 MB node 1 free: 1751 MB node distances: node 0 1 0: 10 20 1: 20 10 If we once again run our test workload and ask it to create a single io_uring with a maximum of two workers per NUMA node, then: $ ./udp-read --async --threads 1 --rings 1 --workers 2 & [1] 693 $ pstree -pt $! udp-read(693)─┬─{iou-wrk-693}(696) └─{iou-wrk-693}(697) … we get just two workers on a machine with two NUMA nodes. Not the outcome we were hoping for. Why are we not reaching the expected pool size of <max workers> × <# NUMA nodes> = 2 × 2 = 4 workers? And is it possible to make it happen? Reading the code reveals that – yes, it is possible. However, for the per-node worker pool to be scaled up for a given NUMA node, we have to submit requests, that is, call io_uring_enter(), from a CPU that belongs to that node. In other words, the process scheduler and thread CPU affinity have a say in how many I/O workers will be created. We can demonstrate the effect that jumping between CPUs and NUMA nodes has on the worker pool by operating two instances of io_uring. We already know that having more than one io_uring instance per thread does not impact the worker pool limit. This time, however, we are going to ask the workload to pin itself to a particular CPU before submitting requests with the --cpu option – first it will run on CPU 0 to enter the first ring, then on CPU 2 to enter the second ring. $ strace -e sched_setaffinity,io_uring_enter ./udp-read --async --threads 1 --rings 2 --cpu 0 --cpu 2 --workers 2 & sleep 0.1 && echo [1] 6949 sched_setaffinity(0, 128, [0]) = 0 io_uring_enter(4, 4096, 0, 0, NULL, 128) = 4096 sched_setaffinity(0, 128, [2]) = 0 io_uring_enter(5, 4096, 0, 0, NULL, 128) = 4096 io_uring_enter(4, 0, 1, IORING_ENTER_GETEVENTS, NULL, 128 $ pstree -pt 6949 strace(6949)───udp-read(6953)─┬─{iou-wrk-6953}(6954) ├─{iou-wrk-6953}(6955) ├─{iou-wrk-6953}(6956) └─{iou-wrk-6953}(6957) $ Voilà. We have reached the said limit of <max workers> x <# NUMA nodes>. Outro That is all for the very first installment of the Missing Manuals. io_uring has more secrets that deserve a write-up, like request ordering or handling of interrupted syscalls, so Missing Manuals might return soon. In the meantime, please tell us what topic would you nominate to have a Missing Manual written? Oh, and did I mention that if you enjoy putting cutting edge Linux APIs to use, we are hiring? Now also remotely 🌎. _____ 1And it probably does not make the users of runtimes that implement a hybrid threading model, like Golang, too happy. 2To the Linux kernel, processes and threads are just kinds of tasks, which either share or don’t share some resources.
https://blog.cloudflare.com/missing-manuals-io_uring-worker-pool/
CC-MAIN-2022-21
refinedweb
4,579
71.44
Introduction instruments. However, it is not uncommon to hear it being used for small to medium web and desktop applications. Creating a Database and Making a Connection. import sqlite3 con = sqlite3.connect('/path/to/file/db.sqlite3') You will find that in everyday database programming you will be constantly creating connections to your database, so it is a good idea to wrap this simple connection statement into a reusable generalized function. # db_utils.py import os import sqlite3 # create a default path to connect to and create (if necessary) a database # called 'database.sqlite3' in the same directory as this script DEFAULT_PATH = os.path.join(os.path.dirname(__file__), 'database.sqlite3') def db_connect(db_path=DEFAULT_PATH): con = sqlite3.connect(db_path) return con Creating Tables In order to create database tables you need to have an idea of the structure of the data you are interested in storing. There are many design considerations that go into defining the tables of a relational database, which entire books have been written about. I will not be going into the details of this practice and will instead leave it up to reader to further investigate. However, to aid in our discussion of SQLite database programming with Python I will be working off the premise that a database needs to be created for a fictitious book store that has the below data already collected on book sales. Upon inspecting this data it is evident that it contains information about customers, products, and orders. A common pattern in database design for transactional systems of this type are to break the orders into two additional tables, orders and line items (sometimes referred to as order details) to achieve greater normalization. In a Python interpreter, in the same directory as the db_utils.py module defined previously, enter the SQL for creating the customers and products tables follows: >>> from db_utils import db_connect >>> con = db_connect() # connect to the database >>> cur = con.cursor() # instantiate a cursor obj >>>>> cur.execute(customers_sql) >>>>> cur.execute(products_sql) The above code creates a connection object then uses it to instantiate a cursor object. The cursor object is used to execute SQL statements on the SQLite database. With the cursor created I then wrote the SQL to create the customers table, giving it a primary key along with a first and last name text field and assign it to a variable called customers_sql. I then call the execute(...) method of the cursor object passing it the customers_sql variable. I then create a products table in a similar way. You can query the sqlite_master table, a built-in SQLite metadata table, to verify that the above commands were successful. To see all the tables in the currently connected database query the name column of the sqlite_master table where the type is equal to "table". >>> cur.execute("SELECT name FROM sqlite_master WHERE type='table'") <sqlite3.Cursor object at 0x104ff7ce0> >>> print(cur.fetchall()) [('customers',), ('products',)] To get a look at the schema of the tables query the sql column of the same table where the type is still "table" and the name is equal to "customers" and/or "products". >>> cur.execute("""SELECT sql FROM sqlite_master WHERE type='table' … AND name='customers'""") <sqlite3.Cursor object at 0x104ff7ce0> >>> print(cur.fetchone()[0]) CREATE TABLE customers ( id integer PRIMARY KEY, first_name text NOT NULL, last_name text NOT NULL) The next table to define will be the orders table which associates customers to orders via a foreign key and the date of their purchase. Since SQLite does not support an actual date/time data type (or data class to be consistent with the SQLite vernacular) all dates will be represented as text values. >>>>> cur.execute(orders_sql) The final table to define will be the line items table which gives a detailed accounting of the products in each order.>> cur.execute(lineitems_sql) In this section I will be demonstrating how to INSERT our sample data into the tables just created. A natural starting place would be to populate the products table first because without products we cannot have a sale and thus would not have the foreign keys to relate to the line items and orders. Looking at the sample data I see that there are four products: - Introduction to Combinatorics ($7.99) - A Guide to Writing Short Stories ($17.99) - Data Structures and Algorithms ($11.99) - Advanced Set Theory ($16.99) The workflow for executing INSERT statements is simply: - Connect to the database - Create a cursor object - Write a parameterized insert SQL statement and store as a variable - Call the execute method on the cursor object passing it the sql variable and the values, as a tuple, to be inserted into the table Given this general outline let us write some more code. >>> con = db_connect() >>> cur = con.cursor() >>>>> cur.execute(product_sql, ('Introduction to Combinatorics', 7.99)) >>> cur.execute(product_sql, ('A Guide to Writing Short Stories', 17.99)) >>> cur.execute(product_sql, ('Data Structures and Algorithms', 11.99)) >>> cur.execute(product_sql, ('Advanced Set Theory', 16.99)) The above code probably seems pretty obvious, but let me discuss it a bit as there are some important things going on here. The insert statement follows the standard SQL syntax except for the ? bit. The ?'s are actually placeholders in what is known as a "parameterized query". Parameterized queries are an important feature of essentially all database interfaces to modern high level programming languages such as the sqlite3 module in Python. This type of query serves to improve the efficiency of queries that are repeated several times. Perhaps more important, they also sanitize inputs that take the place of the ? placeholders which are passed in during the call to the execute method of the cursor object to prevent nefarious inputs leading to SQL injection. The following is a comic from the popular xkcd.com blog describing the dangers of SQL injection. To populate the remaining tables we are going to follow a slightly different pattern to change things up a bit. The workflow for each order, identified by a combination of customer first and last name and the purchase date, will be: - Insert the new customer into the customers table and retrieve its primary key id - Create an order entry based off the customer id and the purchase date then retrieve its primary key id - For each product in the order determine its primary key id and create a line item entry associating the order and the product To make things simpler on ourselves let us do a quick look up of all our products. For now do not worry too much about the mechanics of the SELECT SQL statement as we will devote a section to it shortly. >>> cur.execute("SELECT id, name, price FROM products") >>> formatted_result = [f"{id:<5}{name:<35}{price:>5}" for id, name, price in cur.fetchall()] >>> id, product,>> print('\n'.join([f"{id:<5}{product:<35}{price:>5}"] + formatted_result)) Id Product Price 1 Introduction to Combinatorics 7.99 2 A Guide to Writing Short Stories 17.99 3 Data Structures and Algorithms 11.99 4 Advanced Set Theory 16.99 The first order was placed on Feb 22, 1944 by Alan Turing who purchased Introduction to Combinatorics for $7.99. Start by making a new customer record for Mr. Turing then determine his primary key id by accessing the lastrowid field of the cursor object. >>>>> cur.execute(customer_sql, ('Alan', 'Turing')) >>> customer_id = cur.lastrowid >>> print(customer_id) 1 We can now create an order entry, collect the new order id value and associate it to a line item entry along with the product Mr. Turing ordered. >>>>>>> product_id = 1 >>> cur.execute(li_sql, (order_id, 1, 1, 7.99)) The remaining records are loaded exactly the same except for the order made to Donald Knuth, which will receive two line item entries. However, the repetitive nature of such a task is crying out the need to wrap these functionalities into reusable functions. In the db_utils.py module add the following code: def create_customer(con, first_name, last_name): sql = """ INSERT INTO customers (first_name, last_name) VALUES (?, ?)""" cur = con.cursor() cur.execute(sql, (first_name, last_name)) return cur.lastrowid def create_order(con, customer_id, date): sql = """ INSERT INTO orders (customer_id, date) VALUES (?, ?)""" cur = con.cursor() cur.execute(sql, (customer_id, date)) return cur.lastrowid def create_lineitem(con, order_id, product_id, qty, total): sql = """ INSERT INTO lineitems (order_id, product_id, quantity, total) VALUES (?, ?, ?, ?)""" cur = con.cursor() cur.execute(sql, (order_id, product_id, qty, total)) return cur.lastrowid Awh, now we can work with some efficiency! You will need to exit() your Python interpreter and reload it to get your new functions to become accessible in the interpreter. >>> from db_utils import db_connect, create_customer, create_order, create_lineitem >>> con = db_connect() >>> knuth_id = create_customer(con, 'Donald', 'Knuth') >>> knuth_order = create_order(con, knuth_id, '1967-07-03') >>> knuth_li1 = create_lineitem(con, knuth_order, 2, 1, 17.99) >>> knuth_li2 = create_lineitem(con, knuth_order, 3, 1, 11.99) >>> codd_id = create_customer(con, 'Edgar', 'Codd') >>> codd_order = create_order(con, codd_id, '1969-01-12') >>> codd_li = create_lineitem(con, codd_order, 4, 1, 16.99) I feel compelled to give one additional piece of advice as a student of software craftsmanship. When you find yourself doing multiple database manipulations (INSERTs in this case) in order to accomplish what is actually one cumulative task (ie, creating an order) it is best to wrap the subtasks (creating customer, order, then line items) into a single database transaction so you can either commit on success or rollback if an error occurs along the way. This would look something like this: try: codd_id = create_customer(con, 'Edgar', 'Codd') codd_order = create_order(con, codd_id, '1969-01-12') codd_li = create_lineitem(con, codd_order, 4, 1, 16.99) # commit the statements con.commit() except: # rollback all database actions since last commit con.rollback() raise RuntimeError("Uh oh, an error occurred ...") I want to finish this section with a quick demonstration of how to UPDATE an existing record in the database. Let's update the Guide to Writing Short Stories' price to 10.99 (going on sale). >>>>> cur.execute(update_sql, (10.99, 2)) Querying the Database Generally the most common action performed on a database is a retrieval of some of the data stored in it via a SELECT statement. For this section I will be demonstrating how to use the sqlite3 interface to perform simple SELECT queries. To perform a basic multirow query of the customers table you pass a SELECT statement to the execute(...) method of the cursor object. After this you can iterate over the results of the query by calling the fetchall() method of the same cursor object. >>> cur.execute("SELECT id, first_name, last_name FROM customers") >>> results = cur.fetchall() >>> for row in results: ... print(row) (1, 'Alan', 'Turing') (2, 'Donald', 'Knuth') (3, 'Edgar', 'Codd') Lets say you would like to instead just retrieve one record from the database. You can do this by writing a more specific query, say for Donald Knuth's id of 2, and following that up by calling fetchone() method of the cursor object. >>> cur.execute("SELECT id, first_name, last_name FROM customers WHERE id = 2") >>> result = cur.fetchone() >>> print(result) (2, 'Donald', 'Knuth') See how the individual row of each result is in the form of a tuple? Well while tuples are a very useful Pythonic data structure for some programming use cases many people find them a bit hindering when it comes to the task of data retrieval. It just so happens that there is a way to represent the data in a way that is perhaps more flexible to some. All you need to do is set the row_factory method of the connection object to something more suitable such as sqlite3.Row. This will give you the ability to access the individual items of a row by position or keyword value. >>> import sqlite3 >>> con.row_factory = sqlite3.Row >>> cur = con.cursor() >>> cur.execute("SELECT id, first_name, last_name FROM customers WHERE id = 2") >>> result = cur.fetchone() >>> id, first_name, last_name = result['id'], result['first_name'], result['last_name'] >>> print(f"Customer: {first_name} {last_name}'s id is {id}") Customer: Donald Knuth's id is 2 Conclusion In this article I gave a brief demonstration of what I feel are the most important features and functionalities of the sqlite3 Python interface to the lightweight single file SQLite database that comes pre-bundled with most Python installs. I also tried to give a few bits of advices regarding best practices when it comes to database programming, but I do caution the new-comer that the intricacies of database programming is generally one of the most prone to security holes at the enterprise level and further knowledge is necessary before such an undertaking. As always I thank you for reading and welcome comments and criticisms below.
https://stackabuse.com/a-sqlite-tutorial-with-python/
CC-MAIN-2019-43
refinedweb
2,094
54.12
Modular Hash Function Sooner or later a programmer will need to write a function for hashing a string: #include <stdint.h> #include <string.h> int64_t INITIAL_VALUE = 43; int64_t hash64(char *s, int64_t a, int64_t p) { size_t i; int64_t hash = INITIAL_VALUE; for (i = 0; i < strlen(s); ++i) { hash = (a * hash + s[i]) % p; } return hash; } The technique illustrated above is a modular hash function. If we have a byte string, where ci is the value of the i-th byte, then we can map each string to an integer with this function:(1) The function is injective if no strings have leading null bytes. The multiplicative hash function technique uses Horner's method to calculate the polynomial efficiently, and takes the modulus of the number to reduce the final hash code to the desired size. However, notice that the C code does not use the number of possible values for a byte as the value to insert in the place of the polynomial indeterminate. Instead it allows the caller to set this value. What are good choices of a and p. Knuth's constraints If we hash M values using a hash function with N distinct values, what is the chance of a collision? If each of the hash values is equally likely, then it is(2) where the approximation uses a first order Taylor series expansion of ex. This calculation is also known as the birthday problem, since it can be used to determine the chance that a set of randomly chosen people share the same birthday. The Chi-squared test can be used to check whether a hash function is fair. Suppose that M values are hashed into N buckets, and let Xi be the number of values that hash to the i-th bucket. Compute this statistic:(3) The statistic has a Chi-squared distribution with N – 1 degrees of freedom. Using R to get the p-value: hash.test = function(bins) { m = sum(bins) n = length(bins) statistic = 0 for (bin in bins) { statistic = statistic + (bin - m/n)^2 / (m/n) } 1 - pchisq(statistic, n - 1) } > hash.test(c(2,6,2,4,2,3,1)) [1] 0.434485 Fast Hash Functions multiplicative hash function Using a Mersenne prime to avoid the modulo function. Hash Tables chaining vs linear probing Randomized Hash Function Generate and append a random string to the keys. Is this the same as choosing a random INITIAL_VALUE? Diffusion property: h(m) gives no information about h(n) when m ≠ n. Cryptographic Hash Functions $ echo foo | cksum 3915528286 4 $ echo foo | openssl md5 d3b07384d113edec49eaa6238ad5ff00 $ echo foo | openssl sha1 f1d2d2f924e986ac86fdf7b36c94bcdf32beec15 $ echo foo | openssl dgst -sha256 b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c cksum is a 32-bit cyclic redundancy check. md5 is 4 times faster than sha1. Google researchers find a SHA1 collision.. Families of Hash Functions family of hash functions aka universal hashing testing the independence of two hash functions Bloom Filters Bloom filters are an example of why it is useful to have a family of hash functions. A Bloom filter is a bit array and a set of n hash functions. Each time a value is stored in the Bloom filter, we compute the n hash codes and set the corresponding bits to 1. To check whether a value is stored in a Bloom filter, we check the same n bits. If any of the bits are zero, we know for certain that the value is not stored in the Bloom filter. Chance that an item is really in the Bloom filter if all bits are set… Minhashing Suppose that we have a set P of n elements. If we enumerate the elements of P, then the subsets of P can be represented by vectors of length n, where the i-th component of the vector is 1 if the element is in the subset and 0 otherwise. For each permutation of P, there is a hash function of subsets of P which is the index of the first element in the subset according to the permutation. The Jaccard similarity of two sets is(4) The chance that two sets have the same minhash is equal to their Jaccard similarity. Here is some code illustrating how to perform a Fisher-Yates shuffle, which is an efficient way to generate a random permutation: #!/usr/bin/env python3 import random import sys if len(sys.argv) != 2: raise Exception('USAGE: {} N') n = int(sys.argv[1]) nums = list(range(n)) output = [] while nums: j = random.sample(nums, 1)[0] output.append(j) nums.remove(j) print(output) Variance on estimates of Jaccard similarity. Locality Sensitive Hashing If documents are represented by sets of features (so that Jaccard similiarty is well-defined), then finding all similar documents can be accomplished by a brute force O(m2) search. Locality sensitive hashing is a faster way to find the similar documents. Given m documents and n independent hash functions, we choose r × b = n. That is, we group the hash functions into b bands of size r. For each band, we compute the hash functions in the band to get a signature. All documents with the same signature in a band are hashed together and are candidate pairs. analysis of the technique; plot chance of being a candidate pair vs Jaccardsim LSH Amplification Suppose that we have a distance metric d and we want to find all "candidate pairs", which are pairs of items within a certain distance threshold. Suppose also that we have a family of independent hash functions which are related to the distance metric so as to give rise the following definition: A (d1, d2, p, q) family is a family of hash functions such that 0 ≤ d1 ≤ d2 ≤ 1 and p > q and if d(x, y) ≤ d1 then the probability that (x, y) is a candidate pair is at least p and if d(x, y) ≥ d2 then the probability that (x, y) is a candidate pair is at most q. A (d1, d2, p, q) family gives rise via the AND construction on r hash functions to a (d1, d2, pr, qr) family. A (d1, d2, p, q) family gives rise via the OR construction on b hash functions to a (d1, d2, 1-(1-p)b, 1 - (1 - q)b) family. Flajolet-Martin Algorithm Another application of a family of hash functions is estimating the number of distinct items in a stream without keeping a complete list of the items in memory. The idea is that for each hash function, we hash the values as they arrive, and we keep track of the hash value with the largest number of rightmost zero bits. If R is the number of rightmost zero bits for that value, then 2R is our estimate for the number of distinct values seen so far in the stream. The hash values must be sufficiently large. If a value hashed to all zeros, that would possibly represent overflow. If N distinct values are expected, then log2(N) bits are probably sufficient. If we use a single hash function, then our estimate will be a power of two. We can get a more accurate estimate by using multiple hash functions. When combining them, we cannot use the mean, since it will be biased by large values. The median, meanwhile, will always be a power of two. Hence we divide the hash functions into groups of size k, take the mean of each of those, and then the median of the groups. Hash Trick as used by Vowpal Wabbit
http://clarkgrubb.com/hashing
CC-MAIN-2019-18
refinedweb
1,255
68.6
of queues, cues and q-s Apparently Bungie released a game this week. This caused something of a massive line to form around the company store on Tuesday. And in a first for the physical company store, they opened the store at 7am to help handle the load. That means they shot down my suggestion of catapulting copies to people as they drove through the parking lot. Someday people will learn my system of projectile product delivery does work. I did succumb to the Halo Hype, or as I'm calling it now, Hypelo, but I took more of the lazy geek way. Also in a first, they allowed the online company store to take pre-orders. Sad thing here is the warehouse is in Georgia, so not only were we dinged for sales tax, but also with a shipping charge. But in this case, a scant 3 dollars per copy of Hypelo 3 to have it delivered to my door on launch day seemed to outweigh the standing in line factor. And what with gas prices, it probably would have cost me 3 bucks just to drive my car over there to get one copy. I'll find some time this weekend to play I'm sure. Unless it's clear out at night... My Hypelo 3 party was really halted by another package which arrived at my door the same day. The Sky 6. A really nice bit of software for us astro-geeks. Of course, I played with that right away. I had to drop it on the laptop and annoy my wife endlessly by sitting in the family room swinging my 'scope around (it was raining outside *sniff*) from my laptop. Sadly no Xbox gamer points for figuring out the LAT/LONG of my backyard, but f'eh, I'll survive. Maybe now I'll be able to find Cygnus easier the next time it's clear out. So on to some actual UMDF stuff. One of my new favorite things about UMDF is more of the framework's built in "cost of working with a driver" code. In this case, request cancellation! Let's take a random request from the hybrid sample driver; case IOCTL_ALLOCATE_ADDRESS_RANGE: { if ((sizeof (ALLOCATE_ADDRESS_RANGE) > InputBufferSizeInBytes) || (sizeof (ALLOCATE_ADDRESS_RANGE) > OutputBufferSizeInBytes)) { wdfRequest->CompleteWithInformation ( HRESULT_FROM_WIN32 (ERROR_INSUFFICIENT_BUFFER), sizeof (ALLOCATE_ADDRESS_RANGE)); } else { IRequestCallbackRequestCompletion * completionRoutine = \ RequestCompletion (); wdfRequest->SetCompletionCallback ( completionRoutine, (PVOID) &ControlCode); completionRoutine->Release (); hrSend = SubmitAsyncRequestToLower (wdfRequest); if (FAILED (hrSend)) { wdfRequest->Complete (hrSend); } } } // case IOCTL_ALLOCATE_ADDRESS_RANGE break; You'll notice the decided lack of a cancellation routine here. In this particular case, we're not doing any heavy lifting (allocation or request specific operations in the driver) so we don't have to worry about any clean up on cancellation. And since the request was submitted through the framework via the test application, the framework will actually handle cancellation for us. That leaves us with merely submitting the request to the next lower driver, and how easy is this? HRESULT CVDevParallelQueue::SubmitAsyncRequestToLower ( __in IWDFIoRequest * Request) { HRESULT hr; Request->FormatUsingCurrentType(); hr = Request->Send ( m_kmdfIoTarget, 0, // Submits Asynchronous 0); return (hr); } Again, since the request was submitted through the framework, we've already got a nice IWDFIoRequest object, so voila! After a little water is added to the package, we get request submission! The real only gotcha here is the completion routine. UMDF doesn't allow you to insert a pointer to a specific completion routine like KMDF / WDM do, so should we need to do anything specific for that request on completion, we'll have to actually create a dispatch routine within that completion routine to submit it to the appropriate child routine. You'll see that all that detail in the hybrid sample driver, which I'm happy to say, is finally all cued up for the next WDK release. Thanks to everybody who has emailed to commiserate about my trials of astronomy and ASCII. It's always nice to know you're not alone. Even though when staring out at space, only my neighbors can hear me scream... *Currently playing - Beatles, Ticket to Ride
https://docs.microsoft.com/en-us/archive/blogs/888_umdf_4_you/of-queues-cues-and-q-s
CC-MAIN-2020-24
refinedweb
674
61.26
Asked by: Need an example of how to make a simle Pie Chart. Question None of the examples I've found work. Here is what I am looking for: using System.Web.UI.DataVisualization.Charting; A simple C# pie chart showing two percentages with decimals, i.e. 51.40% Dogs and 48.60% Cats. Thank you. Wednesday, September 27, 2017 2:36 PM All replies Please refer to the following: Thanks, ATWednesday, September 27, 2017 2:46 PM - None of those examples work.Wednesday, September 27, 2017 3:16 PM Hello vbMark, Please tell me which program you are using , is it windows form, wpf , ASP.NET , xamarin , etc. And what result you want to achieve, simple or colourful?, September 28, 2017 7:31 AM Hi Neil, ASP.NET MVC. A simple C# pie chart showing two percentages with decimals, i.e. 51.40% Dogs and 48.60% Cats. Dogs series green and Cats series red. Numbers outside the chart with lines pointing to the chart area. Thank you.Thursday, September 28, 2017 2:21 PM
https://social.microsoft.com/Forums/en-US/e9d12e43-178f-4871-a0be-ff3c984ecabc/need-an-example-of-how-to-make-a-simle-pie-chart?forum=Offtopic
CC-MAIN-2022-40
refinedweb
176
77.84
Handling Player Input in Cross-Platform Games with LibGDX LibGDX is an open source Java library for creating cross-platform games and applications. In my last tutorial I covered getting the library setup and ready for development. This tutorial will cover handling player input with LibGDX, bringing interactivity to your cross-platform games. Different Types of Input States. Handling input is a simple task. If a key is down, then that key should register as being true and likewise for when a key is up. This simplicity can lead to a lot of problems, especially for games. What you want is a simple way to ask libGDX if a specific key is either pressed, down, or released. But what is the significance of these three different key states? They describe how a user interacted with their keyboard. - Pressed: A key was activated and it only triggers for one frame. - Down: A key is currently held down. - Released: A key was released and it only triggers for one frame. If you had logic within the rendering function such as: if (key.pressed()) print("pressed") if (key.down()) print("down") if (key.released()) print("released") You would expect to see something like: ..."pressed" //triggered one frame ..."down" //constant amongst all frames ..."down" ..."down" ..."released" //triggered one from The same logic could also apply to touch events. The Input Manager Class There are many different ways you could implement this class, but I suggest you strive to be as ‘Object Oriented’ as possible which means every key and touch event will be it’s own object. The first class to declare is the Input Manager itself (which will implement libGDX’s InputProcessor class). public class InputManager implements InputProcessor { } If you’re following along from the previous tutorial and are using IntelliJ, right click within this class and click generate -> override methods. (Due to code bloat, I’m going to leave those generated methods out of these code snippets for now.) Next, create an inner class called InputState. NOTE: All three classes below will be inner classes of InputManager. public class InputState { public boolean pressed = false; public boolean down = false; public boolean released = false; } Now create the KeyState and TouchState classes which extend InputState); } } The TouchState class is more complicated due to not only having Pressed, Down, and Released events, but also storing which finger is in control of this object and storing coordinates and a displacement vector for gesture movements. Here is the base structure of the entire class (excluding stub override methods): public class InputManager implements InputProcessor { public class InputState { public boolean pressed = false; public boolean down = false; public boolean released = false; }); } } } Since every key/touch event will be it’s own object, you need to store these within an array. Add two new array fields to InputManager. public Array<KeyState> keyStates = new Array<KeyState>(); public Array<TouchState> touchStates = new Array<TouchState>(); And add a constructor for InputManager to initialize these two objects. public InputManager() { //create the initial state of every key on the keyboard. //There are 256 keys available which are all represented as integers. for (int i = 0; i < 256; i++) { keyStates.add(new KeyState(i)); } //this may not make much sense right now, but I need to create //atleast one TouchState object due to Desktop users who utilize //a mouse rather than touch. touchStates.add(new TouchState(0, 0, 0, 0)); } Now you’re ready to control the logic of these objects utilizing the override methods generated. Starting with public boolean keyDown(int keycode). @Override public boolean keyDown(int keycode) { //this function only gets called once when an event is fired. (even if this key is being held down) //I need to store the state of the key being held down as well as pressed keyStates.get(keycode).pressed = true; keyStates.get(keycode).down = true; //every overridden method needs a return value. I won't be utilizing this but it can be used for error handling. return false; } Next is public boolean keyUp(int keycode). @Override public boolean keyUp(int keycode) { //the key was released, I need to set it's down state to false and released state to true keyStates.get(keycode).down = false; keyStates.get(keycode).released = true; return false; } Now that you have handled the logic for key state, you need a way to access these keys to check their states. Add these three methods to InputManager: //check states of supplied key public boolean isKeyPressed(int key){ return keyStates.get(key).pressed; } public boolean isKeyDown(int key){ return keyStates.get(key).down; } public boolean isKeyReleased(int key){ return keyStates.get(key).released; } Everything is taking shape, so now is a good moment to try and explain how everything fits together. libGDX represents every key as an integer used to grab an element from the keyStates array which returns a single keyState object. You can check the state of that key ( Pressed, Down, or Released), which is a boolean. You’re almost ready to test drive the InputManager but there a few more things to setup. Right now, the only state that is actually functional is the Down state. When a key is Down. Down is set to true. And when a key is Up, Down is set to false. The other two states, Pressed and Released don’t work properly yet. These are Trigger keys which should only trigger for one frame and then remain false. Now, once they’re activated, they continue to be True. You need to implement a new method for InputManager called update which will correctly handle the states of Pressed and Released. public void update(){ //for every keystate, set pressed and released to false. for (int i = 0; i < 256; i++) { KeyState k = keyStates.get(i); k.pressed = false; k.released = false; } } The Update method allows you to adjust the InputManager post frame. Anything that needs to be reset will go within this method. To use the InputManager, you need to instantiate it within the core namespace of the project. Within MyGdxGame, add InputManager as a new field. public class MyGdxGame extends ApplicationAdapter { InputManager inputManager; } Within the MyGdxGame constructor, instantiate the InputManager. For InputManager to be useful, you need to pass it to libGDX’s InputProcessor and libGDX will use this new object to process input events. Replace the current create method with the below: @Override public void create () { inputManager = new InputManager(); Gdx.input.setInputProcessor(inputManager); } Now you need to call InputManager‘s update method at the end of the MyGdxGame‘s render method. You should call update() after all game logic involving input has been processed. Replace the current create method with the below: @Override public void render () { inputManager.update(); } Here is how MyGdxGame currently looks: public class MyGdxGame extends ApplicationAdapter { SpriteBatch batch; Texture img; OrthographicCamera camera; InputManager inputManager; @Override public void create () { batch = new SpriteBatch(); img = new Texture("badlogic.jpg"); camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight()); inputManager = new InputManager(); Gdx.input.setInputProcessor(inputManager); } @Override public void render () { camera.update(); batch.setProjectionMatrix(camera.combined); Gdx.gl.glClearColor(1, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); batch.draw(img, 0, 0); batch.end(); inputManager.update(); } } Test this code to make sure everything works as intended. @Override public void render () { //testing key states... if (inputManager.isKeyPressed(Input.Keys.A)) { System.out.println("A Pressed"); } if (inputManager.isKeyDown(Input.Keys.A)) { System.out.println("A Down"); } if (inputManager.isKeyReleased(Input.Keys.A)) { System.out.println("A Released"); } camera.update(); batch.setProjectionMatrix(camera.combined); Gdx.gl.glClearColor(1, 0, 0, 1); Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); batch.begin(); batch.draw(img, 0, 0); batch.end(); inputManager.update(); } You should now have more accurate input events. The InputManager is almost finished, all that’s left is implementing the logic for handling touch events. Luckily, KeyStates and TouchStates function in the same way. You will now be utilizing the generated touch event methods. Note: This method is heavily commented so I may repeat myself. @Override public boolean touchDown(int screenX, int screenY, int pointer, int button) { //There is always at least one touch event initialized (mouse). //However, Android can handle multiple touch events (multiple fingers touching the screen at once). //Due to this difference, the input manager will add touch events on the fly if more than one //finger is touching the screen. //check for existing pointer (touch) boolean pointerFound = false; //get altered coordinates int coord_x = coordinateX(screenX); int coord_y = coordinateY(screenY); /; } } //this pointer doesn't exist yet, add it to touchStates and initialize it. if (!pointerFound) { touchStates.add(new TouchState(coord_x, coord_y, pointer, button)); TouchState t = touchStates.get(pointer); t.down = true; t.pressed = true; t.lastPosition.x = coord_x; t.lastPosition.y = coord_y; } return false; } One of the main differences between KeyStates and TouchStates is the fact that all KeyStates get initialized within the constructor of InputManager due to a keyboard being a physical device. You know all the available keys for use, but a touch event is a cross-platform event. A touch on Desktop means the user has clicked the mouse, but a touch on Android means the user has touched the screen with their finger. On top of that, there can only be one touch event on Desktop (mouse), while there can be multiple touches on Android (finger/s). To handle this problem, add new TouchStates on-the-fly depending on what the user does. When a user triggers a touch event, you first want to convert it’s screenX and screenY values to something more usable. The coordinate 0,0 is at the upper-left of the screen, and the Y axis flipped. To accommodate this add two simple methods to InputManager to convert these coordinates to make 0,0 at the center of the screen and the Y axis right-side up which will work better with a SpriteBatch object. private int coordinateX (int screenX) { return screenX - Gdx.graphics.getWidth()/2; } private int coordinateY (int screenY) { return Gdx.graphics.getHeight()/2 - screenY; } Now you need a way to check if this is a new touch event or if InputManager has already discovered this event and added it to the list of TouchStates. To clarify, the user is holding their device and touches the screen with their right-hand thumb, this will be the first touch event within touchStates and InputManager knows that it can handle at least one touch event. If the user decides to touch the screen with both left-hand and right-hand thumbs, the second touch event will instantiate a new TouchState and add it to touchStates on the fly. InputManager now knows it can process two touch events. All this works from the pointer variable passed to this method. The pointer variable is an integer that allows you to distinguish between simultaneous touch events. Every time a user fires a touch event, use pointer to check whether to create a new TouchState or not. boolean pointerFound = false; Loop through all available TouchState objects to check if this pointer already exists and set the states of this TouchState. /; } } Notice how if a TouchState is not found, you create a new one and append it to the touchStates array. Also notice how to access the TouchState. touchStates.add(new TouchState(coord_x, coord_y, pointer, button)); TouchState t = touchStates.get(pointer); If one finger touches the screen, pointer is 0 which represents the one finger. Adding that new TouchState object to an array automatically set’s up the correct index value. It’s position in the array is 0 which is the same value as it’s pointer. If a second finger touches the screen, it’s pointer will be 1. When the new TouchState is added to the array, it’s index value is 1. If you need to find which TouchSate is which, use the given pointer value to find the correct TouchState from touchStates. Now handle the logic for when a finger is no longer touching the screen. @Override public boolean touchUp(int screenX, int screenY, int pointer, int button) { TouchState t = touchStates.get(pointer); t.down = false; t.released = true; return false; } NOTE: I chose not to remove TouchState objects even if they’re no longer used. If a new finger count has been discovered, I think it should stay discovered and ready to be re-used. If you feel differently about this, within the touchUp method is where you would implement the logic for removing a TouchState. Now calculate the displacement vector of a TouchState to handle finger gestures. @Override public boolean touchDragged(int screenX, int screenY, int pointer) { //get altered coordinates int coord_x = coordinateX(screenX); int coord_y = coordinateY(screenY); TouchState t = touchStates.get(pointer); //set coordinates of this touchstate t.coordinates.x = coord_x; t.coordinates.y = coord_y; //calculate the displacement of this touchstate based on //the information from the last frame's position t.displacement.x = coord_x - t.lastPosition.x; t.displacement.y = coord_y - t.lastPosition.y; //store the current position into last position for next frame. t.lastPosition.x = coord_x; t.lastPosition.y = coord_y; return false; } Like before with key states, you need to add three methods to access these touch states plus two other methods for getting the coordinates and displacement of a TouchState. //check states of supplied touch public boolean isTouchPressed(int pointer){ return touchStates.get(pointer).pressed; } public boolean isTouchDown(int pointer){ return touchStates.get(pointer).down; } public boolean isTouchReleased(int pointer){ return touchStates.get(pointer).released; } public Vector2 touchCoordinates(int pointer){ return touchStates.get(pointer).coordinates; } public Vector2 touchDisplacement(int pointer){ return touchStates.get(pointer).displacement; } You now have the same problem as before with KeyStates, when only the Down state was working properly. But that makes perfect sense, you never added logic to reset the trigger events. Returning to the update method, add the following code: for (int i = 0; i < touchStates.size; i++) { TouchState t = touchStates.get(i); t.pressed = false; t.released = false; t.displacement.x = 0; t.displacement.y = 0; } Here is the full update method again: public void update(){ for (int i = 0; i < 256; i++) { KeyState k = keyStates.get(i); k.pressed = false; k.released = false; } for (int i = 0; i < touchStates.size; i++) { TouchState t = touchStates.get(i); t.pressed = false; t.released = false; t.displacement.x = 0; t.displacement.y = 0; } } Note: The displacement vector is reset back to (0, 0) on every frame. Using the InputManager You’ve already seen an example of using the InputManager for checking KeyStates. Now I will explain using TouchStates. If the application is running on Desktop and only utilizes the mouse then use the only TouchState avaiable from touchStates. if (inputManager.isTouchPressed(0)) { System.out.println("PRESSED"); } if (inputManager.isTouchDown(0)) { System.out.println("DOWN"); System.out.println("Touch coordinates: " + inputManager.touchCoordinates(0)); System.out.println("Touch displacement" + inputManager.touchDisplacement(0)); } if (inputManager.isTouchReleased(0)) { System.out.println("RELEASED"); } If the application is running on Android, you need to loop over all available TouchStates and handle each one individually, but checking a TouchState directly is error prone. inputManager.touchStates.get(0); Cover this with a new method added to InputManager: public TouchState getTouchState(int pointer){ if (touchStates.size > pointer) { return touchStates.get(pointer); } else { return null; } } Now you have a cleaner way of accessing a TouchState with simple error checking. inputManager.getTouchState(0); If this was in a for loop to begin with, the bounds check of the for loop would essentially be free from error checking. for (int i = 0; i < inputManager.touchStates.size; i++) { TouchState t = inputManager.getTouchState(i); System.out.println("Touch State: " + t.pointer + " : coordinates : " + t.coordinates); } Game On This Input Manager was designed for controlling entities within a game environment. I first designed the basics of this class a while back for ‘variable’ jump heights for my game character. I later extended the class to handle input for Android with multiple touch events which is currently used in a game called Vortel. I needed a way to allow the user to control an entity with their fingers, no matter where their fingers were on the screen, and no matter how many fingers were on the screen at once. I achieved this affect by accumulating all displacement vectors from every TouchState, and then applying this new vector to the entity. If you have some free time, please check it out. I used the variable jump height feature in another game called Ooze. I used the different states of TouchState ( Pressed, Down, and Released) to accurately control how high the character jumps depending on how long the user is touching the screen. I hope this tutorial was useful to you with your game ideas and look forward to your questions and comments below.
https://www.sitepoint.com/handling-player-input-in-cross-platform-games-with-libgdx/
CC-MAIN-2019-47
refinedweb
2,778
57.98