text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Hi all, I am having a very annoying problem with my code. I am using SDL 2.00 with openGL on visual studio 2013. The problem starts when I try to create a windows using SDL_CreateWindow. The code compiles just fine but when i run it, the window is all white and won't accept any input whatsoever and also the mouse cursor appears as "waiting". So it seems like it gets into an infinite loop somewhere. I narrowed it down. It seems like ChoosePixelFormat gets into dead lock state so the window won't keep running after it. The thing is, I developed 2 other 2D games with the same build(SDL2.00 OpenGL VS2013) with no problems. I can still run those games and SDL_CreateWindow won't cause any issues. The code is so simple too. its supposed to draw a triangle in the window. Code :#include "stdafx.h" void init() { glClearColor(0.0, 0.0, 0.0, 1.0); glMatrixMode(GL_PROJECTION); glLoadIdentity(); gluPerspective(45, 640.0 / 480.0, 1.0, 500.0); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); } void display() { glClear(GL_COLOR_BUFFER_BIT); glBegin(GL_TRIANGLES); glVertex3f(0.0, 2.0, -5.0); glVertex3f(-2.0, -2.0, -5.0); glVertex3f(2.0, -2.0, -5.0); glEnd(); } int main(int argc, char* argv[]){ SDL_Init(SDL_INIT_EVERYTHING); SDL_Window * window; window = SDL_CreateWindow("OpenGLTest", 300, 300, 640, 480, SDL_WINDOW_SHOWN | SDL_WINDOW_OPENGL); init(); while (true){ display(); SDL_GL_SwapWindow(window); } return 0; } Code :#pragma once #include <iostream> #include <SDL.h> #include <Windows.h> #include <gl/GL.h> #include <gl/GLU.h> #define PI 3.14159265 using namespace std; The whole code is posted. The problem isnt the rest of the code. I removed everything else and left only the part where SDL_CreateWindow is called and the problem keeps happening. Anyone has any idea why this is happening and how i can fix it? Thanks! -Omer
https://www.opengl.org/discussion_boards/showthread.php/183492-ChoosePixelFormat-DeadLock-SDL_CreateWindow-gets-stuck?p=1257193&mode=threaded
CC-MAIN-2015-22
refinedweb
306
69.68
Join devRant Pipeless API From the creators of devRant, Pipeless lets you power real-time personalized recommendations and activity feeds using a simple APILearn More Search - "workaround" -. Open the patient and grumble about how tightly-coupled his spleen and circulatory system are. Examine the spleen’s outer surface to see if there are any obvious problems. Inform him that several of his organs are very old and he should consider replacing them with something more modern. 9. Compare the spleen to some pictures of spleens online. If anything looks different, try to make it look the same. 10. Remove the spleen completely. See if the patient’s leg is still broken. If so, put the spleen back in. 11. Tell the patient that you’ve noticed his body is made almost entirely out of cellular tissue, whereas most bodies these days are made out of cardboard. Explain that cardboard is a lot easier for beginners to understand, it’s more forgiving of newbie mistakes, and it’s the tissue franca of the Internet. Ask if he’d like you to rebuild his body with cardboard. It will take you longer, but then his body would be future-proof and dead simple. He could probably even fix it himself the next time it breaks. 12. Spend some time exploring the lymph nodes in the patient’s abdominal cavity. Accidentally discover that if the patient’s leg is held immobile for six weeks, it gets better. 13. Charge the patient for six weeks of work️13 - -29 - Boss asked one of our senior Linux engineers to look into an issue. When restarting a service, the person renting the server would get the errors e-mailed which occurred during the restart (it wasn't reachable so the service trying to reach it would throw errors). Although this was very expected behavior, the client found it unacceptable! Boss asked the engineer to look into this while acknowledging that it was probably an impossible task except for if you'd just disable logging but then all debug info would be gone which we frequently use to debug stuff ourselves. After two minutes: E (engineer): fixed it. V (boss): wait, WHAT? HOW?! I'VE BEEN TRYING TO FIND A FIX OR WORKAROUND FOR AGES! E (with the mist nonchalant/serious face): I disabled the log mailing in the configuration. B: 😶 B: . B: . B: . B: 😂 Everyone was laughing. The client thanked us for 'solving' it xD - - Kudos to the devrant team for epic response time! <50min from bug report on Twitter to workaround on prod!2 - - Known IPs for github (add to /etc/hosts) 192.30.253.113 github.com 192.30.253.113 ssh.github.com more.27 - Client informs dev team that he is upgrading all his machines from IE8 to IE11. ~700 lines of hacky js and css previously commented as "//Workaround for IE" removed. Pure satisfaction.6 - - Hell fucking yeah. Reported GNOME bug. Waited 2 weeks and holy crap we got a workaround it already and soon a patch will be in place hopefully. Running the workaround and even on fuckin 1.6Ghz its smooth as fucking butter. This is continuing to the previous GNOME rant. Yup lappy thats smooth as butter.4 - Sad story: User : Hey , this interface seems quite nice Me : Yeah, well I’m still working on it ; I still haven’t managed to workaround the data limit of the views so for the time limit I’ve set it to a couple of days Few moments later User : Why does it give me that it can’t connect to the data? Me : what did you do ? User : I tried viewing the last year of entries and compare it with this one Few comas later 100476 errors generated False cert authorization Port closed Server down DDOS on its way1 - I’m trying to add digit separators to a few amount fields. There’s actually three tickets to do this in various places, and I’m working on the last of them. I had a nightmare debugging session earlier where literally everything would 404 unless I navigated through the site in a very roundabout way. I never did figure out the cause, but I found a viable workaround. Basically: the house doesn’t exist if you use the front door, but it’s fine if you go through the garden gate, around the back, and crawl in through the side window. After hours of debugging I eventually discovered that if I unlocked the front door with a different key, everything was fine… but nobody else has this problem? Whatever. Onto the problem at hand! I’m trying to add digit separators to some values. I found a way to navigate to the page in question (more difficult than it sounds), and … I don’t know what view is rendering the page. Or what controller. Or how it generates its text. The URL is encrypted, so I get no clues there. (Which was lead dev’s solution to having scrapeable IDs instead of just, you know, fixing them). The encryption also happens in middleware, so it’s a nightmare to work through. And it’s by the lead dev, so the code is fucking atrocious. The view… could be one of many, and I don’t even know where they are. Or what layout. Or what partials go into building it. All of the text on the page are “resources” — think named translations that support plus nested macros. I don’t know their names, and the bits of text I can search for are used fucking everywhere. “Confirmation number” (the most unique of them) turns up 79 matches. “Fee” showed up in 8310 places before my editor gave up looking. Really. The table displaying the data, which is what I actually care about, isn’t built in JS or markup, but is likely a resource that goes through heavy processing. It gets generated in a controller somewhere (I don’t know the resource name so I can’t find it), and passed through several layers of “dynamic form” abstraction, eventually turned into markup, and rendered as a partial template. At least, that’s how it worked in the previous ticket. I found a resource that looks right, and there’s only the one. I found the nested macros it uses for the amount and total, and added the separators there… only to find that it doesn’t work. Fucking dead end. And i have absolutely nothing else to go on. Page title? “Show” URL? /~LiolV8N8KrIgaozEgLv93s… Text? All from macros with unknown names. Can’t really search for it without considerable effort. Table? Doesn’t work. Text in the table? doesn’t turn up anything new. Legal agreement? There are multiple, used in many places, generates them dynamically via (of course) resources, and even looking through the method usages, doesn’t narrow it down very much. Just. What the fuck? Why does this need to be so fucking complicated? And what genius decided “$100000.00” doesn’t need separators? Right, the lot of them because separators aren’t used ANYWHERE but in code I authored. Like, really? This is fintech. You’d think they would be ubiquitous. And the sheer amount of abstraction? Stupid stupid stupid stupid stupid.11 - One of my former coworkers was either completely incompetent or outright sabotaging us on purpose. After he left for a different job, I picked up the project he was working on and oh my God it's a complete shitshow. I deleted hundreds of lines of code so far, and replaced them with maybe 30-40 lines altogether. I'm probably going to delete another 400 lines this week before I get to a point where I can say it's fixed. He defined over 150 constants, each of which was only referenced in a single location. Sometimes performing operations on those constants (with other constants) to get a result that might as well have been hard-coded anyway since every value contributing to that result was hard-coded. He used troublesome and messy workarounds for language defects that were actually fixed months before this project began. He copied code that I wrote for one such workaround, including the comment which states the workaround won't be necessary after May 2019. He did this in August, three months later. Two weeks of work just to get the code to a point where it doesn't make my eyes bleed. Probably another week to make it stop showing ten warnings every time it builds successfully, preventing Jenkins from throwing a fit with every build. And then I can actually implement the feature I was supposed to implement last month - - The power of saying fuck you to apple to become a supporter when you have both android and iPhone - - Me when I read about another JS framework, gulp module or workaround for using ES2016 features today - So I had this conversation with my boss yesterday... Me: Hey, I found this bug in the other team's code that has a major impact on what we're trying to do. Can you ask them to look into it? Boss: No, I don't want to be the one who has to tell them there's a major bug in their code. Find a workaround. M: But... It isn't really a major bug, it just has a big impact on our side of things. B: Workaround! Fuck bosses who value how they think they look to other devs over a day of my time. Fuck.4 - - - Sadly my Unofficial devRant extension for firefox will no longer hold the last page you visited (When you comment it will no longer open the comment screen after you close it) because in the new API the HTML page reloads once you close and open it. I may find a workaround but once version 2.0.1 is out this will not work. Also the window will be smaller because there is a bug in firefox that limits the size of extension panel to heigh 580px (original one used 850)3 - - JUST BECAUSE I DISABLED GOOGLE PLAY SERVICES DOES *NOT* MEAN MY PHONE CAN'T TURN ON FOR ALARMS! i do have a workaround, but that feature was still nice10 - - Developer proposing a solution to architect-- Workaround😵 Architect asking a developer to use workaround-- Architect Solution 😎2 - Quick rant, I dont have time. I have no idea how the fuck but I managed my IDE to show me that it's confused if my class "PackModel" is "PackModel" or "PackModel" (I have only one definition if you are hands first to ask). its few years and first time when I see shit like that. Fun fact, it was working OK until I used getter that was returning another object and than IDE got absolutely lost. I had to use workaround in middle of nowhere as shown on image and suddenly its back fine with it. Not like it's returned by function hard typed and PHPDoc typed to return instance of this very object and in other scopes it just works... It's Jetbrains so Im confused, it's robust IDE ;-;...8 - - Poor fellas. I feel what you're going through. Look how spontaneous they were in mentioning the workaround part. Be vocal even if you have nothing to offer. 🤷🏻♀️2 - - Reinvent the wheel workaround: Rename the wheel. Since you cannot reinvent it, because it already exists and it's working, just call it "rolling circular object" and get away with it. Why in hell I should call packages "namespaces"? A package can contain a lot of things actually. It's visual, you can create an icon for it. Namespace? is that even a word? Next time I rent an apartment, I'll ask if they have a FuckSpace instead of a bedroom. "Well, it's a small studio, there is only a ShitSpace and an Open EatAndFuckSpace" Will do - Why should I make my fucking code messier and write some bullshit workaround just because you’re a stubborn idiot who refuses to upgrade your fucking operating system and browser. ARGHGHGGH1 - - My JS app can crash IE11. Totally reproducable. Had a fun debugging session to find out how this is triggered. Happens inside a deeply recursive call in a library I'm using which redraws the DOM. Found a hacky workaround to avoid that as I see no real solution. It's not like I'm responsible for fixing IE. These are the days where I'm happy I'm mainly a backend dev...4 - So we have this really annoying bug in our system that customers keep complaining about. I've explained in detail, multiple times, why the part they think is a bug is not a bug and the workaround they keep asking me to apply doesn't make sense, won't fix the issue, and won't even stick (the system will notice that the record they want me to delete has been removed and it will repopulate itself, by design). I've told them what we need to do as an actual workaround (change a field on the record) and what we need to do to properly fix the bug (change the default value on the record and give proper controls to change this value through the UI). We've had this conversation at least three times now over a period of several months. There is a user story in the backlog to apply the actual fix, but it just keeps getting deprioritized because these people don't care about bug fixes, only new features, new projects, new new new, shiny shiny new. Today another developer received yet another report of this bug, and offered the suggested workaround of deleting the record. The nontechnical manager pings everyone to let them know that the correct workaround is to delete the record and to thank the other developer for his amazing detective work. I ping the developer in a private channel to let him know why this workaround doesn't work, and he brushes it off, saying that it's not an issue in this case because nobody will ever try to access the record (which is what would trigger it being regenerated). A couple hours later, we get a report from support that one of the deleted records has been regenerated, and people are complaining about it. 🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄🙄3 -. - This is just a temporary workaround. I will come back to this and fix this later. NEVER HAPPENS :)3 - * - - A fellow student just showed me his fancy GET request "workaround"... i was just like: "i'm so gonna POST this on devrant"5 - Me to QA: I need an urgent signature. QA: That costs a cake. Me: If we baked cake at our company, that would have too much sugar, and we would use more salt as workaround.4 - One of my screens is currently not woking (out of three). BECAUSE THIS STUPID ADAPTER IS NOT WORKING!!! So I hang the ERD for my side projects on it... it's nice to have it on hand imo😎 - - - - Not actually solving the problem in an error and instead implementing a workaround thinking "no one's going to read this code anyway" when I'm actually just condemning my future self to a lot of hell.1 - Stackoverflow #1 Me posting question about how to prevent error. User1: You answered your question. Its because of the error. Me: I know. And want it gone. User1: Proposes working yet somehow horrible workaround. Me: Yes, that works, already did that. But i want to know why it happens. User: Your question says you want a solution and it is one. Me: One that doesn't solve the problem. User2: Just give up. Don't try to find a better one. Stackoverflow 2: UserQ: Question how to...? Me: Use this and that. UserR: That is not an answer, so i downvoted and requested review. I don't know a second community that is anti-encouraging like SO. - I really hate it when I work on a user story consisting only of a cryptic title: "Implement feature X". Esp. when I missed planning during a holiday and can only wonder who in their right mind would have given it 3 points. Why thank you. Sometimes, just pulling the acceptance criteria out of somebody's nose takes days. It doesn't get better once I realize that not all external dependencies have been properly resolved. It's worse if there are other departments involved, as then you get into politics. Me: "We are dependent on team X to deliver Y before we should have even planned this ticket. I'm amazed that our team was even able to estimate this ticket as I would have only raised a question mark during estimation meeting. We could have thrown dices during estimation as the number would have been as meaningful and I'd have more time to actually figure out what we should be doing." Dev lead / PO: "I understand. But let's just do <crazy workaround that will be live until hell freezes over> temporarily." It's borderline insane how much a chaotic work flow is branded as agile. Let's call it scrum but let's get rid of all the meaningful artefacts that make it scrum. - Quick android gif workaround The fix is to open the gif in fullscreen, wait 5 seconds, close it and open it again. It will load the second time.1 - Let's implement a workaround on top of another workaround which is a workaround of another ... workaround to the nth level. Just to avoid touching an ancient code¡¡¡3 - - - Another case of "couldn't you've told me BEFORE I started working on this?" I'm making a training in Unity3D for a client, and they want it to integrate with their learning management system (LMS). I made a simple SCORM package that gets the userID and then uses a custom URL scheme to launch the app with the user data from the LMS. Tested on multiple platforms, all works perfectly fine. Than, during a meeting, some says they "can't download it". I ask "which browser are you using?" and he says "I'm using the LMS app." ... the LMS has an APP? So I start figuring out ways to launch the system default browser from within a app's embedded browser, and nothing so far has worked. target=_system, nope. all kinds of weird javascript shenanigans, but the LMS APP browser just blocks everything. Probably to protect students from malicious software that could be injected in courses, but now I'm stuck trying to find a workaround for this too. But what sucks the most is that this happened DAYS BEFORE THE DEADLINE! Well, at least the deadline won't be my problem anymore soon. - got into development only a short time ago. My mother paired up with a partner who was a dev making some serious cheddar when I was just barely not a teenager anymore, while I was working shitty low-wage customer service gigs. Honestly, the only reason either of them could give me for doing it was the money. A couple years went by, I was extremely fortunate: found a job within 6 weeks of finishing a year-long program at the local technical college which only yielded me a basic cert. By that time, my mother's partner had long lost their job, and I had paid their rent (twice my own) on two separate occasions. I went from usually having about a hundred dollars after bills to last me until next paycheck to five times that. A couple more years go by, I'm doing pretty well supporting my own family now (my wife and child, not anyone else) and somehow doing way better now than the people who spurred me ever did. I no longer have a reason to compulsively check my bank account out of worry that I'm overdrawn. Now I'm locked in an endless battle in my mind to find a correction for every flaw in my life, or at the very least a workaround. I go to bed and wake up thinking about the same things: my work. Buuuutttt.... My family has everything they could ever need and more. So I guess I could say the support I got from my family was: * an initial nudge in the "right" direction * a reality check on what the industry can be like * a sentence to eternal damnation by changing my paradigm on everything -) - - Hacky code post: property var value: if(editing || !editing) function(othervalue) I am coding in a property system that only updates an expression if a variable involved emits a signal. Well the function won't get called unless I change a value. I also want to "value" to be updated when editing changes. I also want it to update even if editing doesn't change. So the or-ing of the state of editing. The result is the function gets called when object is initialized. Then when the editing flag changes it gets updated again. The workaround of doing this is much worse and requires more hacky code. So I am resigned myself to just or-negating the editing value.10 - *. - Me: hey guys there seems to be an integration problem? Vendor: hacky workaround Me: no that’s a hacky workaround, please check the integration Vendor (sometime later): yeah so we made an engineering change like, a while back, which fundamentally alters stuff. Me: so shit is fucked because you don’t think customers should be informed ahead of release? Dude do you not want our money? - My buddy needs website, I helped him to setup localhost, and told him I can work on backend if he wants, he is my good friend so I could even do it for free if he shares some income later down the line but he would need frontend anyway, so someone would need to do it. Some consideration later it turns out that there is noone to do the FE, so that is trashed and instead Im solving wordpress problems for him whenever he has them.. God... I forgotten why I hate wordpress... Every single time I helped him I felt like Im doing workaround to workaround and it somewhat works like its supposed to... Edit: fixed wording11 - What a vast and great eco-system we have (refering to js and npm) almost every time I am trying to use two libs and combine them to work together some shit happens. So I wanted to have lean and good written code without introducing unnecessary renders and logic. Ended up doing just that because 'we know about issue with our library, many users told us that, too bad we wont fix that shit', so I feel like a 'workaround' developer at some hackathon right now! - My another attempt to write something in rust and I wanted to try tauri as it’s promising competition to electron. Why use tauri not electron? Cause in tauri you can write rust plugins that you can interact with directly from javascript without stupid http servers, mangling code and stuff. From javascript point you only call one method and pass object with arguments into it. So it took me entire weekend to create draft plugin to interact with sqlite database. Documentation of tauri is inconsistent. I understand that cause it’s young project and plugins architecture changed frequently. Moreover my knowledge of rust is near to zero. But overall it was worth it. I like what I achieved. I can pass sql query and execute it inside mutex guarded singleton. Like I said before I like it cause I can call my plugin directly from javascript. I know I wasn’t fancy with my implementation. I just created file database connection from json configuration and managed to receive string sql statements. I just print results with rust to console for now. I will add sending back results later this week. For me tauri is already better then electron cause code is clear and there is no workaround ( except singleton with connection - cause of limitations of my rust knowledge ). Live long tauri and fuck you electron. if you’re interested.2 - - We once had to make another wordpress multilanguage site on a different domain but it should use the whole footer from previous site full with its images and sitemaps. The client said "just make it look like a copy of a footer". This would require us to copy the whole footer for 4 different languages every time somebody makes a change in the original site. So the workaround we did in the end was to make a specific page in original wordpress site which only returns the footer. In the new wordpress site we made a code which scrapes that whole page and puts its contents on the footer of the new wordpress site. It worked perfectly and we never needed to copy the whole footer again because it was "dynamic". - I discovered a startup bug in SpamPD on OpenBSD that hasn't been addresses for a few years:... It looks like everyone who has encountered it so far is using the workaround/hack I have listed in that post. I decided to get on the spampd issue tracker and the devs there have already started looking at it. Glad I can contribute to the community. - google pixel 2 vs iphone x fails recap pixel 2: -... -... -... -... recap iphone x: -... -... -... -... -... -... -... -... -... - Holy shit I just figured out a pretty decent (and way to complex) workaround to implement specialization in Rust, I'm so hyped2 - - Right now, in my student job. Create a multi-department + multi-lingual intranet, only using SharePoint and out of the box features. Can't add my own scripts or web parts. I'm now the master of workarounds1 - rather spend 3 hours trying to find a workaround to a bug in Windows instead of rebooting which would take 3 minutes. - - - Wouldn't it be awkward as hell if we would all only just rant in our native tongue ... Comment below your workaround, if needed 🤣 - you spend hours figuring out a workaround for a problem and then when you finally solve it. It's beautiful. But then you find out the solution's a few clicks away on stackoverflow 😡😡😡 - -.... -. - As I started learning React, I found the allure of declarative style of programming appealing. I try to avoid maintaining multiple state variables for data that can be derived from the base state itself that's stored in the redux store. It works wonders when I have to change something; as I just need to make changes to one function in the utils folder and that change is implemented across the whole app, rather than change the instances everywhere as was the case when I initially started working on this project after the previous dev left. But I see myself redefining a lot of computed values everywhere, and if I just try to define them in the root component, I'll end up with a huge list of props being passed to a couple of components. Shifting it to the utils folder helps a bit, but then I find myself defining even the simplest of array filtering methods to the utils folder. Is this need to define computed values everywhere a trade-off that you need to accept when you write declarative code, or is there a workaround/solution I am missing? As of now, the code-base is much better than how it used to be when they had a literal Java dev work on React with their knowledge of Java patterns being used in a framework that is the polar opposite of OOP, but I still feel like there's room for improvement in this duplication of computed values.2 -. - - - >where is the code that is in charge of that? >that's the infrastructure dependencies job >oh cool. So what if I want to do X Y Z? >the infra doesn't do that > well who is on charge of infra? >oh that was {guy that left 2 weeks ago} and anyway that code existed for AGES So now I'm drowning in foreign spaghetti because people didn't want to disturb the holy infra and just made workaround in the services themselves. Good thing I got my nylon overalls for maximum shit protection - - I wanted to fix ugly unittests of parser's function that uses some shitty workaround instead of intended unittest.mock.mock_open, but it turns out mock_open cannot mock different content for each file. Cause you know, noone needs it anyway. - - I hope people don't use container to workaround dependency issues. That's like buying a new computer because you don't know how to upgrade a tiny software. We should learn how to manage things properly, not wrapping shits up and pretend it is clean4 - reading the project's code, following "save" callback in jvascript, i find this comment "IMPORTANT : this is a workaround to solve memory leak" and below it code that basically removes all elements from th DOM and adds them again. so basically, someone could not find a cause to a memory leak and decided "solve it" in a specific place by reloading almost an entire page - - - Maybe it's a stupid question, but why the fuck there are still path length limitations in Windows in 2020??? Why there isn't a virtual automatic workaround for this issue? Sometimes I just can't make it shorter...3 -. - Only when the latest feature is implemented, the last bugfix and the last workaround are found, the last unit test is written, the latest CI/CD pipeline done, the customer guy does manual testing and acceptance tests on the staging server and let's them pass and a few days later it's pushed to production... You will be reminded (again) that shitty customers do exist! A customer is the least capable person to tell you what the customer actually wants and is also the least trustworthy person to test the features he requested... Holy fuck come on! Just test that shit on the staging Server! One Look could have already shown you that that's Not what you expected! I checked the logs after that and yup you guessed correctly... The said endpoints weren't even used on staging, only on production...1 -.. hated when edge people could scribble on any web pages and other chromium based browsers user can't. So i created a workaround. Please have a look and comment what you feel about - :< - WINDOWS?! Why don't you fucking recognise more than one partition?! It's there! You know that! Why don't you display it then?! I did find a workaround but seriously? I only use windows for gaming, ubuntu for everything else, so it wasn't that important.5 - Found bug in legacy code with comment "4 days to release workaround, works predictably". Added "No, it doesn't!" and committed to main branch before I start reworking the entire spaghetti mess of a codebase - This got me fucked up. Listen yo. So we have this issue on our modal right. The issue keeps poppin. It's a hotfix because its in prod. So my senior and I were on it. After a few hours, I showed him the part of the code that is buggy. It's 50 lines of code of nested if-else, else-if. And so we're still fighting it. He redid everything since we're using angular2 he did a subject, behavior-subject all that bs and I was still trying to understand what's the bug, because it's happening on the second click and so I did my own thing and found the cause bug and showed it to him, its this: setTimeout( () => {}, 0) the bootstrap-modal doesn't allow async inside it (I dont why, its in the package). So he explained to me why it's there. So I did my own thing again and find a workaround which I did, a one-line of angular property, showed it to him he didn't accept it because we'll still have to redo it with subjects and he was on it. I said ok. Went back to my previous issue. The director came in and ask for a fixed, my senior came up to me and told me to push my fix. Alright no problem. So we good now. Went back to our thing bla bla bla, then got an email that we will have a meeting, So we went, bla bla bla. The internal team wants a support for mobile, senior said no problem bla bla bla, after the meeting he approaches me and said (THIS IS WHERE IT GOT FUCKED UP) we wont be supporting bootstrap4 anymore because of the modal issue and since we're going to support mobile and BOOTSTRAP4 grid system is NONINTUITIVE we are moving to material design because the grid system is easier. I was blown away man. we have more than 100 components and just because of that modal and mobile support shit he decided to abandon bootstrap. Mater of fact its the modal its his code. I'm not expert in frontend but I looked at the material design implementation its the same thing other than the class names. OHHH LAWD!3 - Please give your opinions/experience, I'm tired of meetings with the legal team. :( Can a proprietary software link to a GPL-licensed dependency during runtime? Can it do if its GPL "with Classpath Exception"? What about CDDL? Case in point - propriety Java web app needs javax.* libraries (JakartaEE components) at runtime (from project or JavaEE app server), but they are licensed under GPL. Can they be used or is there any workaround - Salt is awesome, no questions about that. YAML is giving me headaches, but it's my fault and eventually I'll get used to it. But this being my first encounter with jinja, WHO THE HELL THOUGHT THIS PIECE OF CRAP DESERVES TO LIVE! Instead of writing python inside {% %} you have to write kinda pseudo python and I just spend over hour trying to build list inside for. Yes, great idea, scoping fors, and lets make it hard to escape scoping, beacause it would be a shame if somebody COULD ACTUALLY DO SOMETHING USEFULL. I though several times of using different renderer, but I want to keep my code readable and mainrainable and in the end I found a workaround, but still, Jinja, YOU SUCK!4 - - To the physicists among us: I'm in the process of planning a very lightweight mini drone that flies with the help of radio signals that's surrounding it. I'm targeting 100 MHz. I calculated the amount of energy (Joules) of it and just when I did change the formula from E=h*f to Power=E/time I realized that time is basically going to be infinite and now I am stuck finding a solution to this. I can't just use a potential infinite amount of time in this equation and need a workaround. Any help is appreciated.22 -'s your best hacky bodge that actually worked? Mine was probably adding an item in a list to iterate off when searching for matching accounts or else it crashed. Love you node.js. -. - - Using Perl as a workaround for shellscript. Tried learning shellscript, but looks too messy for my taste.4 - - - Working with nightly builds and concept tech is such a fucking hassle... I'm currently working on a WebAssembly proof of concept where I need to generate a unique id, but since threading is currently not supported (rust and webassembly) I cant use half of the libraries currently out. And the ones that does work... guess what... are not compatible with the nightly build of the compiler I'm using for Rust. Just fucking end me. The legit only workaround I can find is to make a server request and get the unique id from there... piece of cunt software...I need a break 😑 -? - .. - So I'm working on this little personal project (also as a way to keep my "skills" sharpened for the coming semester), that first started as a workaround to do this other thing, and I wanted to develop it and make it a full fledged thing, with a GUI (or something that resembles it, I don't know how to make GUIs yet, and IDK why is it a 3rd grade thing) and all instead of existing just in the IDE's terminal. When it was on the workaround stage it was just this ugly monster, with only 2 things one could do, but it worked. Now I'm going for a more polished thing and it's starting to break on me, and in places I didn't expect it to LoL It's like I'm on a boat and I'm getting leaks from everywhere. Arr gotta get me a bucket and save me boat from sinking - Soooo how does one manually change password here on devrant? Is the "forgot password" workaround the only way, or am I missing a hidden "change your password" button?1 - - Has anyone tried the workaround to get nvidia web drivers work on mojave? (without acceleration) and has anyone tried running android emulator on that? Does it lag?1 - Worked around a major blocker using iframes inside modals. The 8 hours saved will become 8 days extra in Web Developer Hell when I have to refactor it fully! Pray for me :/ - js = pia I wish you could manually trigger validation of a field using Parsleyjs ...always looking for a clean workaround -
https://devrant.com/search?term=workaround
CC-MAIN-2021-39
refinedweb
6,323
71.65
fcrepo 1.1 API implementation for the Fedora Commons Repository platform FCRepo, a client for the Fedora Commons Repository Info This package provides access to the Fedora Commons Repository. From the Fedora Commons Website:. This package uses WADL, Web Application Description Language to parse the WADL file that comes with Fedora so it offers support for the complete REST API. On top of that a more highlevel abstraction is written, which will be demonstrated in this doctest. This package has been written for FedoraCommons 3.3 and 3.4, it has not been tested with older versions. REST API documentation can be found in the Fedora wiki. This package can be installed using buildout which will also fetch the Fedora installer, and install it locally for testing purposes. Use the following steps to install and run this doctest: python2.6 bootstrap.py ./bin/buildout ./bin/install_fedora ./bin/start_fedora ./bin/test Using the fcrepo package Connecting to the Repository To connect to the running Fedora, we first need a connection. The connection code was largely copied from Etienne Posthumus (“Epoz”) duraspace module. >>> from fcrepo.connection import Connection >>> connection = Connection('', ... username='fedoraAdmin', ... password='fedoraAdmin') Now that we have a connection, we can create a FedoraClient: >>> from fcrepo.client import FedoraClient >>> client = FedoraClient(connection) PIDs A Fedora object needs a unique PID to function. The PID consists of a namespace string, then a semicolon and then a string identifier. You can create your own PIDs using a random UUID, but you can also use the nextPID feature of Fedora which returns an ascending number. >>> pid = client.getNextPID(u'foo') >>> ns, num = pid.split(':') >>> ns == 'foo' and num.isdigit() True We can also get multiple PIDs at once >>> pids = client.getNextPID(u'foo', numPIDs=10) >>> len(pids) 10 This method returns unicode strings or a list of unicode strings if multiple PIDs are requested. The client abstraction provides wrappers around the ‘low-level’ API code which is generated from the WADL file. Here’s the same call through the WADL API: >>> print client.api.getNextPID().submit(namespace=u'foo', format=u'text/xml').read() <?xml ...?> <pidList ...> <pid>...</pid> </pidList> So the client methods call the methods from the WADL API, parse the resulting xml and uses sensible default arguments. This is how most client method calls work. Normally you would never need to access the WADL API directly, so let’s move on. Creating Objects Now that we can get PIDs we can use them and create a new object: >>> pid = client.getNextPID(u'foo') >>> obj = client.createObject(pid, label=u'My First Test Object') You can’t create an object with the same PID twice. >>> obj = client.createObject(pid, label=u'Second try?') Traceback (most recent call last): ... FedoraConnectionException: ... The PID 'foo:...' already exists in the registry; the object can't be re-created. Fetching Objects Off course it’s also possible to retrieve an existing object with the client: >>> obj = client.getObject(pid) >>> print obj.label My First Test Object You’ll get an error if the object does not exist: >>> obj = client.getObject(u'foo:bar') Traceback (most recent call last): ... FedoraConnectionException: ...HTTP code=404, Reason=Not Found... Deleting Objects Deleting objects can be done by calling the delete method on an object, or by passing the pid to the deleteObject method on the client. >>> pid = client.getNextPID(u'foo') >>> o = client.createObject(pid, label=u'About to be deleted') >>> o.delete(logMessage=u'Bye Bye') >>> o = client.getObject(pid) Traceback (most recent call last): ... FedoraConnectionException: ...HTTP code=404, Reason=Not Found... Note that in most cases you don’t want to delete an object. It’s better to set the state of the object to deleted. More about this in the next section. Object Properties In the previous examples we retrieved a Fedora object. These objects have a number of properties that can be get and set: >>> obj.label u'My First Test Object' >>> date = obj.lastModifiedDate >>> obj.label = u'Changed it!' The last line modified the label property on the Fedora server, the lastmodified date should now have been updated: >>> obj.lastModifiedDate > date True >>> obj.label u'Changed it!' Setting properties can also be used to change the state of a FedoraObject to inactive or deleted. The following strings can be used: - A means active - I means inactive - D means deleted>>> obj.state = u'I' Let’s try a non supported state: >>> obj.state = u'Z' Traceback (most recent call last): ... FedoraConnectionException: ... The object state of "Z" is invalid. The allowed values for state are: A (active), D (deleted), and I (inactive). Setting the modification or creation date directly results in an error, they can not be set. >>> obj.lastModifiedDate = date Traceback (most recent call last): ... AttributeError: can't set attribute An ownerId can also be configured using the properties: >>> obj.ownerId = u'me' >>> print obj.ownerId me Object DataStreams A Fedora object is basicly a container of Datastreams. You can iterate through the object to find the datastream ids or call the datastreams method: >>> print obj.datastreams() ['DC'] >>> for id in obj: print id DC >>> 'DC' in obj True To actually get a datastream we can access it as if it’s a dictionary: >>> ds = obj['DC'] >>> ds <fcrepo.datastream.DCDatastream object at ...> >>> obj['FOO'] Traceback (most recent call last): ... FedoraConnectionException: ...No datastream could be found. Either there is no datastream for the digital object "..." with datastream ID of "FOO" OR there are no datastreams that match the specified date/time value of "null". Datastream Properties A datastream has many properties, including label, state and createdDate, just like the Fedora object: >>> print ds.label Dublin Core Record for this object>>> print ds.state A There are different types of datastreams, this one is of type X, which means the content is stored inline in the FOXML file . FOXML is the internal storage format of Fedora. >>> print ds.controlGroup X A datastream can be versionable, this can be turned on or off. >>> ds.versionable True The datastream also has a location, which is composed of the object pid, the datastream id, and the version number >>> ds.location u'foo:...+DC+DC1.0' Let’s change the label, and see what happens: >>> ds.label = u'Datastream Metadata' >>> ds.location u'foo:...+DC+DC.1'>>> ds.label = u'Datastream DC Metadata' >>> ds.location u'foo:...+DC+DC.2' The location ID changes with every version, and old versions of the datastream are still available. The fcrepo client code contains no methods to retrieve old versions of datastreams or view the audit trail of objects. The methods that implement this are available in the WADL API though. Fedora can create checksums of the content stored in a datastream, by default checksums are disabled, if we set the checksumType property to MD5, Fedora will generate the checksum for us. >>> ds.checksumType u'DISABLED' >>> ds.checksumType = u'MD5' >>> ds.checksum # the checksum always changes between tests u'...' There are some additional properties, not all of them can be set. Have a look at the REST API Documentation for a full list >>> ds.mimeType u'text/xml' >>> ds.size > 0 True >>> ds.formatURI u'' Getting and Setting Content - 1 We can also get and set the content of the datastream: >>> xml = ds.getContent().read() >>> print xml <oai_dc:dc ...> <dc:title>My First Test Object</dc:title> <dc:identifier>foo:...</dc:identifier> </oai_dc:dc>>>> xml = xml.replace('My First Test Object', 'My First Modified Datastream') >>> ds.setContent(xml) Getting and Setting Content - 2 We can also get and set the content directly, as if it is a dictionarie of dictionaries >>> print obj['DC']['title'] [u'My First Modified Datastream'] >>> obj['DC']['title'] = [u'My Second Modified Datastream'] >>> print obj['DC']['title'] [u'My Second Modified Datastream'] Special Datastream: DC This DC datastream that is always available is actually a special kind of datastream. The Dublin Core properties from this XML stream are stored in a relational database which can be searched. The values are also used in the OAIPMH feed. Fedora uses the legacy /elements/1.1/ namespace which contains the following terms: - contributor - coverage - creator - date - description - format - identifier - language - publisher - relation - rights - source - subject - title - type View the Dublin Core website for a description of these properties. Since editing the Dublin Core XML data by hand gets a bit cumbersome, the DC datastream allows access to the DC properties as if the datastream is a dictionary: >>> ds['title'] [u'My Second Modified Datastream'] This can also be used to set values: >>> ds['subject'] = [u'fcrepo', u'unittest'] >>> ds['description'].append(u'A test object from the fcrepo unittest')>>> for prop in sorted(ds): print prop description identifier subject title >>> 'subject' in ds True To save this, we call the setContent method again, but this time with no arguments. This will make the code use the values from the dictionary to generate the XML string for you >>> ds.setContent() >>> print ds.getContent().read() <oai_dc:dc ...> ... <dc:description>A test object from the fcrepo unittest</dc:description> ... </oai_dc:dc> Inline XML Datastreams Let’s try adding some datastreams, for example, we want to store some XML data: >>> obj.addDataStream('FOOXML', '<foo/>', ... label=u'Foo XML', ... logMessage=u'Added an XML Datastream') >>> obj.datastreams() ['DC', 'FOOXML'] >>> print obj['FOOXML'].getContent().read() <foo></foo> Managed Content Datastreams We can also add Managed Content, this will be stored and managed by fedora, but it’s not inline xml. The data is stored in a seperate file on the harddrive. We do this by setting the controlGroup param to M >>> obj.addDataStream('TEXT', 'Hello!', label=u'Some Text', ... mimeType=u'text/plain', controlGroup=u'M', ... logMessage=u'Added some managed text') >>> obj.datastreams() ['DC', 'FOOXML', 'TEXT'] >>> ds = obj['TEXT'] >>> ds.size == 0 or ds.size == 6 # this does not work in Fedora 3.3 True >>> ds.getContent().read() 'Hello!' This is perfectly fine for small files, however when you don’t want to hold the whole file in memory you can also supply a file stream. Let’s make a 3MB file: >>> import tempfile, os >>> fp = tempfile.NamedTemporaryFile(mode='w+b', delete=False) >>> filename = fp.name >>> fp.write('foo' * (1024**2)) >>> fp.close() >>> os.path.getsize(filename) 3145728... Now we’ll open the file and stream it to Fedora. We then read the whole thing in memory and see if it’s the same size: >>> fp = open(filename, 'r') >>> ds.setContent(fp) >>> fp.close() >>> content = ds.getContent().read() >>> len(content) 3145728... >>> os.remove(filename) Externally Referenced Datastreams For large files it might not be convenient to store them inside Fedora. In this case the file can be hosted externally, and we store a datastream of controlGroup type E (Externally referenced) >>> obj.addDataStream('URL', controlGroup=u'E', ... location=u'') >>> obj.datastreams() ['DC', 'FOOXML', 'TEXT', 'URL'] This datastream does not have any content, so trying to read the content will result in an error >>> ds = obj['URL'] >>> ds.getContent() Traceback (most recent call last): ... FedoraConnectionException:..."Error getting" . We can get the location though: >>> ds.location u'' The last of the datastream types is an externally referenced stream that redirects. This datastream has controlGroup R (Redirect Referenced) >>> obj.addDataStream('HOMEPAGE', controlGroup=u'R', ... location=u'') >>> obj.datastreams() ['DC', 'FOOXML', 'TEXT', 'URL', 'HOMEPAGE'] This datastream works the same as an externally referenced stream. Deleting Datastreams A datastream can be deleted by using the python del keyword on the object, or by calling the delete method on a datastream. >>> len(obj.datastreams()) 5 >>> ds = obj['HOMEPAGE'] >>> ds.delete(logMessage=u'Removed Homepage DS') >>> len(obj.datastreams()) 4 >>> del obj['URL'] >>> len(obj.datastreams()) 3 Another Special Datastream: RELS-EXT Besides the special DC datastream, there is another special datastream called RELS-EXT. This datastream should contain flat RDFXML data which will be indexed in a triplestore. The RELS-EXT datastream has some additional methods to assist in working with the RDF data. To create the RELS-EXT stream we don’t need to supply an RDFXML file, it will create an empty one if no data is send. >>> obj.addDataStream('RELS-EXT') >>> ds = obj['RELS-EXT'] Now we can add some RDF data. Each predicate contains a list of values, each value is a dictionary with a value and type key, and optionally a lang and datatype key. This is identical to the RDF+JSON format. >>> from fcrepo.utils import NS >>> ds[NS.rdfs.comment].append( ... {'value': u'A Comment set in RDF', 'type': u'literal'}) >>> ds[NS.rdfs.comment] [{'type': u'literal', 'value': u'A Comment set in RDF'}] >>> NS.rdfs.comment in ds True >>> for predicate in ds: print predicate To save this we call the setContent method without any data. This will serialise the RDF statements to RDFXML and perform the save action: >>> ds.setContent() >>> print ds.getContent().read() <rdf:RDF ...> <rdf:Description rdf: <rdfs:comment>A Comment set in RDF</rdfs:comment> </rdf:Description> </rdf:RDF> We are not allowed to add statements using the DC namespace. This will result in an error. I suppose this is because it should be set through the DC datastream. >>> ds[NS.dc.title].append({'value': u'A title', 'type': 'literal'}) >>> ds.setContent() Traceback (most recent call last): ... FedoraConnectionException: ... The RELS-EXT datastream has improper relationship assertion: dc:title. We can also use RDF to create relations between objects. For example we can add a relation using the Fedora isMemberOfCollection which can be used to group objects into collections that are used in the OAIPMH feed. >>> colpid = client.getNextPID(u'foo') >>> collection = client.createObject(colpid, label=u'A test Collection') >>> ds[NS.fedora.isMemberOfCollection].append( ... {'value': u'info:fedora/%s' % colpid, 'type':u'uri'}) >>> ds.setContent() >>> print ds.getContent().read() <rdf:RDF ...> <rdf:Description rdf: <fedora:isMemberOfCollection rdf:</fedora:isMemberOfCollection> <rdfs:comment>A Comment set in RDF</rdfs:comment> </rdf:Description> </rdf:RDF>>>> print ds.predicates() ['', 'info:fedora/fedora-system:def/relations-external#isMemberOfCollection'] Notice that the Fedora PID needs to be converted to an URI before it can be referenced in RDF, this is done by prepending info:fedora/ to the PID. Service Definitions and Object Methods Besides datastreams, a Fedora object can have methods registered to it through service definitions. We don’t provide direct access to the service definitions but assume that all the methods have unique names. >>> obj.methods() ['viewObjectProfile', 'viewMethodIndex', 'viewItemIndex', 'viewDublinCore']>>> print obj.call('viewDublinCore').read() <html ...> ... <td ...>My Second Modified Datastream</td> ... </html> Searching Objects Fedora comes with 2 search functionalities: a fielded query search and a simple query search. They both search data from the DC datastream and the Fedora object properties. The fielded search query can search on the following fields: - cDate - contributor - coverage - creator - date - dcmDate - description - format - identifier - label - language - mDate - ownerId - pid - publisher - source - state - subject - title - type - rights Fedora has a query syntax where you can enter one or more conditions, separated by space. Objects matching all conditions will be returned. A condition is a field (choose from the field names above) followed by an operator, followed by a value. The = operator will match if the field’s entire value matches the value given. The ~ operator will match on phrases within fields, and accepts the ? and * wildcards. The <, >, <=, and >= operators can be used with numeric values, such as dates. Examples: - pid~demo:* description~fedora - Matches all demo objects with a description containing the word fedora. - cDate>=1976-03-04 creator~*n* - Matches objects created on or after March 4th, 1976 where at least one of the creators has an n in their name. - mDate>2002-10-2 mDate<2002-10-2T12:00:00 - Matches objects modified sometime before noon (UTC) on October 2nd, 2002 So let’s create 5 objects which we can use to search on: >>> pids = client.getNextPID(u'searchtest', numPIDs=5) >>> for pid in pids: client.createObject(pid, label=u'Search Test Object') <fcrepo.object.FedoraObject object at ...> <fcrepo.object.FedoraObject object at ...> <fcrepo.object.FedoraObject object at ...> <fcrepo.object.FedoraObject object at ...> <fcrepo.object.FedoraObject object at ...> Now we’ll search for these objects with a pid search, we also want the label returned from the search. >>> client.searchObjects(u'pid~searchtest:*', ['pid', 'label']) <generator object searchObjects at ...> The search returns a generator, by default it queries the server for the first 10 objects, but if you iterate through the resultset and come to the end the next batch will automatically be added. To illustrate this we will query with a batch size of 2: >>> results = client.searchObjects(u'pid~searchtest:*', ['pid', 'label'], ... maxResults=2) >>> result_list = [r for r in results] >>> len(result_list) >= 5 True >>> result_list[0]['pid'] [u'searchtest:...'] >>> result_list[0]['label'] [u'Search Test Object'] As shown we actually get more results then the max of 2, but the client asks Fedora for results in batches of 2 while we iterate through the results generator. When we want to search in all fields, we just have to drop the condition ‘pid:’, and specify ‘terms=True’. The search is case-insensitive, and use * or ? as wildcard. >>> client.searchObjects(u'searchtest*', ['pid', 'label'], terms=True) <generator object searchObjects at ...> RDF Index Search Besides searching the DC datastream in the relational database, it’s also possible to query the RELS-EXT datastream through the triplestore in the SPARQL language. Let’s find all objects that are part of the collection we created above in the RELS-EXT datastream example >>> sparql = '''prefix fedora: <%s> ... select ?s where {?s fedora:isMemberOfCollection <info:fedora/%s>.} ... ''' % (NS.fedora, colpid) >>> result = client.searchTriples(sparql) >>> result <generator object searchTriples at ...> >>> result = list(result) >>> len(result) 1 >>> result[0]['s']['value'] u'info:fedora/foo:...' Other output formats and query languages can be specified as parameters, by default only SPARQL is supported. The searchTriples method also has a flush argument. If you change a RELS-EXT datastream in Fedora, the triplestore is actually not updated! You have to set this flush param when you’re searching to true to make sure the triplestore is updated. By default Fedora sets the flush parameter to false which is understandable for performance reasons but can be very confusing. This library sets the param to true by default, which is not always very efficient, but you are sure the triplestore is up to date. FCRepo Changes 1.1 (2010-11-04) - Added simple searching (via searchObject), courtesy of Steen Manniche - Removed buildout versions from buildout.cfg - Fixed bug when decoding empty text - Updated readme 1.0 (2010-09-30) - Added support for Fedora3.4 - Changed contact info, switched from Subversion to Mercurial Changes - Fixed bug triggered when retrieving DC datastream values that contain no text fcrepo 1.0b2 (2010-05-17) Changes - Full Windows compatibility through patches from Owen Nelson - Bugfix in datastreams handling fcrepo 1.0b1 (2010-05-03) Changes - Initial Code release with working API-A, API-M search and index search. - Downloads (All Versions): - 9 downloads in the last day - 85 downloads in the last week - 261 downloads in the last month - Author: Infrae - License: BSD - Categories - Package Index Owner: gbuijs, jsproc - DOAP record: fcrepo-1.1.xml
https://pypi.python.org/pypi/fcrepo
CC-MAIN-2015-32
refinedweb
3,171
50.33
Apollo Angular 0.11 Explore our services and get in touch. New name, AoT support, TypeScript improvements, and Angular 4 readiness We recently released version 0.11.0of apollo-angular and a lot of things have changed and improved since our last update blog! First, here is a overview list of the main changes: - New name - Support for Apollo Client 0.8+ - AoT support - Multiple Apollo Client instances in a single app - TypeScript improvements and TypeScript codegen - Apollo Client Developer Tools - ES6 Modules and Tree Shaking - Support for Angular 4 So let’s dive into it! New name As you all know, the term “Angular 2” is no longer a thing, now it’s just “Angular”, without the version suffix (#justAngular). So we renamed the package to be “apollo-angular”. We really wanted “angular-apollo” to match up with react-apollo, but it was already taken. That means that from now on the “angular2-apollo” package is deprecated. We’ve applied this rule to the service as well, so there is no “Angular2Apollo” anymore. It’s just Apollo. Simpler and more convenient. The migration process is very simple: import { Angular2Apollo } from 'angular2-apollo'; class AppComponent { constructor(apollo: Angular2Apollo) {} } import { Apollo } from 'apollo-angular'; class AppComponent { constructor(apollo: Apollo) {} } Apollo Client 0.8 We’ve updated our dependency to apollo-client 0.8 which includes a lot of improvements in size and performance. check out the full list here. Ahead-of-Time Compilation One of the most interesting features of Angular is Ahead-of-Time compilation. Angular’s compiler converts the application, components, and templates to executable JavaScript code at build time. AoT compilation improves the size of the app as well as the performance and stability thanks to static-code analysis at build time. To support this feature, we had to change the way of providing ApolloClient to ApolloModule. Instead of using an instance of ApolloClient directly, it has to be wrapped with a function. Here’s an example: import { ApolloClient } from 'apollo-client'; import { ApolloModule } from 'apollo-angular'; const client = new ApolloClient(); function provideClient() { return client; } ApolloModule.withClient(provideClient); Multiple clients We’re happy to introduce a support for multiple clients. Yes, it’s now possible to use many instances of the ApolloClient inside of the ApolloModule, meaning you can call multiple GraphQL endpoints from your single client app. The use case for this feature came from some of our enterprise users. Some common use cases are when working with a server endpoint as well as a 3rd party API, or in case you are calling multiple microservices GraphQL endpoints from on client app. While it’s always better to have all of your data in one GraphQL service to be able to get all of the data you need in one request, sometimes it’s unavoidable to have to call multiple APIs. We decided to make it as an optional feature and to implement it in a way that doesn’t break your existing app. Let me explain how it works. First thing, you need to define a function to return a map of clients: function provideClients() { return { default: defaultClient, extra: extraClient, }; } Then, you can use a new method of ApolloModule called forRoot to provide clients, so you can use it in your app: ApolloModule.forRoot(provideClients) The Apollo service has now two new methods: use() and default(). First one takes a key of a client you want to use, second one returns the default client. class AppComponent { apollo: Apollo; ngOnInit() { // uses the defaultClient this.apollo.watchQuery({...}).subscribe(() => {}); // works the same as the one above this.apollo.default().watchQuery({...}).subscribe(() => {}); // uses the extraClient this.apollo.use('extra').watchQuery({...}).subscribe(() => {}); } } It’s important to know that if you want to have a default client, you need to use defaultas a key. More control Apollo-Client and Apollo-Angular both are written in TypeScript, but we still had room for improvements for our users, here are some of them. Thanks to the recent change we were able to take advantage of TypeScript’s feature called Generic Types. It’s now possible to easily define an interface of the “data” property in methods like watchQuery, query, mutation and many more. This gives you more control over the code, making it more predictable and easier to prevent bugs. Take a look at an example. const query = gql` query currentUser { currentUser { name } } `; interface User { name: string; } interface Data { currentUser: User; } class AppComponent { apollo: Apollo; currentUser: User; ngOnInit() { this.apollo .watchQuery<Data>({ query }) .subscribe((result) => { this.currentUser = result.data.currentUser; }); } } It’s very helpful and convenient, especially when used with RxJS operators. You gain more control over the result modifications. But there are even more improvements! Let’s talk about observables In Angular world, we commonly use RxJS. Unfortunately, Apollo’s standard observable shim is not compatible with RxJS, so to have the best developer experience, we created the ApolloQueryObservable. They both behave the same, containing the same methods (like refetch for example), except the RxJS support. We recently changed the logic of the ApolloQueryObservable’s generic type. Here’s an example to see how to migrate: class AppComponent { user: ApolloQueryObservable<ApolloQueryResult<Data>>; getUser() { this.user.subscribe((result) => { // result is of type ApolloQueryResult<Data> }); } } class AppComponent { user: ApolloQueryObservable<Data>; getUser() { this.user.subscribe((result) => { // result is of type ApolloQueryResult<Data> }); } } It is human nature to be lazy We love automation, just to avoid keep repeating the same things on and on again. I have a great news for you! As we know, GraphQL is strongly typed, so we have created a tool to generate API code or type annotations based on a GraphQL schema and query documents. This tool is called “apollo-codegen”. Thanks to Robin Ricard’s work, Apollo Codegen now supports TypeScript, so Angular developers no longer have to define types for their queries manually. Better Developer Experience We are happy to announce that Angular integration works great with the Apollo Client Developer Tools. It’s a Chrome DevTools extension for Apollo Client which has 3 main features: - A built-in GraphiQL console that allows you to make queries against your GraphQL server using your app’s network interface directly (no configuration necessary). - A query watcher that shows you which queries are being watched by the current page, when those queries are loading, and what variables those queries are using. - A cache inspector that displays your client-side Redux store in an Apollo-Client-friendly way. You can explore the state of the store through a tree-like interface, and search through the store for specific field keys and values. Try the dev tools in your Angular Apollo app today! ES6 Modules and Tree Shaking App load time is an important part of the overall user experience. Earlier, I talked about AoT compilation, which radically improves performance, but there is still room to speed things up. To make our app even smaller we can use a process called Tree Shaking. It basically follows the trail of import and export statements by statically analyzing the code. This way we get rid of unused parts of the application. As you know, every angular package has a UMD bundle (to support CommonJS and AMD) and a separate space for ES6 Modules. Thanks to recent changes in the apollo-client and apollo-client-rxjs, we do the same, so you can use tree shaking in your app! Ready for the future With the first stable version of Angular, the core team announced a predictable release schedule. It means that every 6 months there’s going to be a new major version of the framework. We have good news! Angular 4.0.0 is now still in beta but it’s fully compatible with Apollo so you don’t have to worry about any breaking changes. Keep improving We are working hard to give Angular developers the best developer experience we can. We want to hear more from you — what should we do next, what can we improve? And if you are really interested in GraphQL, Did you know Apollo is hiring?
https://the-guild.dev/blog/apollo-angular-011
CC-MAIN-2021-31
refinedweb
1,345
55.03
Opened 10 years ago Closed 10 years ago Last modified 10 years ago #969 closed Bug (Fixed) FileFindNextFile @extended = 0 for folder $Recycle.Bin Description Win7 RC1, AutoIt 3.3.1.0 FileFindNextFile returns @extended = 0 (a file) for the hidden, system folder $Recycle.Bin when searching the root of the drive. Ex. - $hFind = FileFindFirstFile("C:\*.*") While 1 $file = FileFindNextFile($hFind) If @error Then ExitLoop ConsoleWrite(@extended & " : " & FileGetAttrib("C:\" & $file) & " : " & $file & @CRLF) WEnd FileClose($hFind) Attachments (0) Change History (4) comment:1 Changed 10 years ago by Jpm - Keywords Win7 RC1 added comment:2 Changed 10 years ago by Jpm comment:3 Changed 10 years ago by Jpm - Milestone set to 3.3.1.1 - Owner set to Jpm - Resolution set to Fixed - Status changed from new to closed Fixed in version: 3.3.1.1 comment:4 Changed 10 years ago by wraithdu I just wanted to add to this, in case the problem you fixed was specific to that directory. I put together a little script to do an extensive search to see if the problem was more widespread. I did a recursive search of my entire C: drive, and out of 121761 files and folders, 2820 folders were incorrectly identified as files by @extended. Again, just adding this as additional info. BTW, this thing whipped through my whole HDD in ~40 seconds. Impressive speed! Script - #include <array.au3> Dim $aErrors[1] = [0], $total = 0 _Find("C:") _ArrayInsert($aErrors, 0, "Total Items: " & $total) $aErrors[1] = "Total Errors: " & $aErrors[1] _ArrayInsert($aErrors, 2, '@extended : attrib "D" : path') _ArrayDisplay($aErrors) Func _Find($path) Local $hFind = FileFindFirstFile($path & "\*.*") While 1 Local $item = FileFindNextFile($hFind) If @error Then ExitLoop Local $v1 = @extended Local $v2 = StringInStr(FileGetAttrib($path & "\" & $item), "D") If $v2 > 0 Then $v2 = 1 If $v1 <> $v2 Then $aErrors[0] += 1 _ArrayAdd($aErrors, $v1 & " : " & $v2 & " : " & $path & "\" & $item) EndIf $total += 1 If $v2 Then _Find($path & "\" & $item) ; recurse directory WEnd FileClose($hFind) EndFunc Guidelines for posting comments: - You cannot re-open a ticket but you may still leave a comment if you have additional information to add. - In-depth discussions should take place on the forum. For more information see the full version of the ticket guidelines here. same thing is true under Vista
https://www.autoitscript.com/trac/autoit/ticket/969
CC-MAIN-2019-22
refinedweb
372
60.04
The JSP specification supports two types of JSP pages: regular JSP pages containing any type of text or markup, and JSP Documents, which are well-formed XML documents; i.e., documents with XHTML and JSP elements. To satisfy the well-formed-ness requirements, JSP directives and scripting elements in a JSP Document must be written with a different syntax than a regular JSP page: <%@ page attribute list %> <jsp:directive.page attribute list /> <%@ include file="path" %> <jsp:directive.include <%! declaration %> <jsp:declaration>declaration</jsp:declaration> <%= expression %> <jsp:expression>expression</jsp:expression> <% scriptlet %> <jsp:scriptlet>scriptlet</jsp:scriptlet> Tag libraries are declared as XML namespaces in a JSP Document. For instance, a JSP Document with XHTML template text and JSP actions from the standard and the JSTL core libraries should have an <html> root element with these namespace declarations: <html> <html xmlns="" xmlns:jsp="" xmlns:c="" xml: Related Reading JavaServer Pages By Hans Bergsten The xmlns attribute sets the default namespace to the XHTML namespace, the xmlns:jsp attribute associates the jsp prefix with elements defined as JSP standard actions, and the xmlns:c attribute associates the c prefix with the elements defined by the JSTL core library. xmlns xmlns:jsp jsp xmlns:c c JSP Documents have been part of the JSP specification from day one, but initially as an optional feature and later with many limitations. JSP 2.0 lifts most of these limitations, making it much easier to work with the combination of XML and JSP. Prior to JSP 2.0, a JSP Document had to have a <jsp:root> root element, to tell the container what type of JSP page it was. JSP 2.0 removes this limitation by defining new ways to identify a file as a JSP Document. A file is processed as a JSP Document by a JSP 2.0 container if one of these conditions is true: <jsp:root> The request path matches the URL pattern for a web.xml JSP property group declaration with an <is-xml> element set to true. See part two of this series for more on JSP property group declarations. <is-xml> true The request path extension is .jspx, unless this extension matches the URL pattern for a JSP property group declaration with an <is-xml> element set to false. In other words, .jspx is the default extension for JSP Documents, but it can be explicitly disabled by a property group declaration. false The request path extension is either .jsp or matches a URL pattern for a JSP property group declaration and the root element in the file is <jsp:root>. These new rules make it possible to write a JSP Document as a regular XHTML file (with JSP elements for the dynamic content, of course), for instance, without having to place all content within a <jsp:root> element. You can even use .html as the extension for such files if you create a JSP property group declaration like this: ... <jsp-config> <jsp-property-group> <url-pattern>*.html</url-pattern> <is-xml>true</is-xml> </jsp-property-group> </jsp-config> ... If you've tried to write JSP Documents with JSP 1.2, you've most likely run into problems dynamically assigning values to XML element attributes. For instance, say you want to set the class attribute of an XML element to the value of a bean property holding the user's style preferences. Your first attempt may look something like this: class <table class="%= user.getTableClass() %"> This type of Java expression can be used as the attribute value of a JSP action element in a JSP Document, but JSP doesn't recognize this syntax in template text, so it doesn't work as used here. Using a JSP action element to set the attribute value is also a no-go: <table class="<c:out"> This doesn't work because a well-formed XML document mustn't have a less-than (<) character in an element attribute value. < The only way to set a markup element attribute value dynamically with JSP 1.2 and still fulfill the well-formed-ness requirement is with nasty-looking CDATA sections, treating the beginning and the end of the markup element as raw text (wrapped around the dynamically generated value) rather than as markup: <jsp:text><!CDATA[<table class="]]></jsp:text> <c:out <jsp:text><!CDATA[">]]></jsp:text> JSP 2.0 gives you two simple alternatives for this scenario: use an EL expression in the template text, or use a set of new standard actions to generate the element. With an EL expression, the example can be written like this: <table class="${user.tableClass}"> A JSP 2.0 container evaluates EL expressions it encounters in template text as well as in action attributes, so in most cases, this solution fits the bill. If you can't express the value you want to assign as an EL expression, you can instead build the whole XML element dynamically with three new standard actions and generate the attribute value with any type of JSP code: <jsp:element <jsp:attribute <c:out </jsp:attribute> <jsp:body> ... </jsp:body> </jsp:element> The <jsp:element> action creates an XML element with the attributes created by nested <jsp:attribute> actions. The attribute value is set to the evaluation result of the <jsp:attribute> body, so you can use custom actions to generate the value, such as the <c:out> action used in this example. Similarly, the element body is set to the evaluation result of a nested <jsp:body> element. <jsp:element> <jsp:attribute> <c:out> <jsp:body> An XML document should have an XML declaration at the very top of the document, possibly followed by a DOCTYPE declaration. You control the generation of these two declarations in JSP 2.0 with the new <jsp:output> standard action. DOCTYPE <jsp:output> Unless the JSP Document has a <jsp:root> element as its root element (or represents a tag file, which I'll cover in the next article in this series), the JSP container generates an XML declaration like this by default: <? xml version="1.0" encoding="encodingValue" ?> The value of the encoding attribute is the character encoding specified by the contentType attribute of the JSP page directive, or UTF-8 if you don't specify a character encoding. If you don't want an XML declaration to be generated (maybe because the JSP Document is included in another JSP page), you need to tell the JSP container by including a <jsp:output> action element like this in the JSP Document: encoding contentType page UTF-8 <jsp:output Use the attribute values true or yes to disable the declaration generation, and false or no to enable it. yes no A DOCTYPE declaration tells an XML parser (such as the one used by a browser) with which Document Type Declaration (DTD) the document is supposed to comply. The parser may use this information to validate that the document contains only the XML elements declared by the DTD. You can't put the DOCTYPE declaration for the generated document in the JSP Document, because then you're saying that the JSP Document itself complies with the DTD. Instead, use the <jsp:output> action to tell the JSP container to add a declaration to the generated response: <jsp:output <jsp:directive.page As used in this example, the <jsp:output> action adds a DOCTYPE declaration for XHTML to the response: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> I also included a <jsp:directive.page> declaration with a contentType attribute to set to the MIME type for the response to text/html in this example, to tell the browser how to treat the response content. Note that the proper MIME type for XHTML is actually application/xhtml+xml, but some modern browsers (notably Internet Explorer 6) don't recognize it; text/html is an accepted MIME type for XHTML 1.0 that most browsers know how to deal with. <jsp:directive.page> text/html application/xhtml+xml JSP 2.0 makes it a lot easier to write JSP pages as XML documents, as you've seen in this installment. The final article in this series will cover the new features related to custom tag libraries: the new tag file format and the new simple tag API..
http://www.onjava.com/pub/a/onjava/2004/04/21/JSP2part3.html
CC-MAIN-2014-15
refinedweb
1,382
51.48
In my addin I visualize the attribute values of a feature layer in a TableControl. Until recently the whole setup worked fine, and for whatever reason does the addin not work any more correct. I can show the attribute values of the feature layer, but when I try to retrieve the selected row in the TableControl then nothing comes back. To utilize the TableControl I define the namespace in the xaml file: xmlns:editing="clr-namespace:ArcGIS.Desktop.Editing;assembly=ArcGIS.Desktop.Editing" and add the control: <editing:TableControl Grid. <i:Interaction.Triggers> <i:EventTrigger <i:InvokeCommandAction </i:EventTrigger> <i:EventTrigger <i:InvokeCommandAction </i:EventTrigger> </i:Interaction.Triggers> </editing:TableControl> I am getting an XDG0008 error from the component, saying the name "TableControl" does not exist in the namespace "clr-namespace:ArcGIS.Desktop.Editing;assembly=ArcGIS.Desktop.Editing". Even though it is claimed the component does not exist within the namespace, the component is shown in the addin, populated with data, and then when I change the selected row nothing happens. Every time the selected row is changed I ask the TableControl to return the selected row indexes: public async Task SelectedRowForStepper() { IReadOnlyList<long> curSelectedRowList = await QueuedTask.Run(() => _tableControl.GetSeletedRowIndexes()); if (curSelectedRowList.Any()) { ... here, _tableControl is of type TableControl and its content was created through TableControlContentFactory.Create(workLayer), where worklayer is of type MapMember. The resulting content object contains the correct value in its MapMember property. However, when running _tableControl.GetSeletedObjectIds() as List<long>) the list has a count of zero. When I look at _tableControl at the point when one of the rows is selected, then _tableControl has the following property values: How can it be, that I do get these values, even though the table is populated with correct data, the control itself counts 234 items/rows in it, and when I select one or more rows the count of selected rows is shown correctly in the bottom of the control as e.g. "1 of 234"? Hi Thomas, Can you tell me what version of ArcGIS and the Pro SDK you are using? Also what type of data you are displaying in the table. Is it file gdb, SDE data, feature service data? The xaml error that you are getting "TableControl" does not exist in the namespace "clr-namespace:ArcGIS.Desktop.Editing;assembly=ArcGIS.Desktop.Editing". is something that you can ignore. As you are aware you can compile and run your project despite this error message. The messages are coming from the 'XAML Designer' and if you close all XAML windows and rebuild you shouldn't get any messages. In 2.8 all Pro assemblies were switched from mixed (x86 and x64) to 64 bit only. Unfortunately the XAML designer loader cannot load x64 bit assemblies. Hopefully the upcoming release of Visual Studio 2022 will fix this issue. With regards to the other problems, I am not seeing any issues with the GetSelectedRowIndexes or GetSeletedObjectIds methods in my test (I am using file gdb data), Knowing which release and what type of data you are using will allow me to help track it down further. Also when checking the properties of the tableControl is your breakpoint on the UI thread or the background thread? Thanks Narelle Hi Narelle, thanks for your fast reply. We are running ArcGIS Pro version: 2.8.1 and the addin is currently build with ArcGIS Pro SDK version: 2.8.0.29751. With respect to the data source, I display QueryLayer, data request goes to an Oracle database. The breakpoint is directly on line 3 in the SelectedRowForStepper method. Hence, I believe I am looking at the TableControl object that should hold the information visualized in the UI. Thank you so much for looking deeper into things. Bests Thomas Hi Thomas, Is there anything else you can tell me about the queryLayer that you have. Are you displaying the entire data from the Oracle feature class or does the query layer have a where clause so that only a subset of data is displayed. Do you happen to know what version of Oracle is being used? I have connected to an Oracle database and loaded a querylayer with a point feature class. I have tested on a 2.8.1 build of ArcGIS Pro and am not seeing any problems with the TableControl or it's properties. In your original post you mention "Until recently the whole setup worked fine, and for whatever reason does the addin not work any more correct." Did you perform any software upgrades in the timeframe that you noticed it stopped working. Or made additional changes to the add-in? Finally, would you be able to share your entire visual studio project with me. That might help me find the problem. Narelle Hi Narelle, about the QueryLayer, I do not display the entire data of the oracle table, but have a where clause in. SELECT * FROM SCHEMA1.TABLE1 WHERE FEATURETYPE = 'turbine' AND EXISTS (SELECT 1 FROM SCHEMA2.TABLE2 v WHERE v.LOCALID = TO_CHAR(OTLOCALID) AND TRUNC(v.REGISTRATIONFROM, 'MI') = TRUNC(TO_TIMESTAMP_TZ(TO_CHAR(FROM_TZ(CAST(OTREGISTRETIONFROM AS TIMESTAMP), 'Europe/Copenhagen'),'YYYY-MM-DD HH24:MI:SS,ff6 TZH:TZM'),'YYYY-MM-DD HH24:MI:SS,ff6 TZH:TZM'), 'MI')) ORDER BY OBJECTID I set the unique identifier through queryDescription.SetObjectIDFields("OBJECTID"); Our current version of Oracle is Oracle Database 18c EE Extreme Perf Release 18.0.0.0.0 - Production Version 18.10.0.0.0 and yes, we had changes in our infrastructure. I only touched the add-in because the location of the Oracle server changed in connection with an upgrade from version 12.1.0.1.0, and since then the add-in goes haywire. Thomas
https://community.esri.com/t5/arcgis-pro-sdk-questions/tablecontrol-does-not-return-selected-row/td-p/1100214
CC-MAIN-2022-40
refinedweb
952
56.55
Best Raspberry Pi add-ons: the top extras for your Pi 14th Dec 2013 | 08:04 Essential add-ons for your Raspberry Pi The Raspberry Pi is probably the most successful British computing product in a decade, but it's also one of the most misunderstood. Too many people think of the Pi as just a cheap desktop, but by the time you've bought a monitor, keyboard, mouse and SD card, you'll have spent almost as much as you would buying a cheap laptop - and it's a whole lot less powerful. The real innovation of the Pi, then, isn't its cost, but its form factor. It's small, it can be run off a few batteries (or solar cells), and has GPIOs (General Purpose Input and Output pins) exposed. This trio of features is almost unprecedented in computing, and before the Pi, it had never been done at this price point. Because it's not just a new device, but a new type of device, a lot of people struggle to understand how to use it, though. This was a particular problem when the Raspberry Pi first came out: there were few tutorials explaining how to use it, and if you wanted to use the GPIO you had to build any add-ons for yourself. Fortunately, times have changed quite rapidly, and a whole ecosystem of components for the Pi has sprung up. Every day, it seems, we hear news of some new device that connects to the Pi to enable some function or add some feature.# Many of these haven't been developed by large companies, but by hobbyists who saw a need and filled it. It's been great to see just how quickly new and innovative devices have come on to the market. Here, we're going to look at three of our favourite add-ons, so if you've got a Raspberry Pi acting as a paperweight, blow off the dust and put it to a more productive use; if your Pi project is already complete, well, you know it needs just one more feature don't you? Pi Lite A multi-purpose LED display When the folks at Ciseco mulled over the problem of underused Raspberry Pis, they came up with a simple solution: stick a shedload of LEDs on a board and create a simple way for people to turn them on and off. Why? That's up to the user's imagination, but it made the Pi stand out from a regular PC and forced people to think about its particular niche. From this simple idea, the Pi Lite was born. It contains 126 red LEDs (and there's a white LED version coming soon) on a board that plugs into the GPIO pins on the Pi, and then sits neatly over the top of the main board so the unit doesn't take up any more space than a naked Pi. It's not quite a plug and play add-on, and there's a little configuration needed to enable the board to use the serial port, but it's not too complex as long as you're using the Raspbian OS, and it's well documented on Ciseco's openmicros.org website. It should take no more than 10 minutes, though you will need a network connection to install some software which could make it a little tricky on a model A. With all that done, you can control your Pi Lite over the serial port. From the command line type: minicom -b9600 -o -D /dev/ttyAMA0 and you will have an interface to the LEDs. Anything you type will scroll across in glorious red light. This is pretty cool by itself, but it's only the beginnings of what the Pi Lite can do. As well as text, you can also send commands to the unit. These are anything preceded by three dollar signs. For example: $ALL,ON will turn all the pixels on the Pi Lite. There are additional modes to display vertical and horizontal graphs, and to manipulate individual pixels. This final mode takes a string of 126 1s and 0s, each of which represents a pixel, such as: $F000000000000000000000111000011111110011111110111111111111101111 111101111011000110011000110000000000000000000000000000000000000 This final mode allows you to draw any images that you like, though it's probably most useful when used with scripts rather than through typing. Since all communication to the Pi Lite goes through the serial port, you can access it using any language that supports serial communication. Python (using the serial module) is probably the easiest to try, and there are plenty of examples to get you started, again on Ciseco's Open Micros website. Since the Pi Lite is driven from a serial port, it's actually quite portable, and can be run from any device with such capabilities, whether that's Raspberry Pi, Linux PC or almost any other computer (such as via a USB FTDI connection). Getting this set up will require a little soldering, but shouldn't be too complex. Pi Lite emulator If you're interested in seeing how the Pi Lite works, but aren't quite ready to part with any money, Ciseco has made an emulator so you can try out the hardware before you purchase it. This is available from (yes, you guessed it) Ciseco's Open Micros website. The Pi Lite has one more trick up its sleeve: it uses an ATMEGA chip to drive the LEDs. This just happens to be the same family of chips that are used in the popular Arduino boards, and the Pi Lite comes with the Arduino bootloader installed. In other words, you can program the microcontroller on the board to do whatever you want. In fact, you could even run the Pi Lite board as a standalone unit without any other computer attached. The board also exposes the five analogue inputs from the ATMEGA, which means that with a bit of programming, you can make those accessible for your project. This is a bit more complex than the standard use of the Pi Lite, but it is an excellent example of how a single piece of hardware can help you develop a wide range of technical skills. Camera module Take pictures on your Pi. Before we get on to what the camera module is, let's first clear up what it isn't. If you're looking for a cheap webcam to Skype with your family, the Raspberry Pi camera module isn't for you. Not least because Skype doesn't run on the device. The Raspberry Pi camera is easy to use, but not in a plug-in-and-use-graphical-tools kind of way. Instead, it's designed to be scriptable. Now, with that cleared up, let's get started. The camera module comes as a ribbon interface that slots into the vertical connector between the Ethernet port (or the Ethernet port-shaped gap on the model A) and the HDMI connector. Lift the top of the connector up, slot the ribbon in with the silver side facing the HDMI port, then push the top back down. With this done, you'll need to make sure you've got the latest version of Raspbian with: sudo apt-get update sudo apt-get upgrade sudo rpi-update Then run raspi-config and make sure that the camera is enabled. Finally, restart and you should be ready to go. All the magic is done with two commands: raspistill and raspivid. It should be pretty obvious which one takes still images and which takes videos. You can, of course, just run it like this, and type the command each time you want to use the camera. To capture a still image, it just takes: raspistill -o image.jpg However, that's not where the fun lies. Because you can run these from the command line, you have full power to include it in your programs. For example, if you never know what white balance to use, why not use all of them with the following Python script: from subprocess import call for awb in ['off','auto','sun','cloud','shade','tungsten','fluoresce nt','incandescent','flash','horizon']: call(["raspistill -n -awb " + awb + " -o image" + awb + ". jpg"], shell=True) As you can see from the above, you can control the camera from Python, even though there aren't any Python bindings, by using a system call. You can do this in most languages, so hack away in your language of choice. Since all the options are available as command-line switches, you should easily be able to build the options you want into the system call. Next, how about trying to make a stop-motion cartoon of your life? raspistill -ifx cartoon -ISO 800 -t1 100 -t 10000 -w 300 -h 300 -o test_%04d.jpg Here we're using quite a few of the command line options. -ifx is image effects, and it allows you to do all sorts of cool stuff. In this case, we're using it to render the pictures in a cartoon style. -ISO sets the ISO sensitivity. We've used a high one since image quality doesn't matter too much in this case. -t1 sets the timeout between the photos in milliseconds, while -t sets the total time for the timelapse capture, again in milliseconds. -w and -h are width and height in pixels. Finally, -o is the image filename (the _%04d is a number that's incremented with every picture). These, of course, are just a few examples to get you started. For a full list of options for the camera, just run raspistill from the command line (with no options). Some of them won't make any sense unless you've got some photography experience, but with a bit of fiddling, you should get the hang of things. As we mentioned before, raspivid can be used in a similar way to capture videos. To capture a simple video, run: raspivid -t 5000 -o video.h264 and the more advanced options are similar to raspistill. Once you've mastered the basics, you can try out some more advanced projects. How about facial recognition using OpenCV? There's a OpenCV tutorial from the Think RPI blog to get you started. Quick2Wire GPIO expansions Extra ports to improve your Pi's connectivity. One of the best features of the Raspberry Pi is the exposed GPIO pins. These allow you to add whatever circuitry that you want to your Raspberry Pi. They're both easy to use and easy to understand; with a single line of Python or Bash, you can turn them on or off, or read their input. While they're good, though, they're not perfect. You can see a lot of pins, but not all of them are available to use, and there's no analogue input or output. They also have very little protection, and if you apply the wrong voltage (or even if you accidentally link the wrong two pins together) you can fry your Raspberry Pi a little too easily. Several companies have developed products to help simplify the process and each works better in different situations. For example, Ciseco's Slice of Pi gives access to 16 GPIOs that are protected to work at both 3V and 5V; the Pi Face adds some useful features, such as buttons, LEDs and relays (to control motors etc); and the Gertboard adds a wide range of input and output features that are useful for learning about the various applications. You can even hook up an Arduino and use this to control input and output, although this will require programming in C. However, we're going to look at Quick2Wire's I2C and SPI board. GPIO expanders SPI and I2C are both Serial Peripheral Interface and Inter-Intergrated Circuits (often written I2C and pronounced 'eyesquared-C'). In its basic usage, the Quick2Wire setup comes in two parts: the main board breaks out the I2C and SPI ports as well as adding protection for the Pi and voltage selectors. The company also makes I2C boards that add GPIO ports and analogue input and output to your device. These boards can also be daisy-chained to add even more ports to the Pi. You will have to do plenty of work yourself and solder the boards on your own but, once done, connecting them is simply a case of plugging in the ribbon cables. There's quite a bit of setup to get a Quick2Wire board up and running. Not least the need to configure kernel modules. You will find an online guide on the Quick2Wire website, though it did miss a few things, such as the location of the Python library. We also found that we needed to install the python3-setuptools package. Once up and running, though, it was straightforward to program the board with Python. Right now, it's not a question of which Raspberry Pi GPIO board is best, but which is most suitable for your project: Ciseco's Slice of Pi is a great, low-cost device to make Pi GPIO use a bit safer; the Pi Face is a really good board for experimenting with physical computing, especially if you don't have a specific use in mind; while the Quick2Wire boards are incredibly useful if you need the power of SPI or I2C to daisy-chain peripherals and port expanders. - Now why not read Raspberry Pi operating systems: 5 reviewed and rated
http://m.techradar.com/news/computing/pc/best-raspberry-pi-add-ons-the-top-extras-for-your-pi-1205045
CC-MAIN-2014-10
refinedweb
2,258
66.98
Something. If you haven’t figured it out already, Apple’s Documentation is trash. If you find a useful piece of documentation from Apple’s website that genuinely helps you on your mission to implement push notifications, I think you should double-check that you’re not dreaming and send me a link because I’d love to see it. Anyways, it took me a surprisingly long time to figure out what is really a pretty simple process if you know what to do. The basic concept is: - Generate a certificate for your server to authenticate with Apple’s Push Notification Service. - Convert that certificate into something that can actually be used by your web service. - Request device Push Notification tokens - Send a notification How do you generate an Apple Push Notification Service Certificate? The first step in sending push notifications is getting a certificate. You’ll want to generate two certificates: one to use when you’re developing and one to use in prod. Apple is pretty clear about which kind you’ll be generating during the process. Follow the following steps to generate your certificates. Step 1. Log into your apple developer account Go to Go to developer.apple.com and login. Step 2: Edit Certificates for your App Once you’re logged in to your Apple Developer account, click on Certificates, Identifiers & Profiles. Step 3: Edit Your App Identifier Click on the Identifiers item in the lefthand menu once you’re on the Certificates, Identifiers & Profiles page. Find the app you’re working on in the table that shows up and click on it to edit it. Step 4: Enable and Edit Push Notifications If you haven’t enabled push notifications already, do that now. Just check the box on the left side of the Push Notifications row, then click save in the top right corner. Once your Push Notifications have been enabled, click the Edit or Configure button which will be in the same row to the right. Click “Create Certificate” and you’ll be ready to upload your certificate signing request and download your certificate. Keep the page that it redirects you to open and we’ll revisit this in step 6. Step 5: Generate a Certificate Signing Request Before you get your certificate, you’ll need to generate and upload a certificate signing request. To do this, follow these steps: - Open up the Keychain Access app on your Mac. - In the menu, go to Keychain Access > Certificate Assistant > Request a Certificate from a Certificate Authority - Fill out the form with your email address, a useful description of the certificate your requesting, and select the option to save to disk. Step 6: Upload your Certificate Signing Request (CSR) and Download your Certificate Go back to the page from the end of Step 4. It should look like this: Upload the CSR you just created, then, download your new push notification certificate. This should be called something like aps.cer or aps_development.cer. Step 7: Convert your Certificate from a .cer to a .pem Most frameworks you’ll use to actually send notifications, such as PyAPNS2, will actually require you to provide them with a .pem file for your certificate. We can transform our newly generated .cer file into a .p12 file and then into a .pem file using a relatively simple process. - First, double-click your certificate file that you downloaded. This will add it to the Keychain Access app. - Open Keychain Access and go to the “My Certificates” category - Find your certificate in the list. - Right click it and select the Export option - Save the file as a .p12. - Run the following command to convert the .p12into a .pem: openssl pkcs12 -in /path/to/cert.p12 -out /path/to/cert.pem -nodes -clcerts Send a Push Notification Now that you have a certificate in the correct format, it’s pretty easy to send a push notification. As an example, I’ll show you how to do this using PyAPNs2. PyAPNs2 is a neat little library based off the original PyAPNs2 but updated to meet the latest and greatest specs for sending push notifications. Sending a notification should be as easy as this: from apns2.client import APNsClient from apns2.payload import Payload token_hex = 'b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b87' payload = Payload(alert="Hello World!", sound="default", badge=1) topic = 'com.example.App' client = APNsClient('key.pem', use_sandbox=False, use_alternative_port=False) client.send_notification(token_hex, payload, topic) Which combination of Sandbox and Prod/Dev Certificates should you use? One thing I found confusing was knowing when to use the sandbox mode and when to use my dev/prod certs. When switching my app builds between Debug/Release/Testflight versions, I got some varying results using different combinations of sandbox/certificates. This is what I was able to find out: - To send notifications to Xcode builds loaded directly on your device, you should be using sandbox mode with the development certificate. Although sandbox mode with the production certificate also works, I wouldn’t recommend it. - For sending push notifications to Testflight builds, you should not use sandbox mode and should be using your production certificate. If you’re curious, these were my actual findings when testing different combinations: This was all a bit of a mess to work through, but at the end of the day I was able to get push notifications working in my app again after migrating onto the Expo bare workflow and deciding to upgrade from the deprecated token-only method of sending push notifications that was so easy to do with Expo’s exponent_server_sdk. Now I have notifications beautifully integrated into my Django API, using Celery to handle the heavy lifting of sending/retrying notifications. If you have any questions or more takeaways from your experience, comment with them below.
https://michaelwashburnjr.com/blog/apple-push-notification-service-certificates
CC-MAIN-2021-31
refinedweb
963
56.15
Openstack instance ip_address How can i get the fixed ip_address of an openstcak instance using python sdk ? find_server as documented in the openstacksdk API reference. EDIT: find_server() returns a server object that only has the links populated. After this, you use get_server() to get all the details. import openstack; conn = openstack.connect( CONNECTION DETAILS ) server = conn.compute.find_server(YOUR SERVER NAME) server = conn.compute.get_server(server) print(str(server)) for addrinfo in server.addresses['private']: print("{} ipv{} address {}\n".format(addrinfo['OS-EXT-IPS:type'],addrinfo['version'],addrinfo['addr'])) I get the following result, which includes both static and floating IPs: openstack.compute.v2.server.Server(OS-EXT-STS:task_state=None, addresses={u'private': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:47:41:56', u'version': 4, u'addr': u'10.0.0.25', u'OS-EXT-IPS:type': u'fixed'}, {u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:47:41:56', u'version': 4, u'addr': u'172.24.4.2', u'OS-EXT-IPS:type': u'floating'}]}, .... fixed ipv4 address 10.0.0.25 floating ipv4 address 172.24.4.2 As far as I understand it, it's a field in the Server object. Instructions how to set that field when launching an instance. Reading the field should not be harder. "opensttack server list" display like the table that contain all server details.I need the ip_address associated to networks for each instance. How can i get it ? +------+-------+--------+----------+--------+ ID |Name |Status| Networks| Image +------+-------+--------+-----------+----- I have never tried it, but why don’t you just print the server object returned by find_server() to find out? Asked: 2019-01-12 06:33:49 -0600 Seen: 143 times Last updated: Jan 15 '19 VM filesystem not resizing Help creating instance in Openstack Dashboard Fail to launch the 10th instance on one compute node Grizzly - Instances not get DHCP IP how to troubleshoot instance generating status error Could not retrieve public key from instance metadata, retrying in 5 seconds... OpenStack is a trademark of OpenStack Foundation. This site is powered by Askbot. (GPLv3 or later; source). Content on this site is licensed under a CC-BY 3.0 license.
https://ask.openstack.org/en/question/118839/openstack-instance-ip_address/?sort=latest
CC-MAIN-2021-04
refinedweb
363
52.76
27 April 2012 05:28 [Source: ICIS news] By Ong Sheau Ling ?xml:namespace> SINGAPORE Operating rates at GCC PP plants averaged about 80% this month, they said. Persistent tight supply have kept PP prices in the Middle East well supported at high levels in April, in spite of weak downstream demand, market sources said. From the start of the year, PP raffia prices have increased by 18-19% to $1,540-1,570/tonne CFR (cost and freight) GCC, and $1,530-1,560/tonne (€1,163-1,186/tonne) CFR (cost and freight) East Med ( “The recent outages and forthcoming turnaround will keep PP supply from the Gulf tight. Although we do admit that prices are relatively high now, converters do not really have much bargaining power because of the short availability,” a Saudi PP maker said. GCC-based PP makers are targeting a roll-over in prices for May delivery, because of the unchanged supply-demand fundamentals from this month. Saudi producers plan to offer May PP raffia grade at $1,570-1,600/tonne DEL (delivered) GCC/East Med - the same prices they quoted in April. In the Saudi domestic market, most May shipments for the material were concluded $1,550-1,570/tonne Petrochemical major SABIC has been running some of its PP facilities in “Inventories are still very low whether it is in A power outage at Al Jubail in January had also reduced PP production. Early this month, another PP producer Saudi National Industrialisation Co (TASNEE) also had an outage at its 720,000 tonne/year PP unit because of technical issues. Meanwhile, Advanced Petrochemical Company (APC) will shut its two PP lines in Al-Jubail in May for a 30-day turnaround. Each of its PP line has a nameplate capacity of 225,000 tonnes/year. In the UAE, Borouge had a brief outage at one of its two PP lines in Elsewhere in the GCC, Oman Oil Refineries and Petroleum Industries Company (Orpic) is currently running its Sohar-based 340,000 tonne/year PP unit at 70% of capacity on a lack of propylene supply from its upstream refinery at the site, a source close to the company said. SABIC markets PP from the following plants in.
http://www.icis.com/Articles/2012/04/27/9554232/gulf-may-pp-supply-stays-tight-on-low-production-in-april.html
CC-MAIN-2013-20
refinedweb
376
53.34
Tales from the Script - November 2002 Running WMI Scripts Against Multiple Computers Before we delve into this month's topic, the Scripting Guys want to say "Thank you!" The scripter alias has been flooded with great feedback, and its really nice to hear from so many of you. And even though we can't individually answer each e-mail that comes our way (although we try to answer as many as possible), we still need your suggestions to keep us on track. So far, you haven't let us down. Thanks again. In this column, we're going to look at how you can modify the typical WMI scripts found in the Script Center, modifications that will enable these scripts to run against multiple computers. We're always pointing out that one of the primary advantages of these WMI scripts — which are usually designed to run only against the local computer — is that they can be easily modified to run against a given set of computers. That's great, but as we all know, easily is a relative term; after all, nothing in scripting is easy until someone teaches you how to do it. And because we haven't gotten around to showing you how to do this, it's not surprisingly that we've received a number of questions like the following: "If I have a script to find out all the running services on a remote computer, can I use the same script to prompt me to enter the remote computer name at the command prompt?" "Is there any way that these scripts can run against more than one computer, maybe by getting computer names out of a file or something?" "Can I get this script to run against all my domain controllers?" The answer to all these questions is: you bet. So, with that in mind, let's begin this month's column. On This Page A Brief Review Entering Computer Names as Command-Line Parameters Entering Multiple Computer Names as Command-Line Arguments Retrieving Computer Names from a Text File Retrieving Computer Names from an Active Directory Container A Brief Review The best way to learn how to script is to actually do it. So let's start with a script from the script center that uses WMI to display the status of all the services installed on the local computer. Before we start modifying the script to get it to run against more than just the local computer, let's take a look at how the script appears in the Script If you’re not familiar with WMI scripts this code might look a bit cryptic. If thats the case, you have a few options open to you. If you want to, you can rush off to MSDN and read the WMI Primer series that has been featured in the Scripting Clinic column; after you've been properly immersed in the world of WMI scripting, you can pick up where you left off with this column (don't worry, we'll wait for you). Or, you can stick with us for the time being, and we'll give you a really quick overview of what's going on here. You can then delve into the WMI Primer later on to learn the details. Either way, here's how the preceding script works. This part of the script connects to WMI. To be more exact, it connects to something called a namespace: the root\cimv2 namespace. Again, if you are in need of details, go check out MSDN. In a nutshell, this section of the script just gets you connected to the WMI service on the computer whose name is specified in the strComputer variable (we'll talk about that in a second). Note: So what is a namespace? In WMI, you work with things called classes. Classes are virtual representations of real, live things; for example, there is a Win32_Service class that represents all the services on a computer. A namespace is simply the location where a set of classes are stored; in this case, the Win32_Service class is stored in the root\cimv2 namespace. OK, but what about the computer name? A minute ago we said the computer name was stored in the variable strComputer. But this variable has been set to a dot (.). Who the heck has a computer named dot? Well, probably nobody. In WMI, however, setting the computer name to a dot is just a way of saying the local computer without committing yourself to any particular local computer. Suppose you have a single computer named TestComputer. In that case, either of these two lines of code will cause your script to run against TestComputer: In fact, if the IP address of TestComputer is 192.168.1.1, this line of code will also work: As implied above, we could have hard-coded the computer name into the script. For example, this line of code connects to the WMI service on a computer named HRServer01: So why didn't we just hard-code in the name? Well, by using a variable, we're making it easy to run the script against any computer; all we have to do is figure out a way to change the value of the variable strComputer. And that is what this column is all about. The next part of the script asks the WMI service for some information. It requests a list of all the services installed on the computer, and then stores that information in the colRunningServices variable. (If you want to impress your friends, tell them that colRunningServices is actually a special type of variable known as an object reference. Let's just hope that they don't ask you what an object reference is, because we don't have time to get into that.) The final part of the script simply displays the name and state of each service that was stored in colRunningServices:. Go ahead and type the complete script into Notepad and save it as ListServices.vbs. It doesn't really matter where you save the script, but just so were all on the same page, you might want to save the script in the C:\Scripts directory. If you don't already have a C:\Scripts directory, create one before you attempt to save the script. A Scripting Guys Freebie: OK, here's a way to really impress your friends; this script will create the folder C:\Scripts for you: After you've saved the script, open up a command prompt window, navigate to the C:\Scripts directory, and then run the script by typing the following and pressing ENTER: You should see output similar to the following, showing all the services on your local machine along with the current state of the service (such as Stopped or Running). Now, there's nothing wrong with this, but what if you want to get that same service information from a remote machine? Well, you might remember that on the first line of the script the strComputer variable is set to a dot (.), indicating the local computer. Later in the script, strComputer is used to connect to the WMI service on that computer. If you change the dot to the name of another computer, the script will connect to the WMI service on that computer. In turn, the information that retrieved by the script will be information about the remote computer, not the computer on which the script is being run. For example, let's say you want to modify the script to connect to a remote computer named HRServer01. To do that, simply change the value of the strComputer variable from . to "HRServer01". If you're playing along at home, choose a remote computer on your network and replace the dot with its name. Note that, by default, you must be in the local administrators group of the remote computer in order for this script to work. That's because only members of the local administrators group can use WMI to retrieve information from remote computers. For example, here's the new script that retrieves service information from HRServer01: strComputer = "HRServer01" Run the modified script in the same way as you did before, by typing the following and pressing ENTER: You should see results similar to those produced by the last script (your actual results will vary, depending on the services installed on the remote computer). The key difference between this script and the first script we wrote: this time the services listed are those running on the remote computer HRServer01. For example, you might have noticed that the Application Management Service that was running on the local computer (previous screenshot) is stopped on HRServer01. Note: What if you chose a computer that doesn't exist, or that is currently offline? Well, in that case, the script will blow up. As a temporary workaround, make the first line in the script On Error Resume Next. Sometime soon will discuss better ways to work around this issue Entering Computer Names as Command-Line Parameters Changing the strComputer variable certainly works; it enables you to run the script against a remote computer. However, it would be nice if you didn't have to modify the script each time you had it to run against a different computer. In fact, it would be nice if you could just specify the computer's name as a command-line parameter. That might sound hard, but it's actually remarkably easy. When you start a script from the command prompt (for example, by typing cscript ListServices.vbs) any characters you type after the script name are interpreted as command-line parameters. In fact, not only are they interpreted as command-line parameters, but they are automatically retrieved and stored so you can access them from within your script. For example, suppose you type this to start a script: In this case, both HRServer01 and WebServer01will be recognized as command-line parameters. In turn, they will be stored in a special collection (the WSHArguments collection) so that you can access them from within the ListServices.vbs script. Arguments are stored in the collection along with an index number indicating the order in which they were entered. Because index numbers start with 0, the collection looks like this: Within the script, you refer to the first parameter (HRServer01) using WScript.Arguments(0) and the second parameter (WebServer01) by using WScript.Arguments(1). If there were a third parameter, you'd refer to it by using WScript.Arguments(2) and so on. When you start a script using command-line arguments, the individual arguments must be separated by at least one space. Note: Uh-oh; what if you have to enter an argument that includes a space, like Default Domain Policy? In that case, you must enclose the argument in quotation marks, like so: "Default Domain Policy". If you don't, the string Default Domain Policy will be interpreted as three separate arguments: Default, Domain, and Policy. So let's try the simplest possible case: let's modify our script so that it runs against whatever computer we specify as a command-line argument. To do that, we simply need to set the strComputer variable equal to WScript.Arguments(0), the first command line parameter. Thus: strComputer = WScript.Arguments(0) WScript.Echo "Running Against Remote Computer Note: We also added a line that echoes the name of the computer that the script is running against, just so you don't get confused about what the script is doing. Modify the script, and then run it. Be aware that if you try to run it as you did before, by just typing cscript ListServices.vbs, you will get this error message: Why? Because you didn't enter a command-line parameter following ListServices.vbs the reference to WScript.Arguments(0) doesn't make any sense; you told the script to set strComputer to the value of the first argument, but that argument is nowhere to be found. Consequently, VBScript gives you a hard time about it. (Incidentally, the first digit in (1,1) tells you the line number in the script where the error VBScript is complaining about is occurring.) Lets try to appease VBScript by typing something like the following (replace HRServer01 with the name of an actual computer on your network): (If you just have one computer, type in the name of that machine. Even better, type a dot, and see if the script runs against the local machine, the way we keep saying it will.) Once again you should once see a list of services and the current state of those services. The difference is that you can run this script against any computer simply by typing the appropriate name at the command-line. So, there you have it. Thank you, and good night. Entering Multiple Computer Names as Command-Line Arguments Oh, right; we still haven't touched on what we claimed this column was about: running scripts against multiple computers. Now, it's true that the preceding script could run against multiple computers, as long as you we're willing to run it multiple times. For example, typing the following commands will list the services and their status on 3 different computers: WebServer01, FileServer01 and SQLServer01. But you're right: that's far too much repetition and work for a scripter! Shouldn't we be able to just type something like this and be done with it: Well, of course we can. As we noted earlier, when command-line parameters are stored, they are stashed in a collection called the WSHArguments collection. A collection is just what it sounds like: a bunch things that go together. The WSHArguments collection is just a list of all the command-line arguments that were entered when a script was run. So how does that help us? Well we can retrieve each of these arguments (each computer name) from the WSHArguments collection by using a For Each loop. Heres a script that runs against each computer included as a command-line argument: For Each strArgument in WScript.Arguments strComputer = strArgument WScript.Echo "***** Computer ******" & vbCrLf & strComputer & vbCrLf And here's what the script does: It takes the command-line arguments — in this example, WebServer01, FileServer01, SQLServer01 — and put them into the WSHArguments collection. It takes the value of the first argument (WebServer01), and assigns that to the variable strComputer. It connects to the computer represented by strComputer, and retrieves and displays information about all the services. It loops back to the start of the For Each loop, takes the value of the second argument (FileServer01) and assigns that to the variable strComputer. It connects to the computer represented by strComputer, and retrieves and displays information about all the services. It loops back around, takes the value of the third argument (SQLServer01), etc. etc. The script continues repeating these steps over and over, until it has run through the entire collection of command-line arguments. Incidentally, there's nothing magical about the number three. Try this with 1, 2, 3, or even 100 arguments. This code will work as long as there is at least 1 argument. Note: OK, so what if someone does forget to supply an argument? If that's a concern, tack this code at the beginning of the script. It uses the WSHArgument's Count property to count the number of arguments in the collection. If the Count is 0, meaning no arguments were supplied, it echoes a message to that effect, and then terminates the script: Retrieving Computer Names from a Text File So, we started with a WMI script from the TechNet Script Center that retrieved service information about a single computer: the computer on which it was run. We're now at the point where we can enter any number of different computer names on the command-line when we run the script, and the result will be the display of service information from all of those computers. Note: That could be an awful lot of information. In a future article, we tell you how to save that information in a variety of formats; for now, you can use the > symbol to redirect output to a file instead of the command-prompt window. After that, you can open the file with Notepad and browse it at your convenience instead of seeing it scroll by at lightening speed. To run the script and have its output redirected to a file named report.txt, type the following at the command-prompt: Now, we know what you're thinking. You're thinking, Well, thanks, Scripting Guys, but, um, I have to run my script against 50 computers. Are you telling me I have to type all 50 computer names at the command prompt when I run the script? Of course not. Instead, you can have your script read a text file of computer names and then run against each of those computers. To do this, start by creating a simple text file of computer names, one computer name per line. For demonstration purposes, save the file as C:\Scripts\Computers.txt. The result is a display of information about the services on all of the computers whose names are listed in the file. So, we've got a script that does what we wanted to do; now we have to figure out how it works. The first two lines of the script are just constants that, as you will see, make the script easier to read and understand. INPUT_FILE_NAME holds the path to the file that contains our computer names. FOR_READING is used to indicate that, when we open the file, we want to read from it. We'll discuss file manipulation in a future column; for now you just need to know that, when working with text files, you can either read from them or write to them. Because you can't do both simultaneously, you have to specify the desired mode in the script. Another Scripting Guys' freebie: If this doesn't impress your friends, then you need to get new friends. Change the first line of the script to this: What does that gain you? Well, suppose you have a bunch of text files that contain computer names: one with your DHCP servers, one with your domain controllers, one with your email servers. Do you need to create separate scripts for each of these? Heck no. In Windows Explorer, drag the appropriate text file onto the icon for your script (ListServices.vbs). The script will use the name of the text file as an argument, and then automatically open and read that file. Try it, and see what we mean. The line: Set objFSO = CreateObject("Scripting.FileSystemObject")gets your script ready to work with files. VBScript doesn't know how to deal with files; instead, it needs assistance in the form of a COM object. In this case, we're using the FileSystemObject, a COM object that is installed along with VBScript and Windows Script Host and is great at reading from and writing to text files. The line: Set objFile = objFSO.OpenTextFile(INPUT_FILE_NAME, FOR_READING) is fairly easy to decipher. It opens a text file, the one whose path is stored in the INPUT_FILE_NAME constant, for reading. By the way, this is why we used constants. This line of code — which uses the hard-coded value 1 — also opens a text file for reading, but is much less intuitive: The line: strComputers = objFile.ReadAll is where the actual work of reading the file contents finally takes place. Those contents are read in their entirety (ReadAll) and stored in the strComputers variable. The next line of the script simply closes the file. At this point, we're almost on familiar ground. When we used command-line parameters, we ended up with the computer names in a collection (they seem to be pretty popular in system administration scripting). We could then use a For Each loop to work our way through all the items in the collection. This time we have all of the computer names in the strComputers variable; in fact, if you were to echo the value of strComputers, you'd get output that looked exactly like the text file. Unfortunately, though, this is not a collection, so we can't just toss in a For Each loop and expect to get reasonable results. The line: arrComputers = Split (strComputers, vbCrLf) takes the contents of strComputers (our list of computer names), extracts the individual computer names, and places them into something called an array. An array is similar to a collection (there are important differences, but for our purposes those differences don't really matter). What is important is the fact that arrays can be traversed using our old friend, For Each. You might be wondering how the Split function can distinguish between individual computer names stored in the strComputers variable. As you might recall, each computer name is on a separate line in the file. At the end of every line of a text file like that there's an invisible marker that VBScript knows as vbCrLf (carriage return, linefeed). Basically we're telling the Split function that individual items strComputers are separated by carriage return, linefeeds. Split starts reading through the data. It finds the letters HRServer01, and then encounters a carriage return, linefeed. This is the signal that the first item — HRServer01 — has been found. It puts HRServer01 into the array, and then continues reading. It finds the letters WebServer01, and then encounters another carriage return, linefeed. Most likely you can figure out the rest of the story. After we have our computer names in the arrComputers array, the remainder of the script is the same as it was previously. It simply uses a For Each loop to retrieve all of the computer names in the array (collection). The result is the display of corresponding information about services on each of those computers. Retrieving Computer Names from an Active Directory Container Stopping at this point wouldn't be a crime (so put down the phone and don't bother to dial 9-1-1). We do have a pretty good solution. However, think about how you might gather all of those computer names to put in your text file. Maybe, if you are in an Active Directory-based environment, you'd fire up Active Directory Users and Computers and check the directory. But you're a scripter! Why are you doing this manual labor? Shouldn't you instruct your script to do this work for you? Sure you should. And Microsoft has just the scripting library for you: Active Directory Service Interfaces (ADSI). ADSI enables your scripts to talk to directory services like Active Directory. Explaining how ADSI works is well beyond the scope of this article. That said, this story wouldn't be complete without an ADSI-enabled script that grabs computer names right out of the directory. Therefore, we're going to give you a sample script, provide a very cursory explanation, and then make a sales pitch for our just about on the presses book from Microsoft Press, the Microsoft Windows 2000 Scripting Guide, where ADSI is explained in detail (Don't worry, we're not high pressure sales people; we're going to provide the book for free online in case you're saving your pennies to buy an X-Box.) So, here you go. This script uses ADSI to attach to the Computers container in the fabrikam.com domain and grab a collection of the names of all the computers located there. The script then looks very familiar as it uses For Each to walk through the collection, gathering service information as it goes. Set colComputers = GetObject("LDAP://CN=Computers, DC=fabrikam, DC=com") For Each objComputer in colComputers strComputer = objComputer.CN What if your computer accounts aren't stored in the Computers container? Hey, no problem; the string LDAP://CN=Computers, DC=fabrikam, DC=com can be modified to connect to other containers in the directory. For example, this line of code connects to the Finance OU (note that you must use the syntax OU= rather than CN=): That's all we have time (and space) for this month. Any questions or comments, please write to us at [email protected] (in English, if possible). For a list and additional information on all Tales from the Script columns, click here.
https://technet.microsoft.com/en-us/library/ee692838
CC-MAIN-2017-51
refinedweb
4,083
61.16
19 April 2007 17:52 [Source: ICIS news] TORONTO (ICIS news)--Dow Chemical’s planned joint venture in Libya is a modest near-term positive but its longer-term potential lies in possible petrochemicals expansions in the North African country, analysts said on Thursday. ?xml:namespace> “Should the venture build a world-scale 900,000 tonne/year ethane cracker using low-cost Middle East ethane, assuming a 50% joint venture, we believe that the value to the Dow shareholder is in a range of $1.50-2.00 per share,” JPMorgan said in a research note to clients. The analysts said that the Libyan operations were currently relatively small, with a 330,000 tonne/year ethylene naphtha cracker and capacities of 170,000 tonnes/year of propylene and 160,000 tonnes/year of polyethylene. Deutsche Bank said in a note to clients that Dow may have obtained good terms from ?xml:namespace> Since Dow was the first Both Deutsche and JPMorgan said the Dow appeared to be the partner of choice of Middle East countries keen on expanding downstream chemical industries. Apart from Dow's shares were priced at $45.11/share, down 0.42% in Thursday morning trading in New Y
http://www.icis.com/Articles/2007/04/19/9022172/dows-libya-jv-has-long-term-value-analysts.html
CC-MAIN-2014-41
refinedweb
202
59.84
Bind Check Boxes in MVC After the last post on how to create check boxes that use the bootstrap “btn-group” to modify the look and feel of check boxes, I thought it would be good to show how to bind these check boxes using MVC. After all, you will most likely need to display check boxes based on data from a table. Figure 1: Check boxes should be bound to an entity class Musical Tastes Entity Class The first step is to have an entity (or model) class that contains the appropriate properties to bind to these check boxes. Below is a class I called MusicalTastes that simply has three Boolean properties that correspond to the three check boxes on the screen shown in Figure 1. public class MusicalTastes { public bool IsJazz { get; set; } public bool IsCountry { get; set; } public bool IsRock { get; set; } } View for Musical Tastes Create a .cshtml view and add a @model statement at the top of the page to bind to an instance of this MusicalTastes class. Use the @Html.CheckBoxFor() helper to bind to each property instead of the @Html.CheckBox() helper as you did in the last blog entry. @model BootstrapCheckBoxes2.MusicalTastes @using (Html.BeginForm()) { <div class="form-group"> <div class="btn-group" data- <label class="btn btn-primary"> <span class="glyphicon glyphicon-unchecked"></span> @Html.CheckBoxFor(m => m.IsJazz) Jazz </label> <label class="btn btn-primary"> <span class="glyphicon glyphicon-unchecked"></span> @Html.CheckBoxFor(m => m.IsCountry) Country </label> <label class="btn btn-primary"> <span class="glyphicon glyphicon-unchecked"></span> @Html.CheckBoxFor(m => m.IsRock) Rock </label> </div> </div> <div class="form-group"> <button type="submit" class="btn btn-success">Submit </button> </div> } Notice that the expressions you pass to the first parameter of this CheckBoxFor helper have the names of each of the properties in the MusicalTastes class. This is what binds this check box to each of the properties. Binding to Musical Tastes In the controller for this .cshtml page create an instance of the MusicalTastes class and set one or more of the properties to true in order to see the check box checked when the page displays. public ActionResult BindingTest() { MusicalTastes entity = new MusicalTastes(); entity.IsCountry = true; return View(entity); } jQuery for Musical Tastes In order to get the correct display for any property set to true you need to write some JavaScript/jQuery to toggle the glyphs. Below is the code you would add to the end of the $(document).ready(). Keep the same code you had in the previous blog post to toggle the check boxes when you click on each one, but add code that will run when the page loads as shown in the bold code below: @section scripts { <script> $(document).ready(function () { // Connect to 'change' event in order to toggle glyphs $("[type='checkbox']").change(function () { if ($(this).prop('checked')) { $(this).prev().addClass('glyphicon-ok-circle'); $(this).prev().removeClass('glyphicon-unchecked'); } else { $(this).prev().removeClass('glyphicon-ok-circle'); $(this).prev().addClass('glyphicon-unchecked'); } }); // Detect checkboxes that are checked and toggle glyphs var checked = $("input:checked"); checked.prev().removeClass('glyphicon-unchecked'); checked.prev().addClass('glyphicon-ok-circle'); }); </script> } This code selects all check boxes checked via the automatic data binding. It then removes the unchecked glyph and adds the ok-circle glyph to all those check boxes. Posting Back Musical Tastes Selected There is nothing to do to get the selected check boxes to post back to your entity class. Simply create a method in your controller with the [HttpPost] attribute. Pass in the entity class to this method and MVC will take care of matching the names of the check boxes to the appropriate properties in your entity class. [HttpPost] public ActionResult BindingTest(MusicalTastes entity) { System.Diagnostics.Debugger.Break(); return View(entity); } I added the Debugger.Break() statement so I can hover over the ‘entity’ variable and verify that the check boxes checked have been updated in the instance of the MusicalTastes class passed in. Summary Binding an entity class with boolean properties to a set of check boxes on a .cshtml is very easy to do. Simply create your class and use the @Html.CheckBoxFor() helper class to bind your check boxes to the appropriate properties. Add a little bit of client-side JavaScript/jQuery to toggle the glyphs and you have a very nice looking interface for your check box controls. Past Blog Content Blog Archive 2015 2014 (18) 2013 (11) 2012 (19) 2011 (29) 2010 (19) 2009 (28) 2008 (0) 2007 (14) 2006 (6)
https://weblogs.asp.net/psheriff/bind-check-boxes-in-mvc
CC-MAIN-2022-21
refinedweb
754
54.12
Torsten, although I've said that doing this with xslt is feasible I think I was wrong. So here is a transformer that does a) handle the complete xform binding spec and adds instance data as <value/> child element to form controls and b) adds a @name attribute to them containing the form name + "/" + path to instance data (e.g. name="order_form/address/street") It does not remove anything from the document, e.g. form declarations or @refs. The package name is a temporal convenience and will change. I did waste quite some time toying with an DTM representation, since this transformer is quite slow and DTM looked like it would speed it up a lot. E.g. creating the cached form declaration took half the time. Unfortunately I had to find out the hard way that a number of important API calls are not implemented yet.... :-( Since this transformer caches form declarations which also contain the model necessary for validation, I think it would be nice to "register" this model (+submitInfo) somewhere so that it is available when validation takes place. This doesn't need to be session bound (doesn't register instance data). This information would be useful also when determining which form has been submitted. Caching is done only during the lifetime of a transformer instance as I have no idea how to decide easily if a cached form declaration & instance data is still valid. Ideas? BTW if I understand the xforms WD correctly, form controls need to set the default namespace or child elements like <caption/>, <hint/> &c. need to explicitly carry their namespace. So your sample xform.xml needs to be changed: <xform:textbox <caption>City</caption> </xform:textbox> becomes <xform:textbox <caption>City</caption> </xform:textbox> I will next look into such a "registry" that keeps track of xform declarations and provides them to a validator. Obviously, I will go and see if I can use the code you posted earlier :-)
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200108.mbox/%[email protected]%3E
CC-MAIN-2014-23
refinedweb
327
61.46
Hi, I have a problem with journalctl and syslog output within a C program: When I use journalctl in follow mode: $ journalctl -f and I run the folowing program in another terminal #include <stdio.h> #include <syslog.h> int main() { FILE *pf; int i; /* logging made in file /var/log/syslog */ openlog("martins",LOG_CONS,LOG_USER); pf = fopen("not_here","r"); if (!pf) syslog(LOG_ERR | LOG_USER,"oops -- %m\n"); return 0; } no syslog error message appears in the journal. But, when I modify the programm by adding a 'sleep(1);' right before 'return 0;' there is a correct error message shown in the journal. Is this a bug in systemd? Or do I understand something wrong with systemd's journal? I'm actually running linux-3.10-12-1-lts systemd 207 with a 32Bit installation Thanks for any hints, Martin Last edited by thesofty (2013-10-01 19:10:13) Offline You may need to call closelog() for correct operation. Offline Nope, I tried it out. It didn't help. It's just the same. With some artificial delay, the message appears in the journal DB. Without, no message will arrive the journal DB, even if closelog() is called before. Meanwhile I could reproduce this behavior on 64Bit system too. The delay needs not to be a sleep(), an incrementing loop does the same job when the counting limit is high enough. On my system, a for (i=0;i<1000000;i++) ; before the exit() works. But a limit of 100000 instead of 1000000 is not sufficient. Offline Meanwhile I'm pretty sure that's a serious bug. Serious, because the user will not see the reasons when services terminate on his system because of some error conditions. Hence, I've file a bug ticket to freedesktop.org. Offline Now, I understand what's going on: 1) There is an known issue about a race condition in journald when the logging process exits. In this case the message isn't assigned to the user whose process generated the message. 2) Apparently the group of the journal files have been changed from 'adm' to 'systemd-journal' in Arch Linux around April 2013. But still I only have been member of 'adm' as described in the journal tutorial on. So I can only see syslog messages, which are correctly assigned to my own user account. The messages with the race condition, as mentioned above, have been invisible to me. After joining to the new right group again: $ sudo usermod -d G martin systemd-journal I also see the syslog messaes just before exiting the process again :-). Offline
https://bbs.archlinux.org/viewtopic.php?id=170495
CC-MAIN-2017-04
refinedweb
434
65.62
: - Version. - Version 0.5.0.1 - Update bounds on package constraints to try and get a successful build on ghc 7.2; removed parallel constraint as not used. - Version 0.5.0.0 - The constructors for ScopedName and QName have been removed to hide some experimental optimisations (partly added in 0.4.0.0); Namespace has seen a similar change but no optimisation. Output speed should be improved but no systematic analysis has been performed. - Version 0.4.0.0 - Moving to using polyparse for parsing and Text rather than String where appropriate. Use of URI and Maybe Text rather than String in the Namespace type. Removed the Swish.Utils.DateTime and Swish.Utils.TraceHelpers modules. Symbols have been removed from the export lists of the following modules: Swish.Utils.LookupMap, Swish.Utils.ListHelpers, Swish.Utils.MiscHelpers, Swish.Utils.ShowM. Some significant improvements to parsing speed, but no concerted effort or checks made yet. - Version 0.3.2.1 - Marked a number of routines from the Swish.Utils modules as deprecated. Use foldl' rather than foldl. - Version 0.3.2.0 - The N3 parser no longer assumes a set of pre-defined namespaces. There is no API change worthy of a bump to the minor version number, but it is a large-enough change in behaviour that I felt the need for the update. -: - No changelog available Properties Modules - Data - Interned -.6.0.1.tar.gz [browse] (Cabal source package) - Package description (included in the package) Maintainers' corner For package maintainers and hackage trustees
http://hackage.haskell.org/package/swish-0.6.0.1
CC-MAIN-2014-42
refinedweb
254
59.3
An adaptable unary function used to create objects using the clone() method. More... #include <utilities/memutils.h> An adaptable unary function used to create objects using the clone() method. This class is for use with the Standard Template Library. Note that the template argument need not be a pointer class. If the template argument is T, this unary function will accept a pointer to T and call clone() upon the corresponding object, returning a pointer to the newly created clone of type T. Thas method T* clone() const. The declared return type may be different, but the result must be castable to T*. The argument type for this unary function. The return type for this unary function. Creates a new object using the clone() method.
http://regina.sourceforge.net/engine-docs/structregina_1_1FuncNewClonePtr.html
CC-MAIN-2014-10
refinedweb
125
75.91
Quoting Andrew Morton ([email protected]):> On Fri, 16 Jan 2009 20:02:48 -0600> "Serge E. Hallyn" <[email protected]> wrote:> > > IPC namespaces are completely disjoint id->object mappings.> > A task can pass CLONE_NEWIPC to unshare and clone to get> > a new, empty, IPC namespace. Until now this has supported> > SYSV IPC.> > > > Most Posix IPC is done in userspace. The posix mqueue> > support, however, is implemented on top of the mqueue fs.> > > > This patchset implements multiple mqueue fs instances,> > one per IPC namespace to be precise.> > > > To create a new ipc namespace with posix mq support, you> > should now:> > > > unshare(CLONE_NEWIPC|CLONE_NEWNS);> > umount /dev/mqueue> > mount -t mqueue mqueue /dev/mqueue> > > > It's perfectly valid to do vfs operations on files> > in another ipc_namespace's /dev/mqueue, but any use> > of mq_open(3) and friends will act in your own ipc_ns.> > After the ipc namespace has exited, you can still> > unlink but no longer create files in that fs (since> > accounting is carried.> > > > Changelog:> > v14: (Jan 16 2009) port to linux-next> > v13: (Dec 28 2009)> > 1. addressed comments by Dave and Suka> > 2. ported Cedric's patch to make posix mq sysctls> > per-namespace> > > > When convenient, it would be great to see this tested> > in -mm.> > hm. Who is going to test it?Everyone using posix mq with an -mm kernel :)There are ltp testcases which I hope can be pushed once thesepatches appear headed upstream.thanks,-serge
https://lkml.org/lkml/2009/1/27/327
CC-MAIN-2017-09
refinedweb
242
73.58
Problem Statement In this problem, we are given a linked list and we have to swap the elements in a pairwise manner. It is not allowed to swap the data, only links should be changed. Let me explain this with an example - if the linked list is then the function should change it to Well, this is one of the most popular questions to be asked in interviews and may seem a bit difficult at first but it is easy to comprehend. Problem Statement Understanding Let's first understand the problem statement with the help of an example. Linked list = 1->2->3->4->5, The term Pairwise swap indicates that we need to swap the positions of the adjacent two nodes in the linked list. So, - 1->2 will become 2->1 - 3->4 will become 4->3 - 5 has no one to be paired with hence it remains as it is. Finally the linked list will be = 2->1->4->3->5. Let's take an example with an even length linked list. Linked list = 4->1->6->3->8->9 So performing pairwise swap, - 4->1 will become 1->4. - 6->3 will become 3->6. - 8->9 will become 9-8. Finally the linked list will be = 1->4->3->6->9->8. You should take more examples and get the output according to the above understanding. Analyzing different examples will help you create the logic for this question. Approach I hope you got a basic idea on what we need to do to solve this question. The idea is simple, we have to change the links of the nodes alternatively for every 2 nodes. We will traverse the linked list from the beginning and for every two nodes, we will change the pointers of the next nodes to previous nodes. Since it is clear what we need to do, take some time and think on how we are going to do it. Helpful Observations Since we need to swap two nodes, that is change the links, we should have two pointers pointing on the nodes. Suppose linked list = &1->2->3->4,and two pointers be prev and curr. - The prev pointing to the node(1) and curr pointing to the node(2). - We can see that we need to change curr->next and point it to prev i.e 2->1. - Also finally the linked list will be 2->1->4->3 i.e. the next of nodes with value 1 will point to the node with value 4, so that means prev->next should be curr->next->next. (Think why this step should be done before step 2). - Step 3 should be done before Step 2 because we need to access node(4) from node(2) but in step 2, node(2)->next is changed to node(1), hence connection to node(4) is lost. - Or we can simply store in temporary node node(2)->next and follow the above steps in given order. There are some corner conditions to handle, take some examples, use the above observations and try to get those corner conditions. Below is the proper algorithm for the question. Algorithm - Initialize prev and curr pointers. - Traverse the list, store in temp node the value of curr->next and change next of curr as of the prev node. - If temp is NULL or temp is the last node then change prev->next to NULL and break the iteration. (Above mentioned corner conditions). - Else we have to change next of prev to next of next of curr. - Update prev and curr nodes for next iteration. Code Implementation The most efficient way to Pairwise swap elements of a given linked list #include using namespace std; class node { public: int data; node* next; }; node* pairWiseSwap(node* head) { if (head == NULL || head->next == NULL) return head; node* prev = head; node* curr = head->next; head = curr; while (true) { node* temp = curr->next; curr->next = prev; if (temp == NULL || temp->next == NULL) { prev->next = temp; break; } prev->next = temp->next; prev = temp; curr = prev->next; } return head; } void push(node** head_ref, int new_data) { node* new_node = new node(); new_node->data = new_data; new_node->next = (*head_ref); (*head_ref) = new_node; } void print(node* node) { while (node != NULL) { cout << node->data << " "; node = node->next; } } int main() { node* start = NULL; push(&start, 5); push(&start, 4); push(&start, 3); push(&start, 2); push(&start, 1); start = pairWiseSwap(start); print(start); return 0; } Output: 2 1 4 3 5 Space Complexity: O(1), constant space required as no extra space is used. [forminator_quiz id="3466"] In this blog, we have discussed how to swap pairwise nodes and return the modified linked list by changing the links of the nodes directly in the most efficient way. This is a quite popular interview problem and advised to practice this and understand how to solve similar problems. To practice, similar types of problems check out PrepBytes - MYCODE | Contest
https://www.prepbytes.com/blog/linked-list/the-most-efficient-way-to-pairwise-swap-elements-of-a-given-linked-list/
CC-MAIN-2022-21
refinedweb
821
78.08
Since a lot of my future posts are going to be about NUnit, I want to make sure that you all have the basics of NUnit covered before we start talking in depth about the framework. In this post, I am going to go through how to build your first unit test. I am going to make some assumptions in this post, primarily that you have the basic understanding of .NET development and that you've used Visual Studio before. If that is all new to you, check out Microsoft's site on Visual Studio for getting started - Getting Started Prerequisites: - .NET Framework - Download - IDE (Visual Studio and Visual Studio Code are what I use) - Download - MSBuild (Only needed if you don't have Visual Studio) - Download Before we begin, I want to point out the NUnit landing page on GitHub - NUnit. This is going to be where we get all the necessary code, executables, code samples, and documentation for using NUnit. I will be referencing their GitHub frequently for documentation, and it's just a good place to have bookmarked for when you are developing with NUnit. Method Under Test Now let's get started! In this example, I am going to have two projects. One project that holds the code and one that holds the tests. - BlogSamples - BlogSamples.NUnit I am going to use the classic programming example of 'Hello, World.' In BlogSamples, we have a class HelloWorld and a method Hello. public static string Hello(string greeting) { return $"Hello, {greeting}."; } Hello is a simple method that just takes in a string, mutates it, and returns the mutated string. So Hello("Bojangles") would return, Hello, Bojangles. Now to set up our tests for Hello. Installing NUnit In order to write our NUnit tests for Hello, we need to add NUnit framework to our BlogSamples.NUnit project. The easiest way is going to be using Nuget. All the ways to install NUnit are listed here - Install Inside Visual Studio, go to Tools -> NuGet Package Manager -> Package Manager Console. You will see the package manager console open at the bottom of the development window. In the window, type Install-Package -Project BlogSamples.NUnit NUnit and hit return. This will grab the latest stable build of NUnit and install it to our BlogSamples.NUnit project. You will see Successfully installed 'NUnit 3.8.1' to BlogSamples.NUnit in the console if it successfully installed. Under references, you should see that you now have a reference to nunit.framework. This will be what we will be using to build up our tests. Writing the Test Now that we have our reference to NUnit, we need to add a project reference to the BlogSamples project. This will allow us to instantiate our HelloWorld class to test it. I'll point you to this page if you don't know how to add a reference - Add Reference. When we have all the references we need, we can start writing the tests for our Hello method. When I am writing unit tests, I like to make a class with the same name of the class that we are going to be testing. In BlogSamples, we have our Hello method in the HelloWorld class. So we will create the same Hello method in the HelloWorld class inside of the BlogSamples.NUnit project. Using this naming convention makes it very easy for discovering where the unit tests are for a particular class. Once we create our Hello test method, we want to add the using statement for NUnit - using Nunit.Framework. Then we will declare our HelloWorld class as an NUnit TestFixture. We do this by adding the [TestFixture] attribute to the class. Now we have something that looks like this... using NUnit.Framework; namespace BlogSamples.NUnit { [TestFixture] public class HelloWorld { } } Now that we have laid out the structure of the test class, we can add our first test case. Our first iteration of our test might look like this... [Test] public void Hello() { string greeting = BlogSamples.HelloWorld.Hello("get-testy"); Assert.That(greeting.Equals("Hello, get-testy.", StringComparison.Ordinal)); } For those that have never seen an NUnit test, it can seem like a lot is going on in this method. Let's break down what comprises an nunit test. - The first thing is that we decorate the test method with the [Test]attribute. It is one of many attributes that you can put on a method to classify it as a test. In later posts, I will go over all the different ways that you can define tests. For now, the most basic way to create a test is to use the [Test]attribute. More info can be found here on Attributes. - Since we are testing the Hellomethod in the HelloWorld class, we actually want to call that method with something that we define. This way we know exactly what the outcome with be. In this instance, we are sending in "get-testy"to Hello, so we would expect our method to return "Hello, get-testy.". - After we call the Method Under Test, we want to verify the results from method. We do this in the Assertion. In NUnit3, they have changed their Assertion model to be constraint based. This is in favor of their classic model, where in our case we might use Assert.AreEqual("Hello, get-testy.", greeting). Using the constraint based model gives a lot more freedom to do what we want inside the assertions. More info can be found here on Assertions. Running the Test There are a couple ways that we can run NUnit tests. Depending on your needs, there will always be a better way to run your tests. - If you are going to be actively writing tests, it is probably the best to use the Visual Studio adapter. That way when you are writing your code and your tests, you don't need to leave your IDE in order to run them. - If you are going to be running your tests nightly, or during your build process, you are going to want to use the nunit console. - If you are going to want to run all existing tests, not making any changes, against your code changes, you might use the GUI. **The NUnit3 GUI is still in preview, so I might wait to use this until it gets fully release. Let's just use the Visual Studio adapter for this example. We can install the extension into our Visual Studio environment by going to Tools -> Extension Manager. Search for NUnit 3.0 Test Adapter and click install. I had to restart my Visual Studio in order for the installation to complete, it might be different for you. Just follow the directions that it gives you. Once you have the extension installed, we want to open up the Test Explorer. If you don't see it, go to Test -> Windows -> Test Explorer. By default, when you first open Visual Studio, it won't have any tests loaded. Once you build your code, you should see the new test we created. Now it is as simple as right-clicking on the Hello test, and selecting Run Selected Tests. Now we should have everything set up in our development environment. We can start getting into more advanced topics. Let me know if this was helpful, too in-depth, not enough in-depth, etc. Please share your thoughts!
https://get-testy.com/getting-started-with-nunit/
CC-MAIN-2018-22
refinedweb
1,235
73.88
I have some code to export all files within a zipfile to a path but what I want to do is create a new folder with the same name as the zipfile minus the ".zip" just like the windows explorer option does. I have commented out the code that doesn’t work. It seems to be .. Category : zip AH! I’m new to Python. Trying to get the pattern here, but could use some assistance to get unblocked. Scenario: testZip.zip file with test.rpt files inside The .rpt files have multiple areas of interest ("AOI") to parse AOI1: Line starting with $$ADD AOI2: Line starting with # test.local AOI3: Lines starting with single $ AOI4: .. I have many zip files which are big. Inside each zip file I have many XML files all of which have the same structure. I want to extract specific information under certain namespaces from each of this XML files. I was able to hardcode it with ElementTree for the case of one file. But I .. I have a url_file.txt file with few links to download the zip files directly. The content of url_file.txt file has 6 lines. Source: Python.. Olá! Preciso ler um grande CSV, quebrá-lo em CSVs de 1000 linhas, armazená-los em memória e então zerar um zip com estes arquivos menores. Este é o código até o momento: import pandas as pd from io import BytesIO, StringIO import gzip csvfile = pd.read_csv(‘file.csv’) buffer = BytesIO() f = StringIO() with gzip.open(buffer, ‘wb’) as .. I am trying to extract an xls file from a zip folder with python. The issue I’m having is the system that generates the report generates it as an xls file and there isn’t anything I can do about that. I get this error with using zipfile: import zipfile zip_path = ‘file_path’ folder = ‘folder_path’ .. I have a file structure something like this: /a.zip /not_a_zip/ contents /b.zip contents and I want to create a directory a and extract a.zip into it and all the nested zipped files where they are so I get something like this: /a/ /not_a_zip/ contents /b/ contents I tried this solution, but I was getting errors .. Is there a way to use Python to unrar or unzip to extract a large file without loading its whole content to memory? Source: Python-3x.. Is there a way to use Python unrar package to extract a large file without loading its whole content to memory? I can use binary os.system(f’unrar x -y {file_path} {destination} > /dev/null’) But I’m looking for a more Pythonic way to do it (without using external binaries) Source: Python-3x.. i m running following brute force code in python 3.5 for cracking zip file: import zipfile numberlist = ‘abcdefghijklmnopqrstuvwxyz’ complete = [] for current in range (4): a = [i for i in numberlist] for x in range (current): a = [y + i for i in numberlist for y in a] complete = complete .. Recent Comments
https://askpythonquestions.com/category/zip/
CC-MAIN-2021-04
refinedweb
499
75.1
feof() Test a stream's end-of-file flag Synopsis: #include <stdio.h> int feof( FILE* fp ); Arguments: - fp - The stream you want to test. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The feof() function tests the end-of-file flag for the stream specified by fp. Because the end-of-file flag is set when an input operation attempts to read past the end-of-file, the feof() function detects the end-of-file only after an attempt is made to read beyond the end-of-file. Thus, if a file contains 10 lines, the feof() won't detect the end-of-file after the tenth line is read; it will detect the end-of-file on the next read operation. Returns: 0 if the end-of-file flag isn't set, or nonzero if the end-of-file flag is set. Examples: #include <stdio.h> #include <stdlib.h> void process_record( char *buf ) { printf( "%s\n", buf ); } int main( void ) { FILE *fp; char buffer[100]; fp = fopen( "file", "r" ); fgets( buffer, sizeof( buffer ), fp ); while( ! feof( fp ) ) { process_record( buffer ); fgets( buffer, sizeof( buffer ), fp ); } fclose( fp ); return EXIT_SUCCESS; } Classification: Last modified: 2013-12-23
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/feof.html
CC-MAIN-2014-10
refinedweb
209
64.81
All users were logged out of Bugzilla on October 13th, 2018 Testing object with instanceof against its type returns false if type defined within a namespace . RESOLVED WORKSFORME Status () People (Reporter: andorsalga, Unassigned) Tracking Firefox Tracking Flags (Not tracked) Details (Whiteboard: [WFM?]) Attachments (2 attachments) User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 Build Identifier: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.3a1pre) Gecko/20091006 Minefield/3.7a1pre Creating an instance of a class which is in a 'namespace' and then testing it with instanceof returns false in Minefield 3.7a1pre. If the class isn't in a namespace, the conditional returns true. Reproducible: Always Steps to Reproduce: 1. create a namespace var ns = {}; 2. create a class within that namespace. ns.test = function(){} 3. create an instance of that class var iTest = new ns.test(); 4. test if instance instanceof type if(iTest instanceof ns.test) { // will not run } Actual Results: instanceof return false Expected Results: instanceof should return true instanceof returns true if the class isn't in a namespace. Created attachment 406894 [details] test prints 2 lines. Both should be positive. Script has 2 tests. first test check if object is instanceof type. Other test checks if object is instanceof type within a namespace. Second test fails. Summary: Testing object with instanceof against its class returns false if class is in a namespace. → Testing object with instanceof against its type returns false if type defined within a namespace. Version: unspecified → Trunk This WFM in Fx3 and 3.5 (1.9.4). Any add-ons? Anyone able to confirm? The shell testcase: var ns = {}; ns.test = function(){}; var iTest = new ns.test(); print(iTest instanceof ns.test); prints true as well, tm and m-c shells. /be It works fine in FF3.5, but not in FF 3.7a1pre. I had a JavaScript Debugger. I uninstalled it, but I get the same result. I ran this code in the error console: var ns = {}; ns.test = function(){}; var iTest = new ns.test(); if(iTest instanceof ns.test){alert('PASS');} and it strangely returned true. Created attachment 406914 [details] only tests class in namespace. Prints either PASS or FAIL Same as first test, but simply prints either PASS or FAIL. Please disregard comment #4, I was running this in 3.5 by mistake. The test fails in 3.7a1pre when running it in the error console as well as the JavaScript shell. I did an hg pull and did a full build, but I still have the same problem. This appears to pass for me in Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.3a1pre) Gecko/20091016 Minefield/3.7a1pre I tested with steps from comment 4 and it passes for me too: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.3a4pre) Gecko/20100403 Minefield/3.7a4pre Andor, does it work for you now? Whiteboard: [WFM?] Still works. Status: UNCONFIRMED → RESOLVED Last Resolved: 5 years ago Resolution: --- → WORKSFORME
https://bugzilla.mozilla.org/show_bug.cgi?id=522924
CC-MAIN-2018-43
refinedweb
529
80.38
Python provides extensive support in its standard library for working with email (and newsgroup) messages. There are three general aspects to working with email, each supported by one or more Python modules. Communicating with network servers to actually transmit and receive messages. The modules poplib, imaplib, smtplib, and nntplib each address the protocol contained in its name. These tasks do not have a lot to do with text processing per se, but are often important for applications that deal with email. The discussion of each of these modules is incomplete, addressing only those methods necessary to conduct basic transactions in the case of the first three modules/protocols. The module nntplib is not documented here under the assumption that email is more likely to be automatically processed than are Usenet articles. Indeed, robot newsgroup posters are almost always frowned upon, while automated mailing is frequently desirable (within limits). Examining the contents of message folders. Various email and news clients store messages in a variety of formats, many providing hierarchical and structured folders. The module mailbox provides a uniform API for reading the messages stored in all the most popular folder formats. In a way, imaplib serves an overlapping purpose, insofar as an IMAP4 server can also structure folders, but folder manipulation with IMAP4 is discussed only cursorily?that topic also falls afield of text processing. However, local mailbox folders are definitely text formats, and mailbox makes manipulating them a lot easier. The core text processing task in working with email is parsing, modifying, and creating the actual messages. RFC-822 describes a format for email messages and is the lingua franca for Internet communication. Not every Mail User Agent (MUA) and Mail Transport Agent (MTA) strictly conforms to the RFC-822 (and superset/clarification RFC-2822) standard?but they all generally try to do so. The newer email package and the older rfc822, rfc1822, mimify, mimetools, MimeWriter, and multifile modules all deal with parsing and processing email messages. Although existing applications are likely to use rfc822, mimify, mimetools, MimeWriter, and multifile, the package email contains more up-to-date and better-designed implementations of the same capabilities. The former modules are discussed only in synopsis while the various subpackages of email are documented in detail. There is one aspect of working with email that all good-hearted people wish was unnecessary. Unfortunately, in the real-world, a large percentage of email is spam, viruses, and frauds; any application that works with collections of messages practically demands a way to filter out the junk messages. While this topic generally falls outside the scope of this discussion, readers might benefit from my article, "Spam Filtering Techniques," at: <> A flexible Python project for statistical analysis of message corpora, based on naive Bayesian and related models, is SpamBayes: <> Without repeating the whole of RFC-2822, it is worth mentioning the basic structure of an email or newsgroup message. Messages may themselves be stored in larger text files that impose larger-level structure, but here we are concerned with the structure of a single message. An RFC-2822 message, like most Internet protocols, has a textual format, often restricted to true 7-bit ASCII. A message consists of a header and a body. A body in turn can contain one or more "payloads." In fact, MIME multipart/* type payloads can themselves contain nested payloads, but such nesting is comparatively unusual in practice. In textual terms, each payload in a body is divided by a simple, but fairly long, delimiter; however, the delimiter is pseudo-random, and you need to examine the header to find it. A given payload can either contain text or binary data using base64, quoted printable, or another ASCII encoding (even 8-bit, which is not generally safe across the Internet). Text payloads may either have MIME type text/* or compose the whole of a message body (without any payload delimiter). An RFC-2822 header consists of a series of fields. Each field name begins at the beginning of a line and is followed by a colon and a space. The field value comes after the field name, starting on the same line, but potentially spanning subsequence lines. A continued field value cannot be left aligned, but must instead be indented with at least one space or tab. There are some moderately complicated rules about when field contents can split between lines, often dependent upon the particular type of value a field holds. Most field names occur only once in a header (or not at all), and in those cases their order of occurrence is not important to email or news applications. However, a few field names?notably Received?typically occur multiple times and in a significant order. Complicating headers further, field values can contain encoded strings from outside the ASCII character set. The most important element of the email package is the class email.Message.Message, whose instances provide a data structure and convenience methods suited to the generic structure of RFC-2822 messages. Various capabilities for dealing with different parts of a message, and for parsing a whole message into an email.Message.Message object, are contained in subpackages of the email package. Some of the most common facilities are wrapped in convenience functions in the top-level namespace. A version of the email package was introduced into the standard library with Python 2.1. However, email has been independently upgraded and developed between Python releases. At the time this chapter was written, the current release of email was 2.4.3, and this discussion reflects that version (and those API details that the author thinks are most likely to remain consistent in later versions). I recommend that, rather than simply use the version accompanying your Python installation, you download the latest version of the email package from <> if you intend to use this package. The current (and expected future) version of the email package is directly compatible with Python versions back to 2.1. See this book's Web site, <>, for instructions on using email with Python 2.0. The package is incompatible with versions of Python before 2.0. Several children of email.Message.Message allow you to easily construct message objects with special properties and convenient initialization arguments. Each such class is technically contained in a module named in the same way as the class rather than directly in the email namespace, but each is very similar to the others. Construct a message object with a Content-Type header already built. Generally this class is used only as a parent for further subclasses, but you may use it directly if you wish: >>> mess = email.MIMEBase.MIMEBase('text','html',charset='us-ascii') >>> print mess From nobody Tue Nov 12 03:32:33 2002 Content-Type: text/html; charset="us-ascii" MIME-Version: 1.0 Child of email.MIMEBase.MIMEBase, but raises MultipartConversionError on calls to .attach(). Generally this class is used for further subclassing. Construct a multipart message object with subtype subtype. You may optionally specify a boundary with the argument boundary, but specifying None will cause a unique boundary to be calculated. If you wish to populate the message with payload object, specify them as additional arguments. Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEBase import MIMEBase >>> from email.MIMEMultipart import MIMEMultipart >>> mess = MIMEBase('audio','midi') >>> combo = MIMEMultipart('mixed', None, mess, charset='utf-8') >>> print combo From nobody Tue Nov 12 03:50:50 2002 Content-Type: multipart/mixed; charset="utf-8"; boundary="===============5954819931142521==" MIME-Version: 1.0 --===============5954819931142521== Content-Type: audio/midi MIME-Version: 1.0 --===============5954819931142521==-- Construct a single part message object that holds audio data. The audio data stream is specified as a string in the argument audiodata. The Python standard library module sndhdr is used to detect the signature of the audio subtype, but you may explicitly specify the argument subtype instead. An encoder other than base64 may be specified with the encoder argument (but usually should not be). Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEAudio import MIMEAudio >>> mess = MIMEAudio(open('melody.midi').read()) SEE ALSO: sndhdr 397; Construct a single part message object that holds image data. The image data is specified as a string in the argument imagedata. The Python standard library module imghdr is used to detect the signature of the image subtype, but you may explicitly specify the argument subtype instead. An encoder other than base64 may be specified with the encoder argument (but usually should not be). Keyword arguments are taken as parameters to the Content-Type header. >>> from email.MIMEImage import MIMEImage >>> mess = MIMEImage(open('landscape.png').read()) SEE ALSO: imghdr 396; Construct a single part message object that holds text data. The data is specified as a string in the argument text. A character set may be specified in the charset argument: >>> from email.MIMEText import MIMEText >>> mess = MIMEText(open('TPiP.tex').read(),'latex') Return a message object based on the message text contained in the file-like object file. This function call is exactly equivalent to: SEE ALSO: email.Parser.Parser.parse() 363; Return a message object based on the message text contained in the string s. This function call is exactly equivalent to: SEE ALSO: email.Parser.Parser.parsestr() 363; The module email.Encoder contains several functions to encode message bodies of single part message objects. Each of these functions sets the Content-Transfer-Encoding header to an appropriate value after encoding the body. The decode argument of the .get_payload() message method can be used to retrieve unencoded text bodies. Encode the message body of message object mess using quoted printable encoding. Also sets the header Content-Transfer-Encoding. Encode the message body of message object mess using base64 encoding. Also sets the header Content-Transfer-Encoding. Set the Content-Transfer-Encoding to 7bit or 8bit based on the message payload; does not modify the payload itself. If message mess already has a Content-Transfer-Encoding header, calling this will create a second one?it is probably best to delete the old one before calling this function. SEE ALSO: email.Message.Message.get_payload() 360; quopri 162; base64 158; Exceptions within the email package will raise specific errors and may be caught at the desired level of generality. The exception hierarchy of email.Errors is shown in Figure 5.1. SEE ALSO: exceptions 44; The module email.Generator provides support for the serialization of email.Message.Message objects. In principle, you could create other tools to output message objects to specialized formats?for example, you might use the fields of an email.Message.Message object to store values to an XML format or to an RDBMS. But in practice, you almost always want to write message objects to standards-compliant RFC-2822 message texts. Several of the methods of email.Message.Message automatically utilize email.Generator. Construct a generator instance that writes to the file-like object file. If the argument mangle_from_ is specified as a true value, any occurrence of a line in the body that begins with the string From followed by a space is prepended with >. This (non-reversible) transformation prevents BSD mailboxes from being parsed incorrectly. The argument maxheaderlen specifies where long headers will be split into multiple lines (if such is possible). Construct a generator instance that writes RFC-2822 messages. This class has the same initializers as its parent email.Generator.Generator, with the addition of an optional argument fmt. The class email.Generator.DecodedGenerator only writes out the contents of text/* parts of a multipart message payload. Nontext parts are replaced with the string fmt, which may contain keyword replacement values. For example, the default value of fmt is: [Non-text (%(type)s) part of message omitted, filename %(filename)s] Any of the keywords type, maintype, subtype, filename, description, or encoding may be used as keyword replacements in the string fmt. If any of these values is undefined by the payload, a simple description of its unavailability is substituted. Return a copy of the instance with the same options. Write an RFC-2822 serialization of message object mess to the file-like object the instance was initialized with. If the argument unixfrom is specified as a true value, the BSD mailbox From_ header is included in the serialization. Write the string s to the file-like object the instance was initialized with. This lets a generator object itself act in a file-like manner, as an implementation convenience. SEE ALSO: email.Message 355; mailbox 372; The module email.Charset provides fine-tuned capabilities for managing character set conversions and maintaining a character set registry. The much higher-level interface provided by email.Header provides all the capabilities that almost all users need in a friendlier form. The basic reason why you might want to use the email.Header module is because you want to encode multinational (or at least non-US) strings in email headers. Message bodies are somewhat more lenient than headers, but RFC-2822 headers are still restricted to using only 7-bit ASCII to encode other character sets. The module email.Header provides a single class and two convenience functions. The encoding of non-ASCII characters in email headers is described in a number of RFCs, including RFC-2045, RFC-2046, RFC-2047, and most directly RFC-2231. Construct an object that holds the string or Unicode string s. You may specify an optional charset to use in encoding s; absent any argument, either us-ascii or utf-8 will be used, as needed. Since the encoded string is intended to be used as an email header, it may be desirable to wrap the string to multiple lines (depending on its length). The argument maxlinelen specifies where the wrapping will occur; header_name is the name of the header you anticipate using the encoded string with?it is significant only for its length. Without a specified header_name, no width is set aside for the header field itself. The argument continuation_ws specified what whitespace string should be used to indent continuation lines; it must be a combination of spaces and tabs. Instances of the class email.Header.Header implement a .__str__() method and therefore respond to the built-in str() function and the print command. Normally the built-in techniques are more natural, but the method email.Header.Header.encode() performs an identical action. As an example, let us first build a non-ASCII string: >>> from unicodedata import lookup >>> lquot = lookup("LEFT-POINTING DOUBLE ANGLE QUOTATION MARK") >>> rquot = lookup("RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK") >>> s = lquot + "Euro-style" + rquot + " quotation" >>> s u'\xabEuro-style\xbb quotation' >>> print s.encode('iso-8859-1') Euro-style quotation Using the string s, let us encode it for an RFC-2822 header: >>> from email.Header import Header >>> print Header(s) =?utf-8?q?=C2=ABEuro-style=C2=BB_quotation?= >>> print Header(s,'iso-8859-1') =?iso-8859-1?q?=ABEuro-style=BB_quotation?= >>> print Header(s, 'utf-16') =?utf-16?b?/v8AqwBFAHUAcgBvACOAcwBOAHkAbABl?= =?utf-16?b?/v8AuwAgAHEAdQBvAHQAYQBOAGkAbwBu?= >>> print Header(s,'us-ascii') =?utf-8?q?=C2=ABEuro-style=C2=BB_quotation?= Notice that in the last case, the email.Header.Header initializer did not take too seriously my request for an ASCII character set, since it was not adequate to represent the string. However, the class is happy to skip the encoding strings where they are not needed: >>> print Header('"US-style" quotation') "US-style" quotation >>> print Header('"US-style" quotation','utf-8') =?utf-8?q?=22US-style=22_quotation?= >>> print Header('"US-style" quotation','us-ascii') "US-style" quotation Add the string or Unicode string s to the end of the current instance content, using character set charset. Note that the charset of the added text need not be the same as that of the existing content. >>> subj = Header(s,'latin-1',65) >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?= >>> unicodedata.name(omega), unicodedata.name(Omega) ('GREEK SMALL LETTER OMEGA', 'GREEK CAPITAL LETTER OMEGA') >>> subj.append(', Greek: ', 'us-ascii') >>> subj.append(Omega, 'utf-8') >>> subj.append(omega, 'utf-16') >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?=, Greek: =?utf-8?b?zqk=?= =?utf-16?b?/v8DyQ==?= >>> unicode(subj) u'\xabEuro-style\xbb quotation, Greek: \u03a9\u03c9' Return an ASCII string representation of the instance content. Return a list of pairs describing the components of the RFC-2231 string held in the header object header. Each pair in the list contains a Python string (not Unicode) and an encoding name. >>> email.Header.decode_header(Header('spam and eggs')) [('spam and eggs', None)] >>> print subj =?iso-8859-1?q?=ABEuro-style=BB_quotation?=, Greek: =?utf-8?b?zqk=?= =?utf-16?b?/v8DyQ==?= >>> for tup in email.Header.decode_header(subj): print tup ... ('\xabEuro-style\xbb quotation', 'iso-8859-1') (', Greek:', None) ('\xce\xa9', 'utf-8') ('\xfe\xff\x03\xc9', 'utf-16') These pairs may be used to construct Unicode strings using the built-in unicode() function. However, plain ASCII strings show an encoding of None, which is not acceptable to the unicode() function. >>> for s,enc in email.Header.decode_header(subj): ... enc = enc or 'us-ascii' ... print `unicode(s, enc)' ... u'\xabEuro-style\xbb quotation' u', Greek:' u'\u03a9' u'\u03c9' SEE ALSO: unicode() 423; email.Header.make_header() 354; Construct a header object from a list of pairs of the type returned by the function email.Header.decode-header(). You may also, of course, easily construct the list decoded_seq manually, or by other means. The three arguments maxlinelen, header_name, and continuation_ws are the same as with the email.Header.Header class. SEE ALSO: email.Header.decode_header() 353; email.Header.Header 351; The module email.Iterators provides several convenience functions to walk through messages in ways different from email.Message.Message.get_payload() or email.Message.Message.walk(). Return a generator object that iterates through each content line of the message object mess. The entire body that would be produced by str(mess) is reached, regardless of the content types and nesting of parts. But any MIME delimiters are omitted from the returned lines. >>> import email.MIMEText, email.Iterators >>> mess1 = email.MIMEText.MIMEText('message one') >>> mess2 = email.MIMEText.MIMEText('message two') >>> combo = email.Message.Message() >>> combo.set_type('multipart/mixed') >>> combo.attach(mess1) >>> combo.attach(mess2) >>> for line in email.Iterators.body_line_iterator(combo): ... print line ... message one message two Return a generator object that iterates through each subpart of message whose type matches maintype. If a subtype subtype is specified, the match is further restricted to maintype/subtype. Write a "pretty-printed" representation of the structure of the body of message mess. Output to the file-like object file. SEE ALSO: email.Message.Message.get_payload() 360; email.Message.Message.walk() 362; A message object that utilizes the email.Message module provides a large number of syntactic conveniences and support methods for manipulating an email or news message. The class email.Message.Message is a very good example of a customized datatype. The built-in str() function?and therefore also the print command?cause a message object to produce its RFC-2822 serialization. In many ways, a message object is dictionary-like. The appropriate magic methods are implemented in it to support keyed indexing and assignment, the built-in len() function, containment testing with the in keyword, and key deletion. Moreover, the methods one expects to find in a Python dict are all implemented by email.Message.Message:has_key(), .keys(), .values (), .items(), and .get(). Some usage examples are helpful: >>> import mailbox, email, email.Parser >>> mbox = mailbox.PortableUnixMailbox(open('mbox'), ... email.Parser.Parser().parse) >>> mess = mbox.next() >>> len(mess) # number of headers 16 >>> 'X-Status' in mess # membership testing 1 >>> mess.has_key('X-AGENT') # also membership test 0 >>> mess['x-agent'] = "Python Mail Agent" >>> print mess['X-AGENT'] # access by key Python Mail Agent >>> del mess['X-Agent'] # delete key/val pair >>> print mess['X-AGENT'] None >>> [fld for (fld,val) in mess.items() if fld=='Received'] ['Received', 'Received', 'Received', 'Received', 'Received'] This is dictionary-like behavior, but only to an extent. Keys are case-insensitive to match email header rules. Moreover, a given key may correspond to multiple values?indexing by key will return only the first such value, but methods like .keys(), .items(), or .get_all() will return a list of all the entries. In some other ways, an email.Message.Message object is more like a list of tuples, chiefly in guaranteeing to retain a specific order to header fields. A few more details of keyed indexing should be mentioned. Assigning to a keyed field will add an additional header, rather than replace an existing one. In this respect, the operation is more like a list.append() method. Deleting a keyed field, however, deletes every matching header. If you want to replace a header completely, delete first, then assign. The special syntax defined by the email.Message.Message class is all for manipulating headers. But a message object will typically also have a body with one or more payloads. If the Content-Type header contains the value multipart/*, the body should consist of zero or more payloads, each one itself a message object. For single part content types (including where none is explicitly specified), the body should contain a string, perhaps an encoded one. The message instance method .get_payload(), therefore, can return either a list of message objects or a string. Use the method .is_multipart() to determine which return type is expected. As the epigram to this chapter suggests, you should strictly follow content typing rules in messages you construct yourself. But in real-world situations, you are likely to encounter messages with badly mismatched headers and bodies. Single part messages might claim to be multipart, and vice versa. Moreover, the MIME type claimed by headers is only a loose indication of what payloads actually contain. Part of the mismatch comes from spammers and virus writers trying to exploit the poor standards compliance and lax security of Microsoft applications?a malicious payload can pose as an innocuous type, and Windows will typically launch apps based on filenames instead of MIME types. But other problems arise not out of malice, but simply out of application and transport errors. Depending on the source of your processed messages, you might want to be lenient about the allowable structure and headers of messages. SEE ALSO: UserDict 24; UserList 28; Construct a message object. The class accepts no initialization arguments. Add a header to the message headers. The header field is field, and its value is value.The effect is the same as keyed assignment to the object, but you may optionally include parameters using Python keyword arguments. >>> import email.Message >>> msg = email.Message.Message() >>> msg['Subject'] = "Report attachment" >>> msg.add_header('Content-Disposition','attachment', ... filename='report17.txt') >>> print msg From nobody Mon Nov 11 15:11:43 2002 Subject: Report attachment Content-Disposition: attachment; filename="report17.txt" Serialize the message to an RFC-2822-compliant text string. If the unixfrom argument is specified with a true value, include the BSD mailbox "From_" envelope header. Serialization with str() or print includes the "From_" envelope header. Add a payload to a message. The argument mess must specify an email.Message.Message object. After this call, the payload of the message will be a list of message objects (perhaps of length one, if this is the first object added). Even though calling this method causes the method .is_multipart () to return a true value, you still need to separately set a correct multipart/* content type for the message to serialize the object. >>> mess = email.Message.Message() >>> mess.is_multipart() 0 >>> mess.attach(email.Message.Message()) >>> mess. is_multipart () 1 >>> mess.get_payload() [<email.Message.Message instance at 0x3b2ab0>] >>> mess.get_content_type() 'text/plain' >>> mess.set_type('multipart/mixed') >>> mess.get_content_type() 'multipart/mixed' If you wish to create a single part payload for a message object, use the method email.Message.Message.set-payload(). SEE ALSO: email.Message.Message.set_payload() 362; Remove the parameter param from a header. If the parameter does not exist, no action is taken, but also no exception is raised. Usually you are interested in the Content-Type header, but you may specify a different header argument to work with another one. The argument requote controls whether the parameter value is quoted (a good idea that does no harm). >>> mess = email.Message.Message() >>> mess.set_type('text/plain') >>> mess.set_param('charset','us-ascii') >>> print mess From nobody Mon Nov 11 16:12:38 2002 MIME-Version: 1.0 Content-Type: text/plain;>> mess.del_param('charset') >>> print mess From nobody Mon Nov 11 16:13:11 2002 MIME-Version: 1.0 content-type: text/plain Message bodies that contain MIME content delimiters can also have text that falls outside the area between the first and final delimiter. Any text at the very end of the body is stored in email.Message.Message.epilogue. SEE ALSO: email.Message.Message.preamble 361; Return a list of all the headers with the field name field. If no matches exist, return the value specified in argument failobj. In most cases, header fields occur just once (or not at all), but a few fields such as Received typically occur multiple times. The default nonmatch return value of None is probably not the most useful choice. Returning an empty list will let you use this method in both if tests and iteration context: >>> for rcv in mess.get_all('Received',[]): ... print rcv ... About that time A little earlier >>> if mess.get_all('Foo',[]): ... print "Has Foo header(s)" Return the MIME message boundary delimiter for the message. Return failobj if no boundary is defined; this should always be the case if the message is not multipart. Return a list of string descriptions of contained character sets. Return a string description of the message character set. For message mess, equivalent to mess.get_content_type().split ("/") [0]. For message mess, equivalent to mess.get_content_type().split ("/") [1]. Return the MIME content type of the message object. The return string is normalized to lowercase and contains both the type and subtype, separated by a /. >>> msg_photo.get_content_type() 'image/png' >>> msg_combo.get_content_type() 'multipart/mixed' >>> msg_simple.get_content_type() 'text/plain' Return the current default type of the message. The default type will be used in decoding payloads that are not accompanied by an explicit Content-Type header. Return the filename parameter of the Content-Disposition header. If no such parameter exists (perhaps because no such header exists), failobj is returned instead. Return the parameter param of the header header. By default, use the Content-Type header. If the parameter does not exist, return failobj. If the argument unquote is specified as a true value, the quote marks are removed from the parameter. >>> print mess.get_param('charset',unquote=l) us-ascii >>> print mess.get_param('charset',unquote=0) "us-ascii" SEE ALSO: email.Message.Message.set_param() 362; Return all the parameters of the header header. By default, examine the Content-Type header. If the header does not exist, return failobj instead. The return value consists of a list of key/val pairs. The argument unquote removes extra quotes from values. >>> print mess.get_params(header="To") [('<[email protected]>', '')] >>> print mess.get_params(unquote=0) [('text/plain', ''), ('charset', '"us-ascii"')] Return the message payload. If the message method is_multipart() returns true, this method returns a list of component message objects. Otherwise, this method returns a string with the message body. Note that if the message object was created using email.Parser.HeaderParser, then the body is treated as single part, even if it contains MIME delimiters. Assuming that the message is multipart, you may specify the i argument to retrieve only the indexed component. Specifying the i argument is equivalent to indexing on the returned list without specifying i. If decode is specified as a true value, and the payload is single part, the returned payload is decoded (i.e., from quoted printable or base64). I find that dealing with a payload that may be either a list or a text is somewhat awkward. Frequently, you would like to simply loop over all the parts of a message body, whether or not MIME multiparts are contained in it. A wrapper function can provide uniformity: #!/usr/bin/env python "Write payload list to separate files" import email, sys def get_payload_list(msg, decode=l): payload = msg.get_payload(decode=decode) if type(payload) in [type(""), type(u"")]: return [payload] else: return payload mess = email.message_from_file(sys.stdin) for part,num in zip(get_payload_list(mess),range(1000)): file = open('%s.%d' % (sys.argv[1], num), 'w') print >> file, part SEE ALSO: email.Parser 363; email.Message.Message.is_multipart() 361; email.Message.Message.walk() 362; Return the BSD mailbox "From_" envelope header, or None if none exists. SEE ALSO: mailbox 372; Return a true value if the message is multipart. Notice that the criterion for being multipart is having multiple message objects in the payload; the Content-Type header is not guaranteed to be multipart/* when this method returns a true value (but if all is well, it should be). SEE ALSO: email.Message.Message.get_payload() 360; Message bodies that contain MIME content delimiters can also have text that falls outside the area between the first and final delimiter. Any text at the very beginning of the body is stored in email.Message.Message.preamble. SEE ALSO: email.Message.Message.epilogue 358; Replaces the first occurrence of the header with the name field with the value value. If no matching header is found, raise KeyError. Set the boundary parameter of the Content-Type header to s. If the message does not have a Content-Type header, raise HeaderParserError. There is generally no reason to create a boundary manually, since the email module creates good unique boundaries on it own for multipart messages. Set the current default type of the message to ctype. The default type will be used in decoding payloads that are not accompanied by an explicit Content-Type header. Set the parameter param of the header header to the value value. If the argument requote is specified as a true value, the parameter is quoted. The arguments charset and language may be used to encode the parameter according to RFC-2231. Set the message payload to a string or to a list of message objects. This method overwrites any existing payload the message has. For messages with single part content, you must use this method to configure the message body (or use a convenience message subclass to construct the message in the first place). SEE ALSO: email.Message.Message.attach() 357; email.MIMEText.MIMEText 348; email.MIMEImage.MIMEImage 348; email.MIMEAudio.MIMEAudio 347; Set the content type of the message to ctype, leaving any parameters to the header as is. If the argument requote is specified as a true value, the parameter is quoted. You may also specify an alternative header to write the content type to, but for the life of me, I cannot think of any reason you would want to. Set the BSD mailbox envelope header. The argument s should include the word From and a space, usually followed by a name and a date. SEE ALSO: mailbox 372; Recursively traverse all message parts and subparts of the message. The returned iterator will yield each nested message object in depth-first order. >>> for part in mess.walk(): ... print part.get_content_type() multipart/mixed text/html audio/midi SEE ALSO: email.Message.Message.get_payload() 360; There are two parsers provided by the email.Parser module: email.Parser.Parser and its child email.Parser.HeaderParser. For general usage, the former is preferred, but the latter allows you to treat the body of an RFC-2822 message as an unparsed block. Skipping the parsing of message bodies can be much faster and is also more tolerant of improperly formatted message bodies (something one sees frequently, albeit mostly in spam messages that lack any content value as well). The parsing methods of both classes accept an optional headersonly argument. Specifying headersonly has a stronger effect than using the email.Parser.HeaderParser class. If headersonly is specified in the parsing methods of either class, the message body is skipped altogether?the message object created has an entirely empty body. On the other hand, if email.Parser.HeaderParser is used as the parser class, but headersonly is specified as false (the default), the body is always read as a single part text, even if its content type is multipart/*. Construct a parser instance that uses the class _class as the message object constructor. There is normally no reason to specify a different message object type. Specifying strict parsing with the strict option will cause exceptions to be raised for messages that fail to conform fully to the RFC-2822 specification. In practice, "lax" parsing is much more useful. Construct a parser instance that is the same as an instance of email.Parser.Parser except that multipart messages are parsed as if they were single part. Return a message object based on the message text found in the file-like object file. If the optional argument headersonly is given a true value, the body of the message is discarded. Return a message object based on the message text found in the string s. If the optional argument headersonly is given a true value, the body of the message is discarded. The module email.Utils contains a variety of convenience functions, mostly for working with special header fields. Return a decoded string for RFC-2231 encoded string s: >>> Omega = unicodedata.lookup("GREEK CAPITAL LETTER OMEGA") >>> print email.Utils.encode_rfc2231(Omega+'[email protected]') %3A9-man%40gnosis.cx >>> email.Utils.decode_rfc2231("utf-8"%3A9-man%40gnosis.cx") ('utf-8', '', ':[email protected]') Return an RFC-2231-encoded string from the string s. A charset and language may optionally be specified. Return a formatted address from pair (realname,addr): Return an RFC-2822-formatted date based on a time value as returned by time.localtime(). If the argument localtime is specified with a true value, use the local timezone rather than UTC. With no options, use the current time. Return a list of pairs (realname,addr) based on the list of compound addresses in argument addresses. >>> addrs = ['"Joe" <[email protected]>','Jane <[email protected]>'] >>> email.Utils.getaddresses(addrs) [('Joe', '[email protected]'), ('Jane', '[email protected]')] Return a unique string suitable for a Message-ID header. If the argument seed is given, incorporate that string into the returned value; typically a seed is the sender's domain name or other identifying information. Return a timestamp based on an email.Utils.parsedate_tz() style tuple. >>> email.Utils.mktime_tz((2001, 1, 11, 14, 49, 2, 0, 0, 0, 0)) 979224542.0 Parse a compound address into the pair (realname,addr). Return a date tuple based on an RFC-2822 date string. >>> email.Utils.parsedate('11 Jan 2001 14:49:02 -0000') (2001, 1, 11, 14, 49, 2, 0, 0, 0) SEE ALSO: time 86; Return a date tuple based on an RFC-2822 date string. Same as email.Utils.parsedate(), but adds a tenth tuple field for offset from UTC (or None if not determinable). Return a string with backslashes and double quotes escaped. >>> print email.Utils.quote(r'"MyPath" is d:\this\that') \"MYPath\" is d:\\this\\that Return a string with surrounding double quotes or angle brackets removed. >>> print email.Utils.unquote('<[email protected]>') [email protected] >>> print email.Utils.unquote('"us-ascii"') us-ascii The module imaplib supports implementing custom IMAP clients. This protocol is detailed in RFC-1730 and RFC-2060. As with the discussion of other protocol libraries, this documentation aims only to cover the basics of communicating with an IMAP server?many methods and functions are omitted here. In particular, of interest here is merely being able to retrieve messages?creating new mailboxes and messages is outside the scope of this book. The Python Library Reference describes the POP3 protocol as obsolescent and recommends the use of IMAP4 if your server supports it. While this advice is not incorrect technically?IMAP indeed has some advantages?in my experience, support for POP3 is far more widespread among both clients and servers than is support for IMAP4. Obviously, your specific requirements will dictate the choice of an appropriate support library. Aside from using a more efficient transmission strategy (POP3 is line-by-line, IMAP4 sends whole messages), IMAP4 maintains multiple mailboxes on a server and also automates filtering messages by criteria. A typical (simple) IMAP4 client application might look like the one below. To illustrate a few methods, this application will print all the promising subject lines, after deleting any that look like spam. The example does not itself retrieve regular messages, only their headers. #!/usr/bin/env python import imaplib, sys if len(sys.argv) == 4: sys.argv.append('INBOX') (host, user, passwd, mbox) = sys.argv[1:] i = imaplib.IMAP4(host, port=143) i.login(user, passwd) resp = i.select(mbox) if r[0] <> 'OK': sys.stderr.write("Could not select %s\n" % mbox) sys.exit() # delete some spam messages typ, spamlist = i.search(None, '(SUBJECT) "URGENT"') i.store(','.join(spamlist.split()),'+FLAGS.SILENT','\deleted') i.expunge() typ, messnums = i.search(None,'ALL').split() for mess in messnums: typ, header = i.fetch(mess, 'RFC822.HEADER') for line in header[0].split('\n'): if string.upper(line[:9]) == 'SUBJECT: ': print line[9:] i.close() i.logout() There is a bit more work to this than in the POP3 example, but you can also see some additional capabilities. Unfortunately, much of the use of the imaplib module depends on passing strings with flags and commands, none of which are well-documented in the Python Library Reference or in the source to the module. A separate text on the IMAP protocol is probably necessary for complex client development. Create an IMAP instance object to manage a host connection. Close the currently selected mailbox, and delete any messages marked for deletion. The method imaplib.IMAP4.logout() is used to actually disconnect from the server. Permanently delete any messages marked for deletion in the currently selected mailbox. Return a pair (typ,datalist). The first field typ is either OK or NO, indicating the status. The second field datalist is a list of returned strings from the fetch request. The argument message_set is a comma-separated list of message numbers to retrieve. The message_parts describe the components of the messages retrieved?header, body, date, and so on. Return a (typ,datalist) tuple of all the mailboxes in directory dirname that match the glob-style pattern pattern. datalist contains a list of string names of mailb
http://etutorials.org/Programming/Python.+Text+processing/Chapter+5.+Internet+Tools+and+Techniques/5.1+Working+with+Email+and+Newsgroups/
CC-MAIN-2017-04
refinedweb
6,423
50.33
Quickstart: Adding WinJS controls and styles (HTML) The Windows Library for JavaScript (WinJS) provides high quality infrastructure such as page controls, promises, and data-binding; polished UI features like virtualizing collections; and high performance Windows controls such as ListView, FlipView, and SemanticZoom. You can use Windows Library for JavaScript in your Windows Runtime apps, in your websites, and when using HTML-based app technologies like Apache Cordova. See this feature in action as part of our App features, start to finish series: Windows Store app UI, start to finish. Prerequisites We assume that you can create a basic Windows app using JavaScript that uses the WinJS template. For help creating your first app, see Create your first Windows Store app using JavaScript. - We assume that you know how to handle events in a Windows app using JavaScript. To learn the recommended way to handle events, see Quickstart: adding HTML controls and handling events. What is the Windows Library for JavaScript? The WinJS is a library of CSS and JavaScript files. It contains JavaScript objects, organized into namespaces, designed to make easier to develop great-looking apps. WinJS includes objects that help you handle activation, access storage, and define your own classes and namespaces. For the complete list of controls that WinJS provides, see the Controls list. WinJS also provides styling features in the form of CSS styles and classes that you can use or override. (Control styling is described in Quickstart: styling controls.) Adding the Windows Library for JavaScript to your page To use the latest version of WinJS in your app or website: -. Adding a WinJS control in markup Unlike HTML controls, WinJS controls don't have dedicated markup elements: you can't create a Rating control by adding a <rating /> element, for example. To add a WinJS WinJS controls it finds. The next set of examples show you how to add a WinJS control to a project created with the Blank Application template. It's easier to follow along if you create a new Blank Application project. To create a new project using the Blank Application template Launch Microsoft Visual Studio.="/WinJS/css/ui-dark.css" rel="stylesheet"> <script src="/WinJS/js/WinJS. The WinJS.UI.processAll function processes the document and activates any WinJS controls that you've declared in markup. When you run the app, the Rating control appears where you positioned the div host element. : Unlike HTML controls, WinJS controls don't have dedicated element or attribute tags; for example, you couldn't create a Rating control and set its properties using this markup: Instead, you use the data-win-options attribute to set a property in markup. It takes a string that contains one or more property/value pairs: This example sets the maxRating of a Rating control to 10. When you run the app, the Rating control looks like this: To set more than one property, separate them with a comma: The next example sets two properties of the Rating control.: To find out if a property is supported by a given WinJS control, see its reference page. Retrieving a control that you created in markup You can also set the properties of a WinJS control programmatically. To access the control in code, retrieve the host element and then use its winControl property to retrieve the control. In the previous examples, the host element of the Rating control is "ratingControlHost".. The next section describes how to add event listeners to a WinJS control. Handling events for WinJS controls Just like with HTML controls, the preferred way to attach event listeners for a WinJS control is to use the addEventListener function. Retrieving a WinJS. - In your JavaScript, create an event handler function called ratingChanged that takes one parameter. This next example creates an event handler that displays the properties and values contained by the event object. -. Adding a WinJS control in code The previous examples showed you how to create and manipulate a WinJS control that you created in your markup, but you can also create a WinJS control using JavaScript code instead. To create a WinJS control in code - In your markup, create the element that will host your control. In your code (preferably in your DOMContentLoaded event handler), retrieve the host element. - Create your control by calling its constructor and passing the host element to the constructor. This example creates a Rating control: When you run the program, it displays the Rating you created: There's no need to call WinJS.UI.processAll—you only need to call WinJS.UI.processAll when you create a WinJS control in markup. Summary and next steps You learned how to create WinJS controls, how to set their properties, and how to attach event handlers. The next topic, Quickstart: styling controls, describes how to use Cascading Style Sheets (CSS) and the enhanced styling capabilities of Windows apps using JavaScript. To learn more about specific controls, see the Controls list and Controls by function topics. Samples For live code examples of nearly every WinJS control and an online editor, see try.buildwinjs.com. Related topics - Get WinJS - Controls list - Controls by function - API Reference for Windows Runtime and Windows Library for JavaScript
https://msdn.microsoft.com/en-us/library/windows/apps/hh465493.aspx
CC-MAIN-2018-26
refinedweb
865
53.51
I think I still have a pitchfork in my closet from that one. Personally, I think that we should keep the existing name spaces.. And for those who don't want to use particular elements from those namespaces (for example, with very few exceptions, my mobile flex apps don't use, nor declare the mx namespace), don't have to. Right now, we have mx (going away, eventually), fx (script tags) and s (spark). I don't even think adding an "a" namespace to denote new components from Apache really makes sense at this point. The reason why mx -> s was important was that the architectural really changed, plus the planned obsolescence of the old components. I don't think there is much on our plate right now that constitutes a similar need (aside some of the architectural changes that Alex and Mike have been teasing us with). -Nick On Sat, Mar 17, 2012 at 12:28 AM, Alex Harui <[email protected]> wrote: > > > > On 3/16/12 9:08 PM, "Omar Gonzalez" <[email protected]> wrote: > > > > > > I think s:List and mx:List is fine. I think changing the classes to > MxList > > or sList or SList is pretty ugly. > In Flex 4, the early pre-releases added Fx to everything (FxList, > FxTextInput) to avoid requiring folks learn about namespaces. The > pre-release users strongly put down the Fx prefix and we used namespaces > instead. I think it was the right decision. > > -- > Alex Harui > Flex SDK Team > Adobe Systems, Inc. > > >
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201203.mbox/%3CCALorpXeUN5h32tCj2FSLPHS9gm_2LAsXZNcMGAoCw-PXLcEg6g@mail.gmail.com%3E
CC-MAIN-2014-15
refinedweb
251
72.66
table of contents NAME¶ fpclassify, isfinite, isnormal, isnan, isinf - floating-point classification macros SYNOPSIS¶ #include <math.h> int fpclassify(x); int isfinite(x); int isnormal(x); int isnan(x); int isinf(x); Link with -lm. fpclassify(), isfinite(), isnormal(): || _XOPEN_SOURCE || /* Since glibc 2.19: */ _DEFAULT_SOURCE || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE || /* Since glibc 2.19: */ _DEFAULT_SOURCE || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE DESCRIPTION¶¶ For an explanation of the terms used in this section, see attributes(7). CONFORMING TO¶ POSIX.1-2001, POSIX.1-2008, C99. For isinf(), the standards merely say that the return value is nonzero if and only if the argument has an infinite value. NOTES¶ In glibc 2.01 and earlier, isinf() returns a nonzero value (actually: 1) if x is positive infinity or negative infinity. (This is all that C99 requires.) SEE ALSO¶ finite(3), INFINITY(3), isgreater(3), signbit(3) COLOPHON¶ This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at
https://manpages.debian.org/unstable/manpages-dev/isinf.3.en.html
CC-MAIN-2022-21
refinedweb
179
52.46
Blog Is CheapGeeky ramblings by Kyle Davis Server2004-12-21T13:32:00ZWrite code at night? Bah.<p><font face="Arial">In my </font><A href=""><font face="Arial">last post</font></a><font face="Arial">,, </font><a href=""><font face="Arial">despite the language</font></a>.</p><img src="" width="1" height="1">daviskyle Alerter, Part 2<font face="Arial">It has been a long time since I <A href="">first talked about</a> the RSS Alerter I want to build for my technology-challenged family. In that time, virtually nothing has happened on it, other than a little bit of up front research. That's partly because I spent some time reading about how to start a <a href="">company</a>, and partly because I've been learning PHP for the framework of my <a href="">latest project</a> on a Unix-based server.<br /><br />But, I just landed a contract that will have me out of town (in Oklahoma City) during the week for a while, and without my Linux box (which is a desktop), I won't have much better to do (other than <a href="">playing poker</a>) so I might actually get to work on it!<br /><br /></font><img src="" width="1" height="1">daviskyle, Thanks, Verizon!<p><font face="Arial">I have 7 domains from which I can receive email, not including my <a href="">Gmail</a> and MSN accounts. All mail for my domains funnel down to a few POP3 accounts, which I check with <a href="">Small Business Server</a>. <a href="">too busy lately</a> to really hunt down the problem.</font></p> <p><font face="Arial">Well, today, I set aside some time to investigate, and found </font><a href=""><font face="Arial">this</font></a><font face="Arial">. It turns out Verizon decided to implement a breaking change on their SMTP servers that means I can't use SBS to send email anymore. Thanks, guys! I'm glad I'm giving you money!</font></p> <p><font face="Arial">As I </font><a href=""><font face="Arial">mentioned before</font></a><font face="Arial">, I hate Comcast, which is what caused me to switch to Verizon in the first place. Now I hate Verizon too. (Yes, I'm quick to hate companies with poor customer service. Sue me.) </font><font face="Arial". </font><font face="Arial">Anyone have any recommendations for a company that services Dallas?</font></p> <p><font face="Arial">(BTW - I have SMTP outbound again. I signed up for a $15/year </font><a href=""><font face="Arial">service from DynDns.org</font></a><font face="Arial">)</font></p><img src="" width="1" height="1">daviskyle Spam<p><font face="Arial">Comment spam sucks. I'm getting deluged with email to approve comments from these slimeballs. </font><a href=""><font face="Arial">Google says</font></a><font face="Arial"> they have the solution, but I have to agree with </font><A href=""><font face="Arial">Robert McLaws</font></a><font face="Arial">. We're not going to stop the spam with a rel attribute (though, I'm glad Scott Watermasysk has a .Text </font><a href=""><font face="Arial">quick fix</font></a><font face="Arial">). The only way we're going to stop these bastards is with </font><a href=""><font face="Arial">CAPTCHA</font></a><font face="Arial">. I hope that's hign on the </font><a href=""><font face="Arial">Telligent</font></a><font face="Arial"> priority list.</font></p> <p> </p><img src="" width="1" height="1">daviskyle Keyboard with Fingerprint Reader<p><font face="Arial">One of my keyboards started to get a little flaky, so I needed a new one. I remembered a post on Scott Dockendorf's blog about </font><A href=""><font face="Arial">Microsoft's Thumbprint Reader Technology</font></a><font face="Arial">, and I found a good price at </font><a href=""><font face="Arial">Amazon</font></a><font face="Arial">, so I decided to go for it.</font></p> <p><font face="Arial" </font><a href=""><font face="Arial">Optical Trackball</font></a><font face="Arial"> (thumb operated). The fingerprint reader, however, combined with the software from </font><a href=""><font face="Arial">Digital Persona</font></a><font face="Arial"> is very cool.</font></p> <p><font face="Arial">When you visit a website or windows application with a login form (technically, any form that you fill out the same way each time [<em>correction: this turns out not to be the case - if it doesn't look like a login form, the software will not fill it in</em>]).)</font></p> <p><font face="Arial" </font><a href=""><font face="Arial">fingerprint reader</font></a><font face="Arial">. That gets you the benefits of the reader and DP software, without the expense of the whole keyboard.</font></p><img src="" width="1" height="1">daviskyle Disk Stakka<p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">MSDN is offering a special deal on the </font><a href=""><font face="Arial">Imation Disk Stakka</font></a><font face="Arial"> (the link doesn’t appear to be protected, so I think even non-MSDN subscribers can get the special price).<span style="mso-spacerun: yes"> </span>Since I like technology toys, I thought I’d pick one" <?xml:namespace prefix = st1<st1:place w:<st1:PlaceName w:Digital</st1:PlaceName> <st1:PlaceType w:River</st1:PlaceType></st1:place>, so there isn’t a real connection between their helpdesk and their retail operation. When my first unit was DOA, I was told I needed to return it and re-order because they didn’t have a mechanism for replacement.<span style="mso-spacerun: yes"> </span>I did this – meaning I have paid for two units while waiting for them to refund my money for the first unit – and the 2<sup>nd</sup> unit I received was defective as well!<span style="mso-spacerun: yes"> </span.<span style="mso-spacerun: yes"> </span>This finally resulted in me getting a working unit.<span style="mso-spacerun: yes"> </span>So, if you read this review and decide you want one, just beware of the possible DOAs.</font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">The Stakka consists of two parts – one hardware, one software.<span style="mso-spacerun: yes"> </span>The hardware is an enclosed carousel with a slot for inserting/removing CDs or DVDs (full-size only, no shapes).<span style="mso-spacerun: yes"> </span>The software is an explorer-integrated inventory system.<span style="mso-spacerun: yes"> </span>When you insert a disc into the Stakka, the software asks what the disc is, and what category it belongs to. <span style="mso-spacerun: yes"> </span>You can then use the explorer portion to browse the discs that are in the Stakka and select one to be ejected. Viola, it spits it out of the slot on the Stakka.<span style="mso-spacerun: yes"> </span>It keeps track of all discs that have been ejected, and defaults to one of those when you insert another disc.</font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">You can stack up to 5 units on top of each other, controlled off a single USB cable. Theoretically, with a powered USB hub, you can control over 100 Stakkas from a single PC. </font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">It’s a little bulky, and takes up a little too much space on my desk for my liking, so it will probably be relocated into the closet with my servers.<span style="mso-spacerun: yes"> </span.<span style="mso-spacerun: yes"> </span>At $99, it’s probably a frivolous toy, but if I build 400 more machines, it will have paid for itself. <grin> </font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p><img src="" width="1" height="1">daviskyle Remote 688<p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">I recently bought myself the </font><font face="Arial"><a href="">Harmony 688</a> universal remote by Logitech. <span style="mso-spacerun: yes"> </span>I was looking for something that would control all the gear in my entertainment center, that’s acceptable to my wife. I already have the an older <a href="">Philips Pronto Pro</a>, but that takes a lot of work to program the way you want it, and my wife isn’t crazy about">What a great remote! There are three things about this remote (and the whole <a href="">Harmony line</a>) that set it apart from others that I have used. First, it has what they call “Smart State Technology”, that just means it remembers what is already on, what needs to be turned on, and what needs to be turned off.<span style="mso-spacerun: yes"> </span>Second, it is programmed through the Harmony website, rather than using an obscure sequence of key codes to represent devices in your rack and desired behavior. This means they have an enormous database of devices and codes, continually being updated as new devices are released.<span style="mso-spacerun: yes"> </span>And, third, there are “activity” buttons that do everything you need them to do. For instance, there is a button labeled “watch tv” that will turn on your TV, receiver, satellite/cable box, etc, and set them all to the correct inputs.<span style="mso-spacerun: yes"> </span>Another button labeled “watch DVD” will turn on all the appropriate devices for that activity.<span style="mso-spacerun: yes"> </span>If you switch from one activity to another, the remote knows what devices are on, and what inputs they are set to, so it only sends the IR codes for the difference in state.<span style="mso-spacerun: yes"> </span></font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial"?” <span style="mso-spacerun: yes"> </span>Answer this with “no” and the TV will be turned on, returning you to the proper state.<span style="mso-spacerun: yes"> </span>Great stuff.</font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><font face="Arial">I bought one for my mother-in-law for Christmas, and she loves it already. If there’s someone in your family that struggles with all those remotes on your coffee table, it’s well worth the $200. </font></p> <p class="MsoNormal" style="MARGIN: 0in 0in 0pt"><o:p><font face="Arial"> </font></o:p></p><img src="" width="1" height="1">daviskyle Alerter, part 1<p><font face="Arial">My family decided for me what my next project will be - they just don't know it. I currently maintain 3 blogs: This one, for programming- and geek-related topics, </font><a href=""><font face="Arial">KyleBits.com</font></a><font face="Arial">!</font></p> <p><font face="Arial">RSS is a wonderful thing for bringing news and other info to your door, <em>if</em>.</font></p> <p><font face="Arial">I did a quick search and the stuff I could find wasn't quite what I was looking for ... and it's more fun to write it. So, it's time to start building...</font></p><img src="" width="1" height="1">daviskyle Holidays<p><font face="Arial">Or, as we used to say, once upon a time, "Merry Christmas and Happy New Year".</font></p> <p><font face="Arial">I'll be spending the holidays puttering around the house, and alternating between two books: </font><a href=""><font face="Arial">Joel on Software</font></a><font face="Arial"> and </font><a href=""><font face="Arial">The Winner's Guide to Omaha Poker</font></a><font face="Arial">. When I tire of that, I'll rebuild my domain controller. I've been procrastinating, because it's just a laborious task. Sigh.</font></p> <p><font face="Arial">I'm actually considering turning my </font><a href=""><font face="Arial">Linux</font></a><font face="Arial"> box into a DC with Samba, and rebuilding my current DC as just a server. It'll certainly be easier that way when I need to scrub the server and rebuild next time. And it'll make </font><a href=""><font face="Arial">Paco</font></a><font face="Arial"> proud. :-)</font></p><img src="" width="1" height="1">daviskyle The Air<p><font face="Arial">Hi there. Welcome. Come on in.</font></p> <p><font face="Arial".</font></p> <p><font face="Arial">So, what has changed? Nothing, really. Just my commitment. I'm tired of sitting on the sidelines watching everyone else get to have fun with .NET, so I'm leaping back into the fray. It's about time.</font></p><img src="" width="1" height="1">daviskyle
http://weblogs.asp.net/daviskyle/atom.aspx
crawl-002
refinedweb
2,245
53.1
SQL Anywhere supports two versions of jConnect: jConnect 5.5 and jConnect 6.0.5. The jConnect driver is available as a separate download from. Documentation for jConnect can also be found on the same page. If you want to use JDBC from an applet, you must use the jConnect JDBC driver to connect to SQL Anywhere databases. SQL Anywhere supports the following versions of jConnect: jConnect 5.5 This version of jConnect is for developing JDK 1.2 applications. jConnect 5.5 is JDBC 2.0 compliant. jConnect 5.5 is supplied as a JAR file named jconn2.jar. jConnect 6.0.5 This version of jConnect is for developing JDK 1.3 or later applications. jConnect 6.0.5 is JDBC 3.0 compliant. jConnect 6.0.5 is supplied as a JAR file named jconn3.jar. SQL Anywhere installation directory. set classpath=%classpath%;path\jConnect-6_0\classes\jconn3.jar The classes in jConnect are all in the com.sybase package. If you are using jConnect 6.0.5, the classes are in com.sybase.jdbc3.jdbc. You must import these classes at the beginning of each source file: import com.sybase.jdbc3.jdbc.* Installing jConnect system objects into a database Supplying a URL to the driver
http://dcx.sap.com/1001/en/dbpgen10/pg-jconnect-using-jdbxextra.html
CC-MAIN-2018-26
refinedweb
209
53.68
There are two novel elements about this month's installment of Generic<Programming>. One is the subject we will talk about implementing the standard library component basic_string (better known as string, which is a convenience typedef for basic_string<char>), an important element of the C++ library. But the truly interesting thing is that the code available for download is especially crafted to work with Visual C++ 6.0, a compiler known for two contradictory things its ubiquity and its weak support for generic programming. The code accompanying this article implements not one, not two, but twelve basic_strings featuring various trade-offs. They are not toys. We're talking about full-fledged, Standard-compliant, industrial-strength stuff here (er, modulo bugs, of course). You think that that's going to take an awful lot of code? Think twice. Believe me, this article is going to be a lot of fun. One Size Does Not Fit All First off, why would anyone bother implementing basic_string at all? It's already implemented by your standard library, so coming up with "yet another" basic_string implementation seems to have educational value only. Yet, many of those who have been using strings in multithreaded applications know about a difficult problem. The Standard tries to allow copy-on-write implementations of basic_string. (Copy-on-write is fondly called COW by its fans, and "the mad cow" by its opponents.) COW-based strings, which use reference counting internally, are either unusable in a multithreaded application or, if the library implementer supports multithreading, unacceptably slow even in the single-threaded parts of your application. Pick one. Further problems with COW strings might arise in applications that use dynamic loading of libraries when you free a library, there is a risk that your application might still hold shallow copies of strings allocated in the memory space of that library. Extensive discussions about the trouble with COW strings [1] [2] have convinced many STL implementers to ditch COW and use alternate optimization strategies for their basic_string implementations. Yet, "most" is not "all," so when programming with threads and basic_string, you must use a non-COW implementation and consequently give up portability of your code across STL implementations. Furthermore, COW does have its advantages and is quite useful in a large category of applications. Wouldn't it be nice, then, if you could just choose what optimizations to use for a certain string in a certain application? "Here I want a non-COW string featuring the small string optimization for strings up to 16 characters" or "here I'd like to take advantage of COW, and I'd like to use my own heap for allocation." How to Implement basic_string in 200 Lines of Code In spite of the usefulness of having multiple string implementations available, building even only one such implementation is a daunting task. Writing all the member functions and type definitions on top of your implementation of choice is certainly not an easy task. I know because I did implement the basic_string interface for an application that needed to use the COM string allocator and multiple threads. While I was carefully writing all the utility functions as prescribed by the Standard, I noticed an interesting fact. Most member functions seem to gravitate around a small kernel of functions and types in other words, you can decompose the basic_string interface in "core" and "utility." The utility part is the same no matter what implementation strategy you use, while the core part varies drastically among implementations. For example, the replace family consists of utility functions implemented in terms of the core function resize. And here's the rub: the utility part of basic_string is also the bulkiest one (in my implementation it has over 700 lines of code). In contrast, writing a core implementation, even a sophisticated one, is a much easier task my implementations vary between 75 and 250 lines of code. This means that you can create new implementations easily by fitting different core incarnations under the utility interface. And you don't have to implement the bulk but once. (Actually, not at all, because you can download this column's code, which is eager to be of use.) Really, you are 200 lines of code away from your dream basic_string implementation! A Policy-Based String Those of you who have read my book [3] (ah, don't you love marketing plugs) know what our nascent design screams for: policies! Of course, when you want to vary a specific aspect of a class' implementation and want to let the user choose what implementation of that aspect to use, you migrate that aspect into a template parameter and define an interface for it. It's not rocket science, but it is remarkably effective. The standard basic_string declaration looks like this: namespace std { template <class E, class T = char_traits<E>, class A = allocator<E> > class basic_string; } E is the character type of the string (most often, either char or wchar_t), T controls how strings are compared and copied, and A is the allocator that we all know, love, and never use. We will add a fourth template argument that controls the exact implementation of the string. Because it deals with exactly how the string is stored, let's call it the Storage policy. We call our new string flex_string, because, as you will soon see, it is quite flexible: template <class E, class T = char_traits<E>, class A = allocator<E> class Storage = AllocatorStringStorage<E, A> > class flex_string; Storage defaults to AllocatorStringStorage<E, A>, which is a straightforward storage implementation that uses eager copy (sort of an antithesis of a COW). In its implementation, flex_string holds a Storage object and uses its types and member functions. How exactly you choose the interface of Storage might vary a little. In essence, after fiddling with my basic_string implementation, I found a set of functions without which I could not possibly provide an implementation, and the functions weren't redundant, either. Here's a semi-formal specification of the conditions that a Storage policy implementation must satisfy: template <typename E, <i>other arguments</i>> class StorageImpl { public: typedef <i>some_type</i> size_type; typedef <i>some_type</i> iterator; typedef <i>some_type</i> const_iterator; typedef <i>some_type</i> allocator_type; StorageImpl(const StorageImpl &); StorageImpl(const allocator_type&); StorageImpl(const E* s, size_type len, const allocator_type& a); StorageImpl(size_type len, E, const allocator_type&); iterator begin(); const_iterator begin() const; iterator end(); const_iterator end() const; size_type size() const; size_type max_size() const; size_type capacity() const; void resize(size_type, E); void reserve(size_type); void swap(StorageImpl&); const E* c_str() const; const E* data() const; allocator_type get_allocator() const; }; That's pretty much it. The specification is quite simple (and would have been even simpler without the allocator, which is a pain in the neck). The idea is that you can implement basic_string's entire interface in an efficient manner by ultimately leveraging Storage's small kernel of types and functions. The flex_string class holds a Storage object by value. I chose private inheritance for the sake of some minor conveniences. Hence, the flex_string in the code available for download looks like this: template <class E, class T = std::char_traits<E>, class A = std::allocator<E>, class Storage = AllocatorStringStorage<E, A> > class flex_string : private Storage { public: typedef typename Storage::iterator iterator; typedef typename Storage::const_iterator const_iterator; ... // 21.3.1 construct/copy/destroy explicit flex_string(const A& a = A()) : Storage(a) {} ... }; Implementing the Storage policy Ok, time to get our hands dirty. Let's churn some Storage implementations. An effective string implementation would hold a pointer to a buffer. In turn, the buffer holds the length and the capacity of the string, plus the string itself. To avoid allocating memory twice (once for the bookkeeping data and once for the data), you might want to use a trick known as "the struct hack": the buffer holds a C-style array of characters as its last element and grows dynamically to accommodate as many characters as needed. This is exactly what SimpleStringStorage does: template <class E, class A = std::allocator<E> > class SimpleStringStorage { struct Data { E* pEnd_; E* pEndOfMem_; E buffer_[1]; }; Data* pData_; public: size_type size() const { return pData_->pEnd_ - pData_->buffer_; } size_type capacity() const { return pData_->pEndOfMem_ - pData_->buffer_; } ... }; pEnd_ points to the end of the string, pEndOfMem_ points to the end of the allocated buffer, and buffer_ extends to as many characters as the string holds in other words, it "continues" beyond the end of Data's memory. To achieve this flexibility, pData_ does not exactly point to a Data object, but to a larger chunk of memory cast to a Data. This "struct hack" is in theory not 100 percent portable, but in practice, well, it just is. SimpleStringStorage features another nice little optimization all empty strings are shared and point to a static Data instance. An alternate implementation could initialize pData_ with zero for empty strings, but that would have propagated tests through many member functions. SimpleStringStorage is "simple" because it is aloof to the allocator passed in. SimpleStringStorage simply uses the standard free store (new/delete) for its memory needs. Using the passed-in allocator for allocating Data objects is harder than it might seem, due partly to the allocator's design (no support for objects of arbitrary size) and partly to compiler compatibility issues. You can find such a politically correct Storage policy implementation in the class template AllocatorStringStorage. Yet another possible implementation of a string storage is to simply use std::vector as a back-end. The implementation is a slam-dunk, and what you'll get is a lean, mean string that reuses a nicely tuned standard library facility. This also helps in minimizing object code size. You can look up that implementation in VectorStringStorage. All these three implementations use inheritance to use the EBO (Empty Base Optimization) [4] wherever possible. (Did I mention the "industrial-strength" buzzword?) Using EBO is very effective because most allocators are in fact empty classes. Exhilarated C++ Ok, so here we are, some 1,300 of lines of code later, already with three nifty basic_string implementations under our belt. That's 433 lines of code per implementation. Not too bad, especially when you think that you can add new implementations quite easily. If you think that was fun, the article has reached its goal so far. But don't forget that the opening paragraph mentions a lot of fun, which hopefully starts now. Let's drop in the SSO (small string optimization) [5]. The idea behind SSO is to store small strings right in the string object (not in dynamically-allocated storage). When the size becomes too big to fit inside the string, a dynamic allocation strategy is used. The two strategies share the memory inside string for bookkeeping data. The string class can differentiate between the two mechanisms through some sort of a tag: template <class E, other parameters> class sso_string { struct DynamicData { ... }; static const unsigned int maxSmallStringLen = 12; union { E[maxSmallStringLen] inlineBuffer_; DynamicData data_; }; bool isSmall_; ... }; If isSmall_ is true, the string is stored right in inlineBuffer_. Otherwise, data_ is valid. The problem is what kind of dynamic allocation strategy to use for DynamicData? An std::vector? A SimpleStringStorage? An AllocatorStringStorage? The answer, of course, is "any of the above and more, please." It's clear that using SSO is orthogonal on whatever alternate storage you use. Therefore, the SmallStringOpt class template has another storage as a template parameter: template <class E, unsigned int threshold, class Storage, typename Align = E*> class SmallStringOpt { enum { temp = threshold > sizeof(Storage) ? threshold : sizeof(Storage) }; public: enum { maxSmallString = temp > sizeof(Align) ? temp : sizeof(Align) }; private: union { E buf_[maxSmallString + 1]; Align align_; }; ...implement the Storage policy... }; The buf_ member variable stores either a Storage object or the string itself. But what's that Align business? Well, when dealing with such "seated allocation," you must be careful with alignment issues. Because there is no portable way of figuring out what alignment requirements Storage has, SmallStringOpt accepts a type that specifies the alignment and stores it in the dummy align_ variable. How does SmallStringOpt make the difference between small and large strings? The last element of buf_ (namely buf_[maxSmallString]) stores the difference between maxSmallString and the actual length of the string for small strings, and a magic number for long strings. For a string of size maxSmallString, buf_[maxSmallString] is zero, which very nicely serves as both null terminator and tag. You can see a number of tricks, casts, and low-level stuff in SmallStringOpt (we're talking about an optimization here, right?), but in the end the result is remarkable: we can combine SmallStringOpt with any other Storage implementation, including of course SimpleStringStorage, VectorStringStorage, and AllocatorStringStorage. So now we have six implementations of basic_string we multiplied our returns with an incremental effort. (By the way, lots of fun yet?) By now the code is 1,440 lines long, so we went down to 240 lines of code per basic_string implementation. If C++ programming were karate, leveraging multiplied returns on your code investment would be like fighting with multiple opponents at once. Here's an example the instantiation: typedef flex_string< char, std::char_traits<char>, std::allocator<char>, SmallStringOpt<char, 16, VectorStringStorage<char, std::allocator<char> > > > String; specifies a string that combines an std::vector-based storage with the small-string optimization for strings less than at least 16 characters. Back to COW Like it or not, you can't ignore COW too many people find the gentle animal useful. For their sake, let's implement a CowString class template that, again, is able to add COW to any other Storage. CowString looks like: template <class E, class Storage> class CowString { struct Data { Storage s_; unsigned int refs_; }; Data* pData_; public: ... }; Data holds whatever Storage you choose and a reference count. CowString itself contains only a pointer to Data. Multiple CowStrings might point to the same Data object. Whenever a potential change is detected, CowString makes a genuine duplicate of its data. Now let's take a look at this: typedef flex_string< char, std::char_traits<char>, std::allocator<char>, SmallStringOpt<char, 5, CowString<char, AllocatorStringStorage<char, std::allocator<char> > > > > String; What we have here is a string optimized to not use dynamic allocation for strings shorter than five characters. For longer strings, a COW strategy is used over an allocator-based implementation. CowString doubles again the number of potential instances of flex_string, so now we have twelve implementations at our disposal. Total code amounts to 1,860 lines, or 155 lines of code per implementation. There are actually twenty-four of them if you consider the order in which you apply SmallStringOpt and CowString. However, applying COW to small strings is not likely to be an effective design decision, so you'll always apply SmallStringOpt to CowString and not vice versa. Conclusion basic_string is a very baroque component. In spite of that, careful policy-based design can increase your productivity into the stratosphere. Using a handful of policy implementations, you can choose between straight, small-string optimized, and reference-counted basic_string implementation as easy as feeding arguments to a template class. Surgeon General's warning: You might allegedly have a lot of fun while doing all that. References [1] Herb Sutter. "Optimizations that Aren't (In a Multithreaded World)," C/C++ Users Journal, June 1999. [2] Kevlin Henney. "From Mechanism to Method: Distinctly Qualified," C/C++ Users Journal C++ Experts Forum, May 2001,. [3] Andrei Alexandrescu. Modern C++ Design (Addison-Wesley, 2001). [4] Andrei Alexandrescu. "Traits on Steroids," C++ Report, June 2000,. [5] Jack Reeves. "String in the Real World Part 2," C++ Report, January 1999,. About the Author ().
http://www.drdobbs.com/generic-a-policy-based-basicstring-imple/184403784
CC-MAIN-2017-17
refinedweb
2,611
52.9
Oh no, you think, yet another article about drivers. Are they crazy about drivers at Be, or what? Ouaire iz ze beauty in driverz? The truth is that I would have loved to write about another (hotter) topic, one that has kept me very busy for the past few months, but my boss said I couldn't (flame him at [email protected] ;-). I guess I'll have wait until it becomes public information. In the meantime, please be a good audience, and continue reading my article. Before I get on with the meat of the subject, I'd like to stress that the following information pertains to our next release, BeOS Release 4. Because R4 is still in the making, most of what you read here is subject to change in the details, or even in the big lines. Don't write code today based on the following. It is provided to you mostly as a hint of what R4 will contain, and where we're going after that. That's it. We finally realized that our driver API was not perfect, and that there was room for future improvements, or "additions." That's why we'll introduce version control in the driver API for R4. Every driver built then and thereafter will contain a version number that tells which API the driver complies to. In concrete terms, the version number is a driver global variable that's exported and checked by the device file system at load time. In Drivers.h you'll find the following declarations: #define B_CUR_DRIVER_API_VERSION2 extern _EXPORT int32 api_version; In your driver code, you'll need to add the following definition: #include <Drivers.h> ... int32 api_version= B_CUR_DRIVER_API_VERSION. Driver API version 2 refers to the new (R4) API. Version 1 is the R3 API. If the driver API changes, we would bump the version number to 3. Newly built drivers will have to comply to the new API and declare 3 as their API version number. Old driver binaries would still declare an old version (1 or 2), forcing the device file system to translate them to the newer API (3). This incurs only a negligible overhead in loading drivers. But, attendez, vous say. What about pre-R4 drivers, which don't declare what driver API they comply to? Well, devfs treats drivers without version number as complying to the first version of the API—the one documented today in the Be Book. Et voila. I know you're all dying to learn what's new in the R4 driver API... Here it is, revealed to you exclusively! We'll introduce scatter-gather and (a real) select in R4, and add a few entries in the device_hooks structure to let drivers deal with the new calls. As discreetly announced by Trey in his article Be Engineering Insights: An Introduction to the Input Server, we've added 2 new system calls, well known to the community of UNIX programmers: struct iovec { void * iov_base; size_t iov_len; }; typedef struct iovec iovec; extern ssize_t readv_pos(int fd, off_t pos, constiovec * vec, size_t count); extern ssize_t writev_pos(int fd, off_t pos, constiovec * vec, size_t count); These calls let you read and write multiple buffers to/from a file or a device. They initiate an IO on the device pointed to by fd, starting at position pos, using the count buffers described in the array vec. One may think this is equivalent to issuing multiple simple reads and writes to the same file descriptor—and, from a semantic standpoint, it is. But not when you look at performance! Most devices that use DMA are capable of "scatter-gather." It means that the DMA can be programmed to handle, in one shot, buffers that are scattered throughout memory. Instead of programming N times an IO that points to a single buffer, only one IO needs to be programmed, with a vector of pointers that describe the scattered buffers. It means higher bandwidth. At a lower level, we've added two entries in the device_hooks structure: typedef status_t (*device_readv_hook) (void * cookie, off_t position, constiovec * vec, size_t count, size_t * numBytes); typedef status_t (*device_writev_hook) (void * cookie, off_t position, constiovec * vec, size_t count, size_t * numBytes); typedef struct { ... device_readv_hook readv; /* scatter-gather read from the device */ device_writev_hook writev; /* scatter-gather write to the device */ } device_hooks; Notice that the syntax is very similar to that of the single read and write hooks: typedef status_t (*device_read_hook) (void * cookie, off_t position, void * data, size_t * numBytes); typedef status_t (*device_write_hook) (void * cookie, off_t position, constvoid *data, size_t * numBytes); Only the descriptions of the buffers differ. Devices that can take advantage of scatter-gather should implement these hooks. Other drivers can simply declare them NULL. When a readv() or writev() call is issued to a driver that does not handle scatter-gather, the IO is broken down into smaller IO using individual buffers. Of course, R3 drivers don't know about scatter-gather, and are treated accordingly. I'm not breaking the news either with this one. Trey announced in his article last week the coming of select(). This is another call that is very familiar to UNIX programers: extern int select(int nbits, struct fd_set * rbits, struct fd_set * wbits, struct fd_set * ebits, struct timeval * timeout); rbits, wbits and ebits are bit vectors. Each bit represents a file descriptor to watch for a particular event: rbits: wait for input to be available (read returns something immediately without blocking) wbits: wait for output to drain (write of 1 byte does not block) ebits: wait for exceptions. select() returns when at least one event has occurred, or when it times out. Upon exit, select() returns (in the different bit vectors) the file descriptors that are ready for the corresponding event. select() is very convenient because it allows a single thread to deal with multiple streams of data. The current alternative is to spawn one thread for every file descriptor you want to control. This might be overkill in certain situations, especially if you deal with a lot of streams. select() is broken down into two calls at the driver API level: one hook to ask the driver to start watching a given file descriptor, and another hook to stop watching. Here are the two hooks we added to the device_hooks structure: struct selectsync; typedef struct selectsync selectsync; typedef status_t (*device_select_hook) (void * cookie, uint8 event, uint32 ref, selectsync * sync); typedef status_t (*device_deselect_hook) (void * cookie, uint8 event, selectsync * sync); #define B_SELECT_READ1 #define B_SELECT_WRITE2 #define B_SELECT_EXCEPTION3 typedef struct { ... device_select_hook select; /* start select */ device_deselect_hook deselect; /* stop select */ } device_hooks; cookie represents the file descriptor to watch. event tells what kind of event we're waiting on for that file descriptor. If the event happens before the deselect hook is invoked, then the driver has to call: extern void notify_select_event(selectsync * sync, uint32 ref); with the sync and ref it was passed in the select hook. This happens typically at interrupt time, when input buffers are filled or when output buffers drain. Another place where notify_select_event() is likely to be called is in your select hook, in case the condition is already met there. The deselect hook is called to indicate that the file descriptor shouldn't be watched any more, as the result of one or more events on a watched file descriptor, or of a timeout. It is a serious mistake to call notify_select_event() after your deselect hook has been invoked. Drivers that don't implement select() should declare these hooks NULL. select(), when invoked on such drivers, will return an error. Another big addition to R4 is the notion of "bus managers." Arve wrote a good article on this, which you'll find at: Be Engineering Insights: Splitting Device Drivers and Bus Managers Bus managers are loadable modules that drivers can use to access a hardware bus. For example, the R3 kernel calls which drivers were using looked like this: extern long get_nth_pci_info(long index, pci_info * info); extern long read_pci_config(uchar bus, uchar device, uchar function, long offset, long size); extern void write_pci_config(uchar bus, uchar device, uchar function, long offset, long size, long value); ... Now, they're encapsulated in the PCI bus manager. The same happened for the ISA, SCSI and IDE bus related calls. More busses will come. This makes the kernel a lot more modular and lightweight, as only the code handling the present busses are loaded in memory. In R3, /boot/beos/system/add-ons/kernel/drivers/ and /boot/home/config/add-ons/kernel/drivers/ contained the drivers. This flat organization worked fine. But it had the unfortunate feature of not scaling very well as you add drivers to the system, because there is no direct relation between the name of a device you open and the name of the driver that serves it. This potentially causes all drivers to be searched when an unknown device is opened. That's why we've broken down these directories into subdirectories that help the device file system locate drivers when new devices are opened. ../add-ons/kernel/dev/ mirrors the devfs name space using symlinks and directories ../add-ons/kernel/bin/ contains the driver binaries For example, the serial driver publishes the following devices: ports/serial1 ports/serial2 It lives under ../add-ons/kernel/bin/ as serial, and has the following symbolic link set up: ../add-ons/kernel/drivers/dev/ports/serial -> ../../bin/serial If "fred", a driver, wishes to publish a ports/XYZ device, then it should setup this symbolic link: ../add-ons/kernel/drivers/dev/ports/fred -> ../../bin/fred If a driver publishes devices in more than one directory, then it must setup a symbolic link in every directory in publishes in. For example, driver "foo" publishes: fred/bar/machin greg/bidule then it should come with the following symbolic links: ../add-ons/kernel/drivers/dev/fred/bar/foo -> ../../../bin/foo ../add-ons/kernel/drivers/dev/greg/foo -> ../../bin/foo This new organization speeds up device name resolution a lot. Imagine that we're trying to find the driver that serves the device /dev/fred/bar/machin. In R3, we have to ask all the drivers known to the system, one at a time, until we find the right one. In R4, we only have to ask the drivers pointed to by the links in ../add-ons/kernel/drivers/dev/fred/bar/. You see that the driver world has undergone many changes in BeOS Release 4. All this is nice, but there are other features that did not make it in, which we'd like to implement in future releases. Perhaps the most important one is asynchronous IO. The asynchronous read() and write() calls don't block—they return immediately instead of waiting for the IO to complete. Like select(), asynchronous IO makes it possible for a single thread to handle several IOs simultaneously, which is sometimes a better option than spawning one thread for each IO you want to do concurrently. This is true especially if there are a lot of them. Thanks to the driver API versioning, we'll have no problems throwing the necessary hooks into the device_hooks structure while remaining backward compatible with existing drivers. In application writing, the Interface Kit (and the Application Server which runs underneath the Kit) are responsible for handling all the display that finally goes on screen. They provide a nice, reasonably fast way to develop a good GUI for your application. Sometimes however, they aren't fast enough, especially for game writing. Using a windowed-mode BDirectWindow sometimes helps (or doesn't slow things down, in any case), but you still have to cooperate with other applications whose windows can suddenly overlap yours or want to use the graphics accelerator exactly when you need it. Switching to a full-screen BDirectWindow improves things a little more, but you may still want even higher performance. What you need is a BWindowScreen. The BWindowScreen basically allows you to establish an (almost) direct connection to the graphics driver, bypassing (almost) the whole Application Server. Its great advantage over BDirectWindow is that it allows you to manipulate all the memory from the graphics card, instead of just having a simple frame buffer. Welcome to the world of double- (or triple-) buffering, of high-speed blitting, of 60+ fps performance. Looks quite exciting, hey? Unfortunately, all is not perfect. BWindowScreen is a low-level API. This means that you'll have to do many things by hand that you were used to having the Application Server do for you. BWindowScreen is also affected by some hardware and software bugs, which can make things harder than they should be. BWindowScreen reflects the R3 graphics architecture. That architecture is going away in R4, since it was becoming dated. The architecture that replaces it will allow some really cool things in later releases. BWindowScreen is still the best way to get high-performance full screen display in R4, though it too will be replaced by something even better in a later release. Here is a code snippet, ready for you to use and customize: #include <Application.h> #include <WindowScreen.h> #include <string.h> typedef long (*blit_hook)(long,long,long,long,long,long); typedef long (*sync_hook)(); class NApplication:public BApplication{ public: NApplication(); bool is_quitting; // So that the WindowScreen knows what to do // when disconnected. private: bool QuitRequested(); void ReadyToRun(); }; class NWindowScreen:public BWindowScreen{ public: NWindowScreen(status_t*); private: void ScreenConnected(bool); long MyCode(); static long Entry(void*); thread_id tid; sem_id sem; area_id area; uint8* save_buffer; uint8* frame_buffer; ulong line_length; bool thread_is_locked; // small hack to allow to quit the // app from ScreenConnected() blit_hook blit; // hooks to the graphics driver functions sync_hook sync; }; main() { NApplication app; } NApplication:: NApplication() : BApplication("application/x-vnd.Be-sample-jbq1") { Run(); // see you in ReadyToRun() } void NApplication:: ReadyToRun() { status_t ret= B_ERROR; is_quitting= false; NWindowScreen* ws=new NWindowScreen(& ret); // exit if constructing the WindowScreen failed. if (( ws== NULL)||( ret< B_OK)) PostMessage( B_QUIT_REQUESTED); } bool NApplication:: QuitRequested() { is_quitting= true; return true; } NWindowScreen:: NWindowScreen(status_t* ret) : BWindowScreen("Example", B_8_BIT_640x480, ret) { thread_is_locked= true; tid=0; if (* ret== B_OK) { // this semaphore controls the access to the WindowScreen sem= create_sem(0,"WindowScreen Access"); // this area is used to save the whole framebuffer when // switching workspaces. (better than malloc()). area= create_area("save",& save_buffer, B_ANY_ADDRESS, 640*2048, B_NO_LOCK, B_READ_AREA| B_WRITE_AREA); // exit if an error occurred. if (( sem< B_OK)||( area< B_OK)) * ret= B_ERROR; else Show(); // let's go. See you in ScreenConnected. } } void NWindowScreen:: ScreenConnected(bool connected) { if ( connected) { if (( SetSpace( B_8_BIT_640x480)< B_OK) ||( SetFrameBuffer(640,2048)< B_OK)) { // properly set the framebuffer. // exit if an error occurs. be_app-> PostMessage( B_QUIT_REQUESTED); return; } // get the hardware acceleration hooks. get them each time // the WindowScreen is connected, because of multiple // monitor support blit=(blit_hook) CardHookAt(7); sync=(sync_hook) CardHookAt(10); // cannot work with no hardware blitting if ( blit== NULL) { be_app-> PostMessage( B_QUIT_REQUESTED); return; } // get the framebuffer-related info, each time the // WindowScreen is connected (multiple monitor) frame_buffer=(uint8*)( CardInfo()-> frame_buffer); line_length= FrameBufferInfo()-> bytes_per_row; if ( tid==0) { // clean the framebuffer memset( frame_buffer,0,2048* line_length); // spawn the rendering thread. exit if an error occurs. // don't use a real-time thread. URGENT_DISPLAY is enough. if ((( tid= spawn_thread( Entry,"rendering thread", B_URGENT_DISPLAY_PRIORITY, this))< B_OK) ||( resume_thread( tid)< B_OK)) be_app-> PostMessage( B_QUIT_REQUESTED); } else for (int y=0; y<2048; y++) // restore the framebuffer when switching back from // another workspace. memcpy( frame_buffer+ y* line_length, save_buffer+640* y,640); // set our color list. for (int i=0; i<128; i++) { rgb_color c1={ i*2, i*2, i*2}; rgb_color c2={127+ i,2* i,254}; SetColorList(& c1, i, i); SetColorList(& c2, i+128, i+128); } // allow the rendering thread to run. thread_is_locked= false; release_sem( sem); } else { // block the rendering thread. if (! thread_is_locked) { acquire_sem( sem); thread_is_locked= true; } // kill the rendering and clean up when quitting if (((( NApplication*) be_app)-> is_quitting)) { status_t ret; kill_thread( tid); wait_for_thread( tid,& ret); delete_sem( sem); delete_area( area); } else { // set the color list black so that the screen doesn't // seem to freeze while saving the framebuffer rgb_color c={0,0,0}; for (int i=0; i<256; i++) SetColorList(& c, i, i); // save the framebuffer for (int y=0; y<2048; y++) memcpy( save_buffer+640* y, frame_buffer+ y* line_length,640); } } } long NWindowScreen:: Entry(void* p) { return (( NWindowScreen*) p)-> MyCode(); } long NWindowScreen:: MyCode() { // gain access to the framebuffer before writing to it. acquire_sem( sem); for (int j=1440; j<2048; j++) { for (int i=0; i<640; i++) { // draw the background ripple pattern float val=63.99*(1+cos(2* PI*(( i-320)*( i-320) +( j-1744)*( j-1744))/1216)); frame_buffer[ i+ line_length* j]=int( val); } } ulong numframe=0; bigtime_t trgt=0; ulong y_origin; uint8* current_frame; while( true) { // the framebuffer coordinates of the next frame y_origin=480*( numframe%3); // and a pointer to it current_frame= frame_buffer+ y_origin* line_length; // copy the background int ytop= numframe%608, ybot= ytop+479; if ( ybot<608) { blit(640,1440+ ytop,0, y_origin,639,479); } else { blit(0,1440+ ytop,0, y_origin,639,1086- ybot); blit(0,1440,0, y_origin+1087- ybot,639, ybot-608); } // calculate the circle position. doing such calculations // between blit() and sync() can save some time. uint32 x=287.99*(1+sin( numframe/72.)); uint32 y=207.99*(1+sin( numframe/52.)); if ( sync) sync(); // draw the circle for (int j=0; j<64; j++) { for (int i=0; i<64; i++) { if (( i-31)*( i-32)+( j-31)*( j-32)<=1024) current_frame[ x+ i+ line_length*( y+ j)]+=128; } } // release the semaphore while waiting. gotta release it // at some point or nasty things will happen! release_sem( sem); // we're doing some triple buffering. unwanted things would // happen if we rendered more pictures than the card can // display. we here make sure not to render more than 55.5 // pictures per second. if ( system_time()< trgt) snooze( trgt- system_time()); trgt= system_time()+18000; // acquire the semaphore back before talking to the driver acquire_sem( sem); // do the page-flipping MoveDisplayArea(0, y_origin); // and go to the next frame! numframe++; } return 0; } There are some traps to be aware of before you begin playing with the BWindowScreen: About BWindowScreen(), SetSpace() and SetFrameBuffer(): The constructor does not completely initialize the BWindowScreen internal data. You should call Show(), SetSpace() and SetFrameBuffer() *in that order* if you want the structures returned by CardInfo() and FrameBufferInfo() to be valid. You should call Show() just after constructing the BWindowScreen object, and call SetSpace() and SetFrameBuffer() in ScreenConnected() *each time* your BWindowScreen is connected (not just the first time). You should neither call SetSpace() without SetFrameBuffer() nor call SetFrameBuffer() without SetSpace(). Always call SetSpace() *then* SetFrameBuffer() for the best results. Choosing a good color_space and a good framebuffer size: You should be aware that in R3.x some drivers do not support 16 bpp, and some others do not support 32 bpp. You should also know that some graphics cards do not allow you to choose any arbitrary framebuffer size; some will not accept a framebuffer wider than 1600 or 2048, or higher than 2048, some will only be able to use a small set of widths. I recommend not using a framebuffer wider than the display area (except for temporary development reasons or if you don't care about compatibility issues). It's also a good idea not to use the full graphics card memory but to leave 1kB to 4kB unused (for the hardware cursor). Here are some height limits you should not break if you want your program to be compatible with the mentioned cards: in a B_8_BIT_640x480 space: 640x1632 all 1MB cards 640x2048 2MB PowerMac 7300/7600/8500/8600, #9GXE64 (BeBox) 640x3270 all 2MB cards in a B_8_BIT_800x600 space: 800x1305 all 1MB cards 800x2048 2MB PowerMac 7300/7600/8500/8600, #9GXE64 (BeBox) 800x2180 2MB Matrox cards 800x2616 all 2MB cards in a B_16_BIT_640x480 space: 640x1635 all 2MB cards 640x3273 all 4MB cards in a B_16_BIT_800x600 space: 800x1308 all 2MB cards 800x2182 4MB Matrox cards 800x2618 all 4MB cards in a B_32_BIT_640x480 space: 640x1636 all 4MB cards in a B_32_BIT_800x600 space: 800x1309 all 4MB cards MoveDisplayArea() and hardware scrolling: Although the Be Book says that MoveDisplayArea() can be used for hardware scrolling, you shouldn't try to use it that way. Some graphics cards are known to not implement hardware scrolling properly. You should try to use MoveDisplayArea() only with x=0, and only for page-flipping (not for real hardware scrolling). CardHookAt(10) ("sync"): One of the keys to high-performance—the graphics card hooks must be treated with special attention. If there is a sync function (hook number 10), all other hooks can be asynchronous. Be careful to call the sync hook when it's needed (e.g., to synchronize hardware acceleration and framebuffer access, or to finish all hardware accelerations before page-flipping or before being disconnected from the screen). ScreenConnected() and multiple monitors: While R3 does not support any form of multiple monitors, future releases will. You should keep in mind that a BWindowScreen might be disconnected from one screen and reconnected to another one. Consequently, you must refresh the card hooks each time your BWindowScreen is connected, as well as any variable that could be affected by a change in CardInfo() or FrameBufferInfo(). MoveDisplayArea() and the R3 Matrox driver: In R3.x, MoveDisplayArea() returns immediately but the display area is not effective until the next vertical retrace, except for the Matrox driver. The default Matrox driver actually waits until the next vertical retrace before returning (and sometimes misses a retrace and has to wait until the next one). There is an alternate Matrox driver at which returns immediately, but the display area is effective immediately as well. Seen from the program, this driver has the same behaviour as all other drivers, at the cost of a little tearing. It's advisable to use that driver when developing BWindowScreen applications under R3. (All drivers will have the same behaviour in R4.) About 15/16bpp: We have discovered the bugs in the R3 drivers that affected 5/16bpp WindowScreens with ViRGE and Matrox cards. There are some updated drivers available at: and Also be aware that some drivers do not support both 15bpp and 16bpp. Even worse, the old Matrox driver would use a 15bpp screen when asked for 16bpp. Update your drivers! It is funny, but somewhat fitting that many times the Newsletter article you intend to write is not really the Newsletter article you end up writing. With the best of intentions, I chose to follow a recent trend in articles and talk about multithreaded programming and locking down critical sections of code and resources. The vehicle for my discussion was to be a Multiple-Reader Single-Writer locking class in the mode of BLocker, complete with Lock(), Unlock(), IsLocked() and an Autolocker-style utility class. Needless to say, the class I was expecting is a far cry from what I will present today. In the hopes of this being my first short Newsletter article, I will leave the details of the class to the sample code. For once it was carefully prepared ahead of time and is reasonably commented. I will briefly point out two neat features of the class before heading into a short discussion of locking styles. The first function to look at is the IsWriteLocked() function, as it shows a way to cache the index of a thread's stack in memory, and use it to help identify a thread faster than the usual method, find_thread(NULL). The stack_base method is not infallible, and needs to be backed up by find_thread(NULL) when there is no match, but it is considerably faster when a match is found. This is kind of like the benaphore technique of speeding up semaphores. The other functions to look at are the register_thread() and unregister_thread() functions. These are debug functions that keep state about threads holding a read-lock by creating a state array with room for every possible thread. An individual slot can be set aside for each thread and specified by performing an operation: thread_id % max_possible_threads. Again, the code itself lists these in good detail. I hope you find the class useful. A few of the design decisions I made are detailed in the discussion below. I want to take a little space to discuss locking philosophies and their trade-offs. The two opposing views can be presented briefly as "Lock Early, Lock Often" and "Lock Only When and Where Necessary." These philosophies sit on opposite ends of the spectrum of ease of use and efficiency, and both have their adherents in the company (understanding that most engineers here fall comfortably in the middle ground.) The "Lock Early, Lock Often" view rests on the idea that if you are uncertain exactly where you need to lock, it is better to be extra sure that you lock your resources. It advises that all locking classes should support "nested" calls to Lock(); in other words if a thread holds a lock and calls Lock() again, it should be allowed to continue without deadlocking waiting for itself to release the lock. This increases the safety of the lock, by allowing you to wrap all of your functions in Lock() / Unlock() pairs and allowing the lock to take care of knowing if the lock needs to be acquired or not. An extension of this are Autolocking classes, which acquire a lock in their constructor and release it in their destructor. By allocating one of these on the stack you can be certain that you will safely hold the lock for the duration of your function. The main advantage of the "Lock Early, Lock Often" strategy is its simplicity. It is very easy to add locking to your applications: create an Autolock at the top of all your functions and be assured that it will do its magic. The downside of this philosophy is that the lock itself needs to get smarter and to hold onto state information, which can cause some inefficiencies in space and speed. At the other end of the spectrum is the "Lock Only When and Where Necessary." This philosophy asserts that programmers using the "Lock Early, Lock Often" strategy do not understand the locking requirements of their applications, and that is essentially a bug just waiting to happen. In addition, the overhead added to applications by locking when it is unnecessary (say, in a function that is only called >from within another function that already holds the lock) and by using an additional class to manage the lock makes the application larger and less efficient. This view instead requires programmers to really design their applications and to fully understand the implications of the locking mechanisms chosen. So, which is correct? I think it often depends on the tradeoffs you are willing to make. With locks with only a single owner, the state information needed is very small, and usually the lock's system for determining if a thread holds the lock is fairly efficient (see the stack_base trick mentioned above to make it a bit faster.) Another consideration is how important speed and size are when dealing with the lock. In a very crucial area of an important, busy program, like the app_server, increasing efficiency can be paramount. In that case it is much, much better to take the extra time to really understand the locking necessary and to reduce the overhead. Even better would be to design a global application architecture that makes the flow of information clear, and correspondingly makes the locking mechanisms much better (along with everything else.) The MultiLocker sample code provided leans far to the efficiency side. The class itself allows multiple readers to acquire the lock, but does not allow these readers to make nested ReadLock() calls. The overhead for keeping state for each readers (storage space and stomping through that storage space every time a ReadLock() or ReadUnlock() call was made) was simply too great. Writers, on the other hand, have complete control over the lock, and may make ReadLock() or additional WriteLock() calls after the lock has been acquired. This allows a little bit of design flexibility so that functions that read information protected by the lock can be safely called by a writer without code duplication. The class does have a debug mode where state information is kept about readers so you can be sure that you are not performing nested ReadLock()s. The class also has timing functions so that you can see how long each call takes in both DEBUG mode and, with slight modifications to the class, the benefits of the stack-based caching noted above. I have included some extensive timing information from my computers that you can look at, or you can run your own tests with the test app included. Note that the numbers listed are pretty close to the raw numbers of the locking overhead, as writers only increment a counter, and readers simply access that counter. The sample code can be found at: The class should be pretty efficient, and you are free to use it and make adjustments as necessary. My thanks go out to Pierre and George from the app_server team, for the original lock on which this is based, and for their assistance with (and insistence on) the efficiency concerns. And, if it is, are we wrong to focus on it? Can we pace off enough running room to launch the virtuous ascending spiral of developers begetting users begetting developers? Is the A/V space large enough to swing a cat and ignite a platform? Perhaps there's another way to look at the platform question, one that's brought to mind by the latest turn of Apple's fortunes. Back in 1985, Apple had a bad episode: The founders were gone, the new Mac wasn't taking off and the establishment was dissing Apple as a toy company with a toy computer. The advice kept pouring in: reposition the company, refocus, go back to your roots, find a niche where you have a distinctive advantage. One seer wanted to position Apple as a supplier of Graphics-Based Business Systems, another wanted to make the company the Education Computer Company. Steve Jobs, before taking a twelve year sabbatical, convinced Apple to buy 20% of Adobe, and thus began the era of desktop publishing and the Gang of Four (Apple, Adobe, Aldus and Canon). Apple focused on publishing, and is still focused on publishing (as evidenced by the other Steve—Ballmer—ardently promoting NT as *the* publishing platform). Does that make Apple a publishing niche player? Not really. iMac buyers are not snapping up the "beetle" Mac for publishing, they just want a nice general-purpose computer. Although Apple is still thrown into the publishing bin, the Mac has always strived to be an everyday personal computer, and the numbers show that this isn't mere delusion: For example, Macs outsell Photoshop ten to one. But let's assume that at the company's zenith, publishing made up as much as 25% of Apple sales. Even then, with higher margin CPUs, Apple couldn't live on publishing alone, hence the importance of a more consumer-oriented product such as the iMac and hence, not so incidentally, the importance of keeping Microsoft Office on the platform. The question of the viability of an A/V strategy stems from us being thrown into the same sort of bin as our noble hardware predecessor. But at Be we have an entirely different business model. A hardware company such as Apple can't survive on a million units per year. Once upon a time it could, but those were the salad days of expensive computers and 66% gross margins. We, on the other hand, have a software-only business model and will do extremely well with less than a million units per year--and so will our developers. As a result, the virtuous spiral will ignite (grab a cat). More important—and here we share Apple's "niche-yet-general" duality -- the question may be one that never needs to be answered: While BeOS shows its unique abilities in A/V, we're also starting to see applications for everyday personal computing. I'm writing this column on Gobe Productive and e-mailing it to the prose-thetic surgeon using Mail-It, both purchased using NetPositive and SoftwareValet.
https://www.haiku-os.org/legacy-docs/benewsletter/Issue3-36.html
CC-MAIN-2017-39
refinedweb
5,369
58.92
alm as a dependency dependencies: mustache4dart: '>= 1.0.0'! At the moment the project is under heavy development but pass all the Mustache specs. If you want to run the tests yourself, just do what drone.io does, or to put it by another way, do the following: git clone git://github.com/valotas/mustache4dart.git git submodule init git submodule update pub install test/run.sh If you found a bug, just create a new issue or even better fork and issue a pull request with you fix. The library will follow a semantic versioning Add this to your package's pubspec.yaml file: dependencies: mustache4dart: ">=1.0.4 <2.0.0" If your package is an application package you should use any as the version constraint. If you're using the Dart Editor, choose: Menu > Tools > Pub Install Or if you want to install from the command line, run: $ pub install Now in your Dart code, you can use: import 'package:mustache4dart/ mustache4dart.dart';mustache4dart.dart';
http://pub.dartlang.org/packages/mustache4dart
CC-MAIN-2014-15
refinedweb
168
65.01
Subject: [Boost-announce] [boost] [review] Boost.Nowide is Accepted into Boost From: Frédéric Bron via Boost (boost_at_[hidden]) Date: 2017-06-23 13:34:09 Dear all, I would like to thank all the people who took part in the discussions during the review. All contributors admitted that Boost.Nowide addresses a real issue and solves it, at least partially. Moreover we received 8 official reviews with 6 âacceptâ, 1 âdo not accept as isâ and 1 âI am not against itâ. So Boost.Nowide is accepted for inclusion in Boost. Congratulation and many thanks to Artyom Beilis for this contribution. However, 2 major issues arose in the discussions and they will have to be fixed before inclusion: 1. handling of ill-formed Unicode input should be improved/clarified 2. the documentation needs improvements In particular, this needs to be fixed: * Design: There was a lot of discussions on what to do with ill-formed UTF-16 strings on Windows, in particular coming from the file system while converting them to narrow UTF-8 strings. Basically, 3 options: 1. allow the roundtrip ill formed UTF-16 > UTF-8 > ill formed UTF-16, this is not possible with UTF-8 encoding, WTF-8 (not a standard) may be an option with some issues 2. issue an error (todayâs nowide option, even if the author would like to move to 3.) 3. convert as much as possible to fully conformant UTF-8 with character replacement U+FFFD for invalid input. This is what the NT kernel does with functions like RtlUTF8ToUnicodeN. This would allow cout to continue to work after invalid output instead of failing. After reviewing what was chosen for std::filesystem in C++17, I think 3. is the right think to do: - std::filesystem::path::u8string does not convert strings on Posix. If a conversion is done (on Windows) the output is fully conformant UTF-8. - the roundtrip is not guaranteed in std::filesystem::path conversions (implementation defined) but the roundtrip of a converted paths is guaranteed to work which is the case with Boost.Nowide: once in fully conformant UTF-8, narrow > wide > narrow works fine. - the use of the replacement character U+FFFD is what is done by the NT kernel (question: do you have to reimplement RtlUTF8ToUnicodeN and RtlUnicodeToUTF8N, why not using them)? It should been made clear in the doc that this choice does not guarantees the roundtrip on Windows so that ill-formed filename may not be opened. * Documentation: - the main part of the doc (the non-reference part) should be more detailed - it should be clear to the reader that Windows and Posix platforms are not handled the same; on Windows, we get UTF-8 strings while on Posix platforms, the user should not expect UTF-8 encoding, just narrow strings, even if UTF-8 is today the most common encoding (addressed in Q/A but is probably too far away). It must be clear from the beginning that on Posix platforms, the library does nothing (including no checking of UTF-8 conformance), just forward to the standard library. - review the use of UTF-8 so that it do not let think that narrow strings are always UTF-8 as this is not necessary the case on Posix platform; maybe just use narrow string? - clarify what is converted to what and when: for each function, we need to know precisely what is the output for all possible input; in particular, better explain for each function what the library does with ill-formed input (either UTF-16 input or narrow input), in particular for args and getenv; say clearly what happens (failbit, error, exception, invalid characterâ¦) - clarify what is done for Windows and what is (not) done for Posix - explain better what is behind nowide::cout, cin, cerr, what is done? - clarify what does the filesystem integration on Windows vs Posix - check if basic_stackstring::swap needs to swap buffer_ when both strings are on the stack - The doc/ directory is missing a Jamfile, and there's also no meta/libraries.json. * Implementation: - The global BOOST_USE_WINDOWS_H macro should probably be respected. When windows.h is not used, the WinAPI prototypes are declared in the global namespace. This can cause issues when windows.h is also included before/after the Nowide headers, and the prototypes don't match exactly (e.g. short vs. wchar_t). Also it pollutes the global namespace. Nowide should do what other Boost libraries do, and declare the prototypes in a private detail namespace instead. Example in <boost/thread/win32/thread_primitives.hpp>. Need to add a namespace around your definitions. - MinGW gcc: fix warnings in c++03 mode and errors in c++11 and above - Cygwin gcc/clang (errors because of missing ::getenv and friends) - when this is fixed, an error remains: in file included from test\test_system.cpp:11:0: ..\../boost/nowide/cenv.hpp: In function 'int boost::nowide::putenv(char*)': ..\../boost/nowide/cenv.hpp:104:41: warning: comparison between pointer and zero character constant [-Wpointer-compare] while(*key_end!='=' && key_end!='\0') - Under MSYS, one test failed Fail: Error boost::nowide::cin.putback(c) in test_iostream.cpp:17 main - filesystem integration should return somehow the previous locale for potential undoing * Misc: - There are no .travis.yml and appveyor.yml files in the root. - a number of copy/paste references to Boost.System (test/Jamfile) and Boost.Locale (index.html), should be corrected This are just suggestions and are not mandatory for acceptance: - implement stat/opendir/readir - integration with boost::interprocess::file_lock - integration with boost::program_options (in particular for correct line breaks when showing options) Frédéric _______________________________________________
https://lists.boost.org/boost-announce/2017/06/0516.php
CC-MAIN-2018-17
refinedweb
937
53.41
$X Synopsis $X Description $X contains the current horizontal position of the cursor. As characters are written to a device, InterSystems IRIS updates $X to reflect the horizontal cursor position. Each printable character that is output increments $X by 1. A carriage return (ASCII 13) or form feed (ASCII 12) resets $X to 0 (zero). $X is a 16-bit unsigned integer. $X wraps to 0 when its value reaches 16384 (the two remaining bits are used for Japanese pitch encoding).. $X with Terminal I/O The following table shows the effects of different characters on $X. The S(ecret) protocol of the OPEN and USE commands turns off echoing. It also prevents $X from being changed during input, so it indicates the true cursor position. WRITE $CHAR() changes $X. WRITE * does not change $X. For example, WRITE $X,"/",$CHAR(8),$X performs the backspace (deleting the / character) and resets $X accordingly, returning 01. In contrast, WRITE $X,"/",*8,$X performs the backspace (deleting the / character) but does not reset $X; it returns 02. (See the WRITE command for further details.) Using WRITE *, you can send a control sequence to your terminal and $X will still reflect the true cursor position. Since some control sequences do move the cursor, you can use the SET command to set $X directly. For example, the following commands move the cursor to column 20 and line 10 on a Digital VT100 terminal (or equivalent) * (integer expression) syntax and specify the ASCII value of each character in the string. For example, instead of using: WRITE !,$CHAR(27)_"[1m" WRITE !,$X use this equivalent form: WRITE !,*27,*91,*49,*109 WRITE !,$X As a rule, after any escape sequence that explicitly moves the cursor, you should update $X and $Y to reflect the actual cursor position. You can set how $X handles escape sequences for the current process using the DX() method of the %SYSTEM.Process class. The system-wide default behavior can be established by setting the DX property of the Config.Miscellaneous class. $X with TCP and Interprocess Communication When you use the WRITE command to send data to either a client or server TCP device, InterSystems IRIS first stores the data in a buffer. It also updates $X to reflect the number of characters in the buffer. It does not include the ASCII characters <RETURN> and <LINE FEED> in this count because they are considered to be part of the record. If you flush the $X buffer with the WRITE ! command, InterSystems IRIS resets $X to 0 and increments the $Y value by 1. If you flush the $X and $Y buffers with the WRITE # command, InterSystems IRIS writes the ASCII character <FORM FEED> as a separate record and resets both $X and $Y to 0. See Also I/O Devices and Commands in I/O Device Guide Terminal I/O in I/O Device Guide Local Interprocess Communication in I/O Device Guide TCP Communication in I/O Device Guide
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/DocBook.UI.Page.cls?KEY=RCOS_VX
CC-MAIN-2021-10
refinedweb
499
63.29
RxJava is missing a factory to create an infinite stream of natural numbers. Such a stream is useful e.g. when you want to assign unique sequence numbers to possibly infinite stream of events by zipping both of them: That doesn't sound right, are there any other strategies? Yes, many: By the way when you are turning an Meet The difference between Because this operator produces events one at a time, typically it needs some sort of state to figure out where it was the last time1. This is what Such a programming model is obviously harder than a This is similar to stateless HTTP protocol that uses small pieces of state called session* on the server to keep track of past requests. Flowable<Long> naturalNumbers = //??? Flowable<Event> someInfiniteEventStream = //... Flowable<Pair<Long, Event>> sequenced = Flowable.zip( naturalNumbers, someInfiniteEventStream, Pair::of );Implementing naturalNumbersis surprisingly complex. In RxJava 1.x you could briefly get away with Observablethat does not respect backpressure: import rx.Observable; //RxJava 1.x Observable<Long> naturalNumbers = Observable.create(subscriber -> { long state = 0; //poor solution :-( while (!subscriber.isUnsubscribed()) { subscriber.onNext(state++); } });What does it mean that such stream is not backpressure-aware? Well, basically the stream produces events (ever-incrementing statevariable) as fast as the CPU core permits, millions per second, easily. However when consumers can't consume events so fast, growing backlog of unprocessed events starts to appear: naturalNumbers // .observeOn(Schedulers.io()) .subscribe( x -> { //slooow, 1 millisecond } );The program above (with observeOn()operator commented out) runs just fine because it has accidental backpressure. By default everything is single threaded in RxJava, thus producer and consumer work within the same thread. Invoking subscriber.onNext()actually blocks, so the whileloop throttles itself automatically. But try uncommenting observeOn()and disaster happens a few milliseconds later. The subscription callback is single-threaded by design. For every element it needs at least 1 millisecond, therefore this stream can process not more than 1000 events per second. We are somewhat lucky. RxJava quickly discovers this disastrous condition and fails fast with MissingBackpressureException Flowable(backpressure-aware) the same way as you can with Observable. It's not possible to create Flowablethat overloads the consumer with messages: Flowable<Long> naturalNumbers = Flowable.create(subscriber -> { long state = 0; while (!subscriber.isCancelled()) { subscriber.onNext(state++); } }, BackpressureStrategy.DROP);Did you spot this extra DROPparameter? Before we explain it, let's see the output when we subscribe with slow consumer: 0 1 2 3 //...continuous numbers... 126 127 101811682 //...where did my 100M events go?!? 101811683 101811684 101811685 //...continuous numbers... 101811776 //...17M events disappeared again... 101811777 //...Your mileage may vary. What happens? The observeOn()operator switches between schedulers (thread pools). A pool of threads that are hydrated from a queue of pending events. This queue is finite and has capacity of 128 elements. observeOn()operator, knowing about this limitation, only requests 128 elements from upstream (our custom Flowable). At this point it lets our subscriber process the events, 1 per millisecond. So after around 100 milliseconds observeOn()discovers its internal queue is almost empty and asks for more. Does it get 128, 129, 130...? No! Our Flowablewas producing events like crazy during this 0.1 second period and it (astonishingly) managed to generate more than 100 million numbers in that time frame. Where did they go? Well, observeOn()was not asking for them so the DROPstrategy (a mandatory parameter) simply discarded unwanted events. That doesn't sound right, are there any other strategies? Yes, many: BackpressureStrategy BackpressureStrategy.BUFFER: If upstream produces too many events, they are buffered in an unbounded queue. No events are lost, but your whole application most likely is. If you are lucky, OutOfMemoryErrorwill save you. I got stuck on 5+ second long GC pauses. BackpressureStrategy.ERROR: If over-production of events is discovered, MissingBackpressureExceptionwill be thrown. It's a sane (and safe) strategy. BackpressureStrategy.LATEST: Similar to DROP, but remembers last dropped event. Just in case request for more data comes in but we just dropped everything - we at least have the last seen value. BackpressureStrategy.MISSING: No safety measures, deal with it. Most likely one of the downstream operators (like observeOn()) will throw MissingBackpressureException. BackpressureStrategy.DROP: drops events that were not requested. Observableto Flowableyou must also provide BackpressureStrategy. RxJava must know how to limit over-producing Observable. OK, so what is the correct implementation of such a simple stream of sequential natural numbers? Meet The difference between Flowable.generate() create()and generate()lies in responsibility. Flowable.create()is suppose to generate the stream in its entirety with no respect to backpressure. It simply produces events whenever it wishes to do so. Flowable.generate()on the other hand is only allowed to generate one event at a time (or complete a stream). Backpressure mechanism transparently figures out how many events it needs at the moment. generate()is called appropriate number of times, for example 128 times in case of observeOn(). Because this operator produces events one at a time, typically it needs some sort of state to figure out where it was the last time1. This is what generate()is: a holder for (im)mutable state and a function that generates next event based on it: Flowable<Long> naturalNumbers = Flowable.generate(() -> 0L, (state, emitter) -> { emitter.onNext(state); return state + 1; });The first argument to generate()is an initial state (factory), 0Lin our case. Now every time a subscriber or any downstream operator asks for some number of events, the lambda expression is invoked. Its responsibility is to call onNext()at most once (emit at most one event) somehow based on supplied state. When lambda is invoked for the first time the stateis equal to initial value 0L. However we are allowed to modify the state and return its new value. In this example we increment longso that subsequent invocation of lambda expression receives state = 1L. Obviously this goes on and on, producing consecutive natural numbers. Such a programming model is obviously harder than a whileloop. It also fundamentally changes the way you implement your sources of events. Rather than pushing events whenever you feel like it you are only passively waiting for requests. Downstream operators and subscribers are pulling data from your stream. This shift enables backpressure at all levels of your pipeline. generate()has a few flavors. First of all if your state is a mutable object you can use an overloaded version that does not require returning new state value. Despite being less functional mutable state tends to produce way less garbage. This assumes your state is constantly mutated and the same state object instance is passed every time. For example you can easily turn an Iterator(also pull-based!) into a stream with all wonders of backpressure: Iterator<Integer> iter = //... Flowable<String> strings = Flowable.generate(() -> iter, (iterator, emitter) -> { if (iterator.hasNext()) { emitter.onNext(iterator.next().toString()); } else { emitter.onComplete(); } });Notice that the type of stream ( <String>) doesn't have to be the same as the type of state ( Iterator<Integer>). Of course if you have a Java Collectionand want to turn it into a stream, you don't have to create an iterator first. It's enough to use Flowable.fromIterable(). Even simpler version of generate()assumes you have no state at all. For example stream of random numbers: Flowable<Double> randoms = Flowable .generate(emitter -> emitter.onNext(Math.random()));But honestly, you will probably need an instance of Randomafter all: Flowable.generate(Random::new, (random, emitter) -> { emitter.onNext(random.nextBoolean()); }); SummaryAs you can see Observable.create()in RxJava 1.x and Flowable.create()have some shortcomings. If you really care about scalability and health of your heavily concurrent system (and otherwise you wouldn't be reading this!) you must be aware of backpressure. If you really need to create streams from scratch, as opposed to using from*()family of methods or various libraries that do the heavy lifting - familiarize yourself with generate(). In essence you must learn how to model certain types of data sources as fancy iterators. Expect more articles explaining how to implement more real-life streams. This is similar to stateless HTTP protocol that uses small pieces of state called session* on the server to keep track of past requests.
https://www.nurkiewicz.com/2017/08/generating-backpressure-aware-streams.html
CC-MAIN-2018-34
refinedweb
1,360
51.14
NAME ::clig::parseCmdline - command line interpreter for Tcl SYNOPSIS package require clig namespace import ::clig::* setSpec var parseCmdline _spec argv0 argc argv DESCRIPTION This manual page describes how to instrument your Tcl-scripts with a command line parser. It requires that package clig is installed whenever your script is run. To find out how to create a parser which is independent of clig, read clig(1). (Well, don’t, it is not yet implemented.) The options to be understood by your script must be declared with calls to ::clig::Flag, ::clig:String etc., which is best done in a separate file, e.g. cmdline.cli, to be sourced by your script. Having these declarations in a separate file allows you to run clig, the program, on that file to create a basic manual page. setSpec The option-declaring functions want to store the declarations in an array with a globally accessible name. For compatibility with older software, the name of this array can not be passed as a parameter. Consequently, your script must declare it before sourcing cmdline.cli. A call like setSpec ::main declares ::main as the database (array) to be filled by subsequent calls to ::clig::Flag and the like. The array should not contain anything except entries created with the declarator functions. parseCmdline After declaring the database and sourcing your declarations from cmdline.cli your script is ready to call parseCmdline. This is typically done with: set Program [file tail $argv0] if {[catch {parseCmdline ::main $Program $argc $argv} err]} { puts stderr $err exit 1 } If parseCmdline finds an unknown option in $argv, it prints a usage- message to stderr and exits the script with exit-code 1. If it finds any other errors, like numerical arguments being out of range, it prints an appropriate error-message to stderr and also exits your script with code 1. Setting Program and passing it to parseCmdline instead of $argv0 results in nicer error-messages. If no errors occur, parseCmdline enters the values found into variables of its callers context. The names of these variables are those declared with the declarator functions. For example with a declaration like Float -ival ival {interval to take into account} -c 2 2 the caller will find the variable ival set to a list with 2 elements, if option -ival was found with two numeric arguments in $argv. If option -ival is not existent on the given command line, ival will not be set. Consequently, it is best to not set the declared variables to any value before calling parseCmdline. Summary The typical skeleton of your script should look like package require clig namespace import ::clig::* setSpec ::main source [file join path to your installed base cmdline.cli] set Program [file tail $argv0] if {[catch {parseCmdline ::main $Program $argc $argv} err]} { puts stderr $err exit 1 } REMARK Of course parseCmdline can be called from a within any proc with a specification database previously filled with the declarator functions. I am not using OptProc because it is not documented and the declaration syntax used here was used for C-programs probably long before it existed. SEE ALSO clig(1), clig_Commandline(7), clig_Description(7), clig_Double(7), clig_Flag(7), clig_Float(7), clig_Int(7), clig_Long(7), clig_Name(7), clig_Rest(7), clig_String(7), clig_Usage(7), clig_Version(7)
http://manpages.ubuntu.com/manpages/hardy/en/man7/clig_parseCmdline.7.html
CC-MAIN-2013-20
refinedweb
547
60.55
Opened 3 years ago Closed 19 months ago Last modified 19 months ago #14829 closed Bug (fixed) URL dispatcher documentation with class-based generic views Description (last modified by Alex) It would be nice to update the url() documentation here to reflect that using class-based generic views means you pass the class object instead of view='some_view' and that class-based views must be imported as opposed to included in the patterns arguments. example: from myapp.views import * urlpatterns += patterns('', url(GenericViewClass.as_view(), name='the_url_name'), ) Attachments (0) Change History (7) comment:1 Changed 3 years ago by Alex comment:2 Changed 3 years ago by DrMeers - Summary changed from Documentation for url() with class-based generic views to URL dispatcher documentation with class-based generic views - Triage Stage changed from Unreviewed to Accepted We could change the URL Dispatcher documentation in *many* places to specify the class-based view alternatives. What might be better is to update "Once one of the regexes matches, Django imports and calls the given view, which is a simple Python function." (1) to better define views based on the recent updates, and refer the reader to the class-based view documentation (2) for more information. There may be some other simple changes required elsewhere in the document also. (1) (2) comment:3 Changed 3 years ago by anonymous - milestone 1.3 deleted - Severity set to Normal - Type set to Bug comment:4 Changed 2 years ago by aaugustin - UI/UX unset Change UI/UX from NULL to False. comment:5 Changed 2 years ago by aaugustin - Easy pickings unset Change Easy pickings from NULL to False. comment:6 Changed 19 months ago by Tim Graham <timograham@…> - Resolution set to fixed - Status changed from new to closed Fixed up the formatting.
https://code.djangoproject.com/ticket/14829
CC-MAIN-2014-15
refinedweb
296
54.56
Nov 29, 2009 12:41 PM|Mazenx|LINK Can someone tell me how to register (in the page) this dll after adding it as a reference? because I want to use the chart control and it gives me this error when i write this : <%@ Register Assembly="Interop.MSChart20Lib" Namespace="Interop.MSChart20Lib" TagPrefix="CH" %> the error is : Error 1 The type or namespace name 'Interop' could not be found (are you missing a using directive or an assembly reference?) C:\Users\Mazen\Documents\Visual Studio 2008\WebSites\Gymo\Financial\Summary.aspx 8 I dont konw what to do or how to reference it , I can see this in the bin folder : Interop.MSChart20Lib I downloaded the mschrt20.ocx file and registered it with cmd , then added as reference microsoft chart control 6.0 . I tried renaming the dll without the word interop it gives this error : Error 1 Could not load file or assembly 'MSChart20Lib' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) All-Star 124292 Points Moderator Nov 29, 2009 03:26 PM|SGWellens|LINK Nov 29, 2009 03:42 PM|Mazenx|LINK I had the add on setup file already but I thought it's just an extra add on ( to be installed later ) , then I thought I'll just add the dll file as a reference from ( c:program files : chart control : the 2 dlls there ) then when i ( add any as a reference ) it just doesnt appear in the bin folder , I googled it some guy said use this mschrt20.ocx , and I started using it and it didnt work...I googled it and no results appeared at the end I posted here at the forum , It's working now thanks alot. 2 replies Last post Nov 29, 2009 03:42 PM by Mazenx
http://forums.asp.net/t/1498289.aspx
CC-MAIN-2014-52
refinedweb
310
66.88
ThreadScope Tour/Run From HaskellWiki < ThreadScope Tour Revision as of 17:36, 7 December 2011 by DuncanCoutts (Talk | contribs) 1 Objective Run ThreadScope on a sample program and get a trace. 2 Steps Copy the following parallel code to hellofib.hs import Control.Parallel.Strategies import System.Environment fib 0 = 1 fib 1 = 1 fib n = runEval $ do x <- rpar (fib (n-1)) y <- rseq (fib (n-2)) return (x + y + 1) main = do args <- getArgs n <- case args of [] -> return 20 [x] -> return (read x) _ -> fail ("Usage: hellofib [n]") print (fib n) Build hellofib.hs ghc -O2 -rtsopts -eventlog -threaded hellofib Run hellofib ./hellofib +RTS -N2 -l View its trace threadscope hellofib.eventlog # on Windows, may be hellofib.exe.eventlog
https://wiki.haskell.org/index.php?title=ThreadScope_Tour/Run&oldid=43470
CC-MAIN-2015-40
refinedweb
122
74.39
I use Ionic 5 + Vue 3’s composition API. This is very annoying because my APP requires running something in the background. I could not put everything inside app.vue… This animation shows on that page if I did something, such as opening a file, selecting another page then coming back, everything on the pages is lost. A new page was created. Same story for all the pages in my app. I am not sure this is expected because there is no reason for it. Ionic said in its document that I should not use VUE’s keep pages API. I also notice, the Ionic didn’t close the previous pages but created a new one. Because if I init a setInterval to print something in the console for 10 seconds, after switching out and back a few times, it will print much quicker. Here are the code for the router, not sure if it is related. import { createRouter, createWebHistory } from '@ionic/vue-router'; import { RouteRecordRaw } from 'vue-router'; const routes: Array<RouteRecordRaw> = [ { path: '', redirect: '/pages/summary' }, { path: '/pages/summary', component: () => import ('../summary/summary-view.vue') }, { path: '/pages/parameters', component: () => import ('../msgparser/param-view.vue') }, { path: '/pages/waypoint', component: () => import ('../map/map-view.vue') }, { path: '/pages/control', component: () => import ('../control/control-view.vue') }, { path: '/pages/file', component: () => import ('../file/file-view.vue') }, ] Thanks.
https://forum.ionicframework.com/t/how-to-prevent-split-pane-create-new-pages-when-i-swithc-between-pages/220428
CC-MAIN-2022-21
refinedweb
223
59.19
re_comp, re_exec - BSD regex functions Synopsis Description Notes Colophon #define _REGEX_RE_COMP #include <sys/types.h> #include <regex.h> char *re_comp(char *regex); int re_exec(char *string); re_comp() is used to compile the null-terminated regular expression pointed to by regex. The compiled pattern occupies a static area, the pattern buffer, which is overwritten by subsequent use of re_comp(). If regex is NULL, no operation is performed and the pattern buffers. 4.3BSD. These functions are obsolete; the functions documented in regcomp(3) should be used instead. regcomp(3), regex(7), GNU regex manual This page is part of release 3.44 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.sgvulcan.com/re_comp.3.php
CC-MAIN-2017-17
refinedweb
121
59.19
Hello guys, I'm programming a little game and tried to create a menu for it. The menu is like this: New Game, Load and Exit are JButtons. When I run the program I see that screen but when i press New Game I want the menu JPanel to be replaced for the game JPanel. I was able to do it the problem is that the KeyListener on my Game panel doens't work. Any way i can do it having the KeyListener working when menu JPanel is removed? Please help me! :(Please help me! :(Code : public class GameFrame extends JFrame{ private GameMenu menu; private GameCanvas canvas; public GameFrame(String name, int width, int height) { super(name); menu = new GameMenu(this, width,height); add(menu); canvas = new GameCanvas(width, height); setResizable(false); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); pack(); setVisible(true); } //When New Game button is pressed this metod is called. protected void beginGame() { setContentPane(canvas); canvas.launchGame(); canvas.repaint(); validate(); } } public class GameCanvas extends JPanel implements KeyListener{ protected GameCanvas(int width, int height) { super(); this.width = width; this.height = height; setPreferredSize(new Dimension(width, height)); addKeyListener(this); setFocusable(true); } }
http://www.javaprogrammingforums.com/%20awt-java-swing/637-change-jframe-components-problem-printingthethread.html
CC-MAIN-2014-41
refinedweb
187
58.58
Type: Posts; User: happy_hippie Yes i did follow the link, thats why i say RadialGradientPaint is a paint, a gradient paint that is painted in a circle. What i have provided, the Glow, is like in Photoshop, where you defines an... It must be you that has been smoking weed. As i read that post, you mensions RadialGradientPaint, which is a paint, right? dlorde:Not the same as what? | It is not a paint, but a glow. And yes, it is only showing 2 shapes, to display the effect on different shapes The paint method you gave, is only round. Mine gives a inner glow, no matter which shape you use, which is not the same. Whoops, that's correct. It was from when i tested :) Here is the correct test code, without the last parameter :) import java.awt.Dimension; import javax.swing.JFrame; import java.awt.Color;... Hey. I have just finished some code, that will allow you to create an inner glow inside an Area. I would be happy if you would try it and tell me what you think. I also provides a test, that...
http://forums.codeguru.com/search.php?s=c4c629c5fb6fc24f9d7157a9c9ccc0cf&searchid=5798131
CC-MAIN-2014-52
refinedweb
188
84.68
Rate Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . .1 2. . . . . . . . . . 23 23 24 24 24 25 26 26 26 26 29 37 38 41 46 51 52 55 66 69 76 77 78 80 80 94 96 99 1. Snort Modes . . . . . . . . . . . . Specifying Multiple-Instance Identifiers . . 2. . . . . . . . . . . . . . . . . . .1 2. . 105 Event Logging . . . . . . . . . . . . . . . . . . Running in Rule Stub Creation Mode . .2. . . . . . Obfuscating IP Address Printouts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . 107 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . HTTP Inspect . . . . . . . . . . .2 2. . .4. . . . . . . Variables . . . . . . . . . . . . . . . . . . . . .9. . .5 Running Snort as a Daemon . . . . . . . .2. . . . . . . 2 Configuring Snort 2. . . . . . . . . . . . . .13 DCE/RPC 2 Preprocessor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . 2. . . . . . 106 2. . . . . . . . 108 3 . . . . 101 Event Filtering . . . . . . . . . . . . . .9. . SSH . .4 Configuring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . . . . . . . . . . .9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2.3 2. . .4. . . . . . . . . . . . . . . 2. . . . . .1 Includes .5. . . . . . . . . . . . . . . .3 Decoder and Preprocessor Rules . . . . . . . . . . . . . . . . . . . . .4 2. . . . . . . . . .2. . . . . . . . . . . . . . . .7 2. . . . . . . 101 2. . . . . . . . . . . . . . .2 Rule Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . . . . . . . 103 Event Suppression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .12 ARP Spoof Preprocessor . . . . . . . .1. . .2. . . .9 Frag3 . . . . .6 2. . . .1 2. . . . .2. . . . . . . . . . . . . . . . . . . . . . sfPortscan . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stream5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . .11 SSL/TLS . . . .15 Normalizer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . .2. . . . . . .4 1. . . . . . . . 2. . . . . . .1 1. . .8 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 DNS . .3 2. . . . 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. FTP/Telnet Preprocessor . . . . . . . . .4. . . . . . . .2. . . . . . . . . . . . . . . . . . . . Performance Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . . . . . . . . . . . . 100 Reverting to original behavior . . . . . . . . . . . . . . . . . . . . . . . . . .5 Performance Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Event Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9. . Preprocessors . . . . . . . .9.1. . . . . . . .1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 1. . . . . . . . . . . . . . .3. . . . . . RPC Decode . . . . . . . . . . . . .2. . . . . . . . .1 2. . . . . . . . . . . . . . . . SMTP Preprocessor . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . .2. . . . . . . . . . . 107 Preprocessor Profiling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 1. . . . . . . . 2. . . . . .4. . . . . . .1 2. . . . . .10 More Information . . . . . . .3 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . . . . . . . . Config . . . . . . . . . . . . . . .14 Sensitive Data Preprocessor . .1. .2 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . .5 2. . . . . . . . . . . . .11 Active Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 2. . . . . . . . . . . . . . . . . . . . . 127 2. . . . 132 2. . . 133 2. . . . . . . . . . . . . . . . . . . 121 2. . . . . . . . . . . . . .1 2.6. . . . . . . . . . . . . . .6. . . . .10. . . . . . . . . . . . . . . . .8 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 2. .10 Multiple Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . 132 2. . . . .4 React . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . . . . . . . . .1 2. . . . . . . . . . . . . . . .3 2.6. . . . . . . . .11. . . . . .6.6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 unified 2 . . . . . . . . . . . . . . .7 Host Attribute Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 alert prelude . . .8. . . .6. . . . 128 Non-reloadable configuration options . . . 130 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 2. . 114 2. 127 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 2. . . . . . . . . . . . . . . . . 127 Directives . . . . . .11. . . 117 csv . . . . . . . . .7 2. . . . . . . . . . . . . . . . . . . . . .12 alert aruba action . . . . . . . . . . . . . . . . . . . . . . . . . .9.11. . . . . . . . . . . . . . . . . . 125 Dynamic Modules . . . . . . . . . . . . . . .6. . . .8 Configuration Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 2. . .7. . . . . . . . . . . . . . . . . . . . . . . . 117 log tcpdump . . . . . . . . . . . . . . . . . . .5. .2 Configure Sniping . . . . .8. . .4 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Rule Actions . . . . 114 alert fast . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 2. . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10. . . . . . . . . . . . . . . 122 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . .11 log null .6. . . . . . . . . . . . . . . . . . . . 133 2. . . . . . . 130 2. . . . . . . . . . . . . . . . . . . . . .2 Configuration Specific Elements . . . . . . . . . . .13 Log Limits . . . . .9. . . . . . .6 Packet Performance Monitoring (PPM) . . . . . . . .11. . . . . . . . . . . .9 Reloading a Snort Configuration . . . .3 Enabling support . . . . . . . . . . . . . . . 121 2. . . . . . . . . . . 111 Output Modules . . . . . . . . . . . . . . . . . . .10. . 130 2. . . . 127 2. . . . . . . .1 2. . . . . . . . . . .2 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 2. . . . . . .11. . . . . . . . . . . . . . . .9 alert syslog . . 122 2. . . . . . . . .3 How Configuration is applied? . . . . . . . . . . . . . . . . . . .6. 116 alert unixsock . . . . . . 120 2. . . . . . . . . . . . 123 Attribute Table File Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Enabling Active Response . . . . . . . . 117 database . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128 Reloading a configuration . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . .1 Creating Multiple Configurations . 116 alert full . . . . . . . . . . . . . . . . . . . . . . . 132 2. . . . . .2 Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . 119 unified . . . . . . . . . . . 134 4 . . . . .3 2. . .9. . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . .3 Flexresp . . . . . . . . . . . . .6. . . . . . . . . 123 2. . . . . . . . 123 Attribute Table Example . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . 140 sid . . . . . . .3 3. . . . . . . . . . . . . . . . . . . .5. . . .17 http stat msg . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 Activate/Dynamic Rules . . . . . . . . . . . 139 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . 135 Rules Headers . . . . . . . 139 reference . . . . . . . . . . . . .1 3. . . . 151 3. . . . . . . . 150 3. . . . . . . . . . . . . . . . . . . . . . . .5.2 3. . . . . . . . . . 147 within . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .9 msg . . . . . . . . . . . . . . . . . . . . . . 148 http cookie . . . . . . . . . . . . . . . . .11 http header . . . . . . . . . . . . . . . . . . . 145 depth . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . 146 distance . . . . . . . . . . .1 3.7 3. . . . . . . . . . 151 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 http client body . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 classtype . . . . . . . . . . . . . . . . . . . . 139 gid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . .5. . . .7 3. . . . . . 149 3. 152 3. . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . 143 General Rule Quick Reference .6 3. . 144 nocase . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 rev . . . . . . . . . .5 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . .4. . .4 3.5. . . . . . .5. . . . . . . . . . . . . . 145 rawbytes . . . . . . . . . . . . . .5.6 3. . . . . . . . . . . . . . . 144 3. . . . . . . . . . . . . . . . . . . 138 General Rule Options . . . .18 http encode . . . . . . . . . . . . . . . . . . 137 The Direction Operator . . . . . .4. . . . . . . . . . . . . .5 Payload Detection Rule Options . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . .3 Writing Snort Rules 3. . . . .2. . . . . . . . . . . . .2 3. . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . .4. . . .1 3. . . . . . . . . . . . . . . . .3 3. . . . . . . . . . . . . . . . . . . . . . . . . . . .14 http uri . . . . . . .4 3. . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . .8 3. . . . . . . .10 http raw cookie . . . . . . . . . . . 136 Port Numbers . . . 135 3. . . . . 136 IP Addresses . . . . . . . . . . . . . 148 3. . . . . . . . . . . .2. .16 http stat code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Rule Actions . . . . . 142 metadata . . . . .5. . . . . . . . . .4 3. 150 3. . . . . . . 135 Protocols . . . . . .5 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 priority . . . . . .9 content . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .13 http method . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . 138 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143 3. . . . . . . . . . . . . . . . . . . . .2 135 The Basics . .5 3. . . . . . . . . .8 3. . . . . . . . . . . . . . . . . . . . . . . . . 150 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . .15 http raw uri . .4. . . . . . . . . . .12 http raw header . . . . . . . . .2.3 3. . . . . . . . . . . . . . . . . . . . . . . . . . . .2 3. . . . . . . . . . . . . .5. . . . . . . . . . . .3 3. . . . . . . 152 5 . .4 Rule Options . . 149 3. . . . . . . . . . . . . . . . . . . . . . 146 offset . . . . . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 3. . . 171 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . .6. . . . . . . 169 3. . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . .6. . . . . . . . . . . . . . . . 172 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .16 icmp id . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 3. . . . . . .6. . . . . 164 3. . . . . . . . . . . . . . . . . . . . . . .23 pcre . . . . . . 164 3. . . . . . . . 170 3. .25 base64 decode . . . . . . . . . . . . . . . . . . . . . . .30 ftpbounce . . . . . . .18 rpc . . . . . . . . . . . . . 162 3. . . . . . . . . . .6. . . . . . . . . . . . . . . . 166 id . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161 3. .31 asn1 . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . 171 3.6. . . . . . . . . . . . . . . . 166 ipopts . . . . . 157 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . .15 icode . . . . . . . . . 153 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . . . . . . . . . 165 tos . . . . . . . .5. . . . . . .6. .5. . . . . . . . . . . . . . . . . . . .17 icmp seq . . . . . . . . . . . . . . . . . . . .5. . . . 158 3. . . . . . . . . . . . . . . . . . . . 168 flags . . . . . . . .29 byte extract . . . . . . . . . .37 ssl state . . . . . . . . . . . . . . . . . 164 3. . . . . . . . . . . . .24 file data . . . . .35 dce stub data . . . . . . . . . . . . . . . . . . . . . . 170 3. .6 Non-Payload Detection Rule Options . . 171 3.5. . . . . . . . . . . . . . . . .2 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .20 uricontent . . . . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . . .3 3. . . . . . . . . . . . . 167 dsize . . . 164 3. . . . . . . . . . . . . . . . .6 3. . . . . . 165 ttl . 164 3. . . . . . . . . . 168 flow . . . . . . . . . . . . . . . . . . . . . . . . . .13 window . . 171 3. . . . 169 3. . . . . . . . . . . . . . . 159 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . .14 itype . . . . . . . . . . . . . . . . . . . . . . . . . 164 3. . .5. . . . .5. . . . .33 dce iface . . . . . . . . .5. . . . . .26 base64 data . . . . . . . . . . . . . .5. . . . . .6. . . . . . . . . . . . . . . . . . . . . .6. . . . .32 cvs . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27 byte test . . . . . . . . . . .6. . . .8 3. . . .9 fragoffset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156 3. . . . . . . . .36 ssl version . . . . . . . . .6. . . . . . . 170 3. . . . . . . . .5. . . . . . . . . . . . . . . . . . . . . 154 3. . .5. . . . . . . . . . .5 3. . . . . . . . . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . .6.38 Payload Detection Quick Reference . . . . . . . . . . . . 167 fragbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .21 urilen . . . . . . . . . . . . . . . . . . . . . . . .4 3. . . . . . . . . . . . . . . . . . . . . 164 3. . . 163 3. . . . . . . . . . . . . . 160 3. . .10 flowbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 3.28 byte jump . . . . . . .6. . . . . . . . . . . . . . . .22 isdataat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. .11 seq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34 dce opnum . . . . . .19 fast pattern . . . . . . . . . .12 ack . . .6. . . . . . . . . . . . . . . .3 Examples . . . . .9. . . . . . . .2 4. . . . . . . . . . . . . . . . .1 3. . . . . . 178 3. . . . . . . . . 190 Rules . . . . . 178 Optimizing Rules . . . . . . . .9. . . . . . . . . . . . . .7. .7. . . . . . . . . . . .7 Post-Detection Rule Options . . . . . . . . . . . . . . . . . . . . . . . . .2 3. Not the Exploit . . . . . . . . . . . . . . . . .3 4. . . . 177 Writing Good Rules . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . . . . . . . . . . . .9 logto . . . .5 DynamicPluginMeta . . . . . . . .9. . . . . . . . . . .9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .10 detection filter . . .1 Data Structures . . . . . . . . . . . . . . . . . . .5 4 Content Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . 176 replace . . . . 173 3. . . . . . . . . . . . . . . . . . . . . .6. . . . .6. . . . . . . . . . . 183 DynamicEngineData . . . . . . . . . . . .6. . . . . . . . . . . . . . . . . .4 3. . . . . . 176 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 4. . . . . . . . . . . . . . . . . . . . . . .5 3. . . . . . . 172 3. . . . . . .21 stream reassemble . . . . . . . . . . . . . . . . .9. . . . 184 4. . .1 4. . . . . . . . . . . . . 173 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . . .9 Rule Thresholds . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 tag . . . . . . .6.7. . . . . . . . . . . . . . . . . . . 174 session . . . . . . . . . . . . . 189 4.2 3. . . . . . . . . . . . .8 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . . . .1. . . . 178 Catch the Oddities of the Protocol in the Rule . . . . . . . . . . . . . . . . 174 resp . .22 stream size . . . . . . . . . . . . .1 3. . . . . . . . . . . . . . . . . . . . 175 activates . . . . . . . .3 Preprocessors . . . . . . . . . . . . . . . 191 Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . 184 SFSnortPacket . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . 183 4. . . . . . 174 3. . . . . . . . . . . . . . . . .11 Post-Detection Quick Reference . . . . .7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 3. . . . 176 count . . . . . . . . . . . . . . .6 3. . . . . . . . . . . . . . . . . . . . . . . . . . . .7 3. . . . . . . 179 Testing Numerical Values . . . . .23 Non-Payload Detection Quick Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 3. . . . . . . . . .4 4. . . . . . . . . .7. . . . . . . . . . . . . . . . . . . . .20 sameip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . 177 3. . . . . . .1. . . . . . . .7. . . . .6. . . . . . . . . . . . 173 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 3. . . . . . . . . .3 3. . . . . 184 Dynamic Rules .2 4. . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 DynamicPreprocessorData . . . . . . . . . . . . .1 4. . . . . . . . .1. . . . . . . . . . . . . . . . . . . . . . . . . . . 191 4. . . . . . 172 3. . . . . . . . . . .3. . . . . . . . . . . . . . . . . . 175 react . . 176 activated by . . . . . . . . . . . . . . . . . . . . . . . .19 ip proto . . . . . .7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193 7 . . . . . . .7. . . . . . .3 3. . . . . . . . . . .2 Required Functions . . . . . . . . . . . . . . . . . . . . 180 183 Dynamic Modules 4. . . . . . . . . . . . . . . . . . . . . . 178 Catch the Vulnerability. . . .2 Preprocessor Example . . . . . . . . . . . . . . 190 Detection Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196 Output Plugins . . . . . . . . . . . . . . . . . . . 196 Detection Plugins . . . . . . . . . . . . . . . .2 5. . . . . . . . . . . . . . . . 196 5. . .5 Snort Development 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 5. . . . . .1 5. . . . . .2. . . . . . . . . . . 196 Snort Data Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Preprocessors . 197 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . . . .3 The Snort Team . . . . . . . . .2 196 Submitting Patches . . . . 196 5. . . . . . . .2. . . . . . . . . . . . . but there are a lot of command line options to play with. sniffer mode). which allows Snort to analyze network traffic for matches against a user-defined rule set and performs several actions based upon what it sees. • Network Intrusion Detection System (NIDS) mode. try the following: . Snort can be configured to run in three modes: • Sniffer mode. • Packet Logger mode. If you would like to submit patches for this document. there are a few basic concepts you should understand about Snort. If you want an even more descriptive display. Small documentation updates are the easiest way to help out the Snort Project. which simply reads the packets off of the network and displays them for you in a continuous stream on the console (screen).tex. drop us a line and we will update A it./snort -v This command will run Snort and just show the IP and TCP/UDP/ICMP headers. It was then maintained by Brian Caswell <[email protected] 1 Snort Overview This manual is based on Writing Snort Rules by Martin Roesch and further work from Chris Green <cmg@snort. 1. Before we proceed.1 Getting Started Snort really isn’t very hard to use. do this: 9 . If you have a better way to say something or find that something in the documentation is outdated. try this: . and it’s not always obvious which ones go together well. let’s start with the basics. which logs the packets to disk. nothing else.org>./snort -vd This instructs Snort to display the packet data as well as the headers. you can find the latest version of the documentation in LTEX format in the Snort CVS repository at /doc/snort_manual. showing the data link layer headers. the most complex and configurable configuration. If you want to see the application data in transit.org> and now is maintained by the Snort Team.e. 1. If you just want to print out the TCP/IP packet headers to the screen (i. This file aims to make using Snort easier for new users.2 Sniffer Mode First. Once the packets have been logged to the binary file. you need to tell Snort which network is the home network: . in the case of a tie. if you wanted to run a binary log file through Snort in sniffer mode to dump the packets to the screen. If you don’t. the source address. and you want to log the packets relative to the 192. these switches may be divided up or smashed together in any combination. you should consider logging in binary mode. all of these commands are pretty cool.1. ! △NOTE both the source and destination hosts are on the home network. they are logged to a directory Note that if with a name based on the higher of the two port numbers or. For example./snort -vde (As an aside./log -b Note the command line changes here. you don’t need to run in verbose mode or specify the -d or -e switches because in binary mode the entire packet is logged. you can try something like this: 10 ... with the directory names being based on the address of the remote (non-192. If you’re on a high speed network or you want to log the packets into a more compact form for later analysis. When Snort runs in this mode. Packets from any tcpdump formatted file can be processed through Snort in any of its run modes. The last command could also be typed out as: ./log.168. Snort can also read the packets back by using the -r switch./snort -dev -l . it collects every packet it sees and places it in a directory hierarchy based upon the IP address of one of the hosts in the datagram./snort -d -v -e and it would do the same thing.168. If you just specify a plain -l switch.) 1. Snort will exit with an error message. We don’t need to specify a home network any longer because binary mode logs everything into a single file.1) host./log -h 192. but if you want to record the packets to the disk.0 class C network. not just sections of it. you can read the packets back out of the file with any sniffer that supports the tcpdump binary format (such as tcpdump or Ethereal). you need to specify a logging directory and Snort will automatically know to go into packet logger mode: .3 Packet Logger Mode OK. In order to log relative to the home network.1. which puts it into playback mode.0/24 This rule tells Snort that you want to print out the data link and TCP/IP headers as well as application data into the directory . this assumes you have a directory named log in the current directory. Additionally. Binary mode logs the packets in tcpdump format to a single binary file in the logging directory: ./snort -l . All incoming packets will be recorded into subdirectories of the log directory. you may notice that Snort sometimes uses the address of the remote computer as the directory in which it places packets and sometimes it uses the local host address. which eliminates the need to tell it how to format the output directory structure./snort -dev -l .168./log Of course. There are seven alert modes available at the command line: full./snort -dev -l ./snort -d -h 192.. it will default to /var/log/snort. It’s also not necessary to record the data link headers for most applications. If you don’t specify an output directory for the program.0/24 -c snort. so you can usually omit the -e switch. Full alert mode. try this: . too.conf in plain ASCII to disk using a hierarchical directory structure (just like packet logger mode). Sends “fast-style” alerts to the console (screen). The default logging and alerting mechanisms are to log in decoded ASCII format and use full alerts.conf This will configure Snort to run in its most basic NIDS form.4. logging packets that trigger rules specified in the snort. source and destination IPs/ports. Six of these modes are accessed with the -A command line switch. Generates “cmg style” alerts. if you only wanted to see the ICMP packets from the log file. cmg. simply specify a BPF filter at the command line and Snort will only see the ICMP packets in the file: .0/24 -l . .1. as well as with the BPF interface that’s available from the command line./log -c snort.1 NIDS Mode Output Options There are a number of ways to configure the output of Snort in NIDS mode. alert message. There are several other alert output modes available at the command line. Sends alerts to a UNIX socket that another program can listen on. as well as two logging facilities. and packets can be dropped while writing to the display. the -v switch should be left off the command line for the sake of speed. 1.conf file to each packet to decide if an action based upon the rule type in the file should be taken. syslog.log icmp For more info on how to use the BPF interface.conf where snort. For example. One thing to note about the last command line is that if Snort is going to be used in a long term way as an IDS. This is the default alert mode and will be used automatically if you do not specify a mode. console.4 Network Intrusion Detection System Mode To enable Network Intrusion Detection System (NIDS) mode so that you don’t record every single packet sent down the wire.168. 1. Alert modes are somewhat more complex.168. socket. These options are: Option -A fast -A full -A -A -A -A unsock none console cmg Description Fast alert mode.log You can manipulate the data in the file in a number of ways through Snort’s packet logging and intrusion detection modes./log -h 192. This will apply the rules configured in the snort. The full alert mechanism prints out the alert message in addition to the full packet headers. read the Snort and tcpdump man pages. Turns off alerting. The screen is a slow place to write data to. 11 ./snort -dvr packet./snort -dv -r packet. and none.conf is the name of your snort configuration file. Writes the alert in a simple format with a timestamp. fast.1. please see etc/gen-msg.conf -A fast -h 192.1. you need to use unified logging and a unified log reader such as barnyard. 1.4.Packets can be logged to their default decoded ASCII format or to a binary log file via the -b command line switch.4.1 for more details on configuring syslog output./snort -b -A fast -c snort. see Section 2. use the output plugin directives in snort.168. To disable packet logging altogether. This number is primarily used when writing signatures. For output modes available through the configuration file. Rule-based SIDs are written directly into the rules with the sid option. See Section 2. this tells the user what component of Snort generated this alert. For example.6.conf. it will usually look like the following: [**] [116:56:1] (snort_decoder): T/TCP Detected [**] The first number is the Generator ID. This allows debugging of configuration issues quickly via the command line. please read etc/generators in the Snort source. 56 represents a T/TCP event. To send alerts to syslog. use the -s switch. The second number is the Snort ID (sometimes referred to as Signature ID). use the -N command line switch.0/24 1. For a list of preprocessor SIDs. The third number is the revision ID./snort -c snort./snort -c snort.168.conf -l .map. The default facilities for the syslog alerting mechanism are LOG AUTHPRIV and LOG ALERT. use the following command line to log to default (decoded ASCII) facility and send alerts to syslog: . For a list of GIDs. but still somewhat fast. as each rendition of the rule should increment this number with the rev option./log -h 192.6. try using binary logging with the “fast” output mechanism. such as writing to a database. If you want to configure other facilities for syslog output. In this case. we know that this event came from the “decode” (116) component of Snort. This allows Snort to log alerts in a binary form as fast as possible while another program performs the slow actions. This will log packets in tcpdump format and produce minimal alerts.0/24 -s As another example. use the following command line to log to the default facility in /var/log/snort and send alerts to a fast alert file: .2 Understanding Standard Alert Output When Snort generates an alert message. In this case.conf 12 .3 High Performance Configuration If you want Snort to go fast (like keep up with a 1000 Mbps connection).1. If you want a text file that’s easily parsed. For example: . ! △NOTE Command line logging options override any output options specified in the configuration file. For more information.5. 1. etc. in that the event processing is terminated when a pass rule is encountered. The Pass rules are applied first. Several command line options are available to change the order in which rule actions are taken. for packet I/O. while taking the actions based on the rules ordering. Log rules are applied.1. • --process-all-events option causes Snort to process every event associated with a packet.9 introduces the DAQ. Without this option (default case). However. 1. This allows use of an inline policy with passive/IDS mode. you can select and configure the DAQ when Snort is invoked as follows: . in which case you can change the default ordering to allow Alert rules to be applied before Pass rules. regardless of the use of --process-all-events. please refer to the --alert-before-pass option. then the Alert rules and finally. The DAQ replaces direct calls to PCAP functions with an abstraction layer that facilitates operation on a variety of hardware and software interfaces without requiring changes to Snort. • --treat-drop-as-alert causes drop and reject rules and any associated alerts to be logged as alerts. ! △NOTE Pass rules are special cases here. .4 Changing Alert Order The default way in which Snort applies its rules to packets may not be appropriate for all installations. or Data Acquisition library.4. ! △NOTE Sometimes an errant pass rule could cause alerts to not show up. It is possible to select the DAQ type and mode when invoking Snort to perform PCAP readback or inline operation. • --alert-before-pass option forces alert rules to take affect in favor of a pass rule. rather then the normal action. The sdrop rules are not loaded. then the Drop rules. only the events for the first action based on rules ordering are processed.1 Configuration Assuming that you did not disable static modules or change the default DAQ type. you can run Snort just as you always did for file readback or sniffing an interface.5 Packet Acquisition Snort 2. and directory may be specified either via the command line or in the conf file. 1. this ends up being 1530 bytes per frame. as this appears to be the maximum number of iovecs the kernel can handle. 14 . the maximum size is 32768. You may include as many variables and directories as needed by repeating the arg / config. the command line overrides the conf.gov/cpw/. mode. and if that hasn’t been set. since there is no conflict. -r will force it to read-file. If the mode is not set explicitly. According to Phil./snort -i <device> . MMAPed pcap On Linux. if configured in both places. Note that if Snort finds multiple versions of a given library. enabling the ring buffer is done via setting the environment variable PCAP FRAMES. Phil Woods (cpw@lanl.<mode> ::= read-file | passive | inline <var> ::= arbitrary <name>=<value> passed to DAQ <dir> ::= path where to look for DAQ module so’s The DAQ type. Instead of the normal mechanism of copying the packets from kernel memory into userland memory. The shared memory ring buffer libpcap can be downloaded from his website at. for a total of around 52 Mbytes of memory for the ring buffer alone. Once Snort linked against the shared memory libpcap. and if that hasn’t been set. by using a shared memory ring buffer./snort --daq pcap --daq-var buffer_size=<#bytes> Note that the pcap DAQ does not count filtered packets. and attributes of each. the mode defaults to passive. -Q will force it to inline. This applies to static and dynamic versions of the same library. Also. libpcap is able to queue packets into a shared buffer that Snort is able to read directly. PCAP FRAMES is the size of the ring buffer. This change speeds up Snort by limiting the number of times the packet is copied before Snort gets to perform its detection upon it.5./snort --daq pcap --daq-mode passive -i <device> . the most recent version is selected./snort [--daq-list <dir>] The above command searches the specified directory for DAQ modules and prints type. By using PCAP FRAMES=max. This feature is not available in the conf. but -Q and any other DAQ mode will cause a fatal error at start-up. a modified version of libpcap is available that implements a shared memory ring buffer. -Q and –daq-mode inline are allowed.gov) is the current maintainer of the libpcap implementation of the shared memory ring buffer. variable. if snort is run w/o any DAQ arguments.lanl. . version.2 PCAP pcap is the default DAQ. These are equivalent: . libpcap will automatically use the most frames possible. DAQ type may be specified at most once in the conf and once on the command line. On Ethernet. it will operate as it always did using this module./snort --daq pcap --daq-mode read-file -r <file> You can specify the buffer size pcap uses with: ./snort -r <file> . default is 0 Notes on iptables are given below. 5.5 MB. the numbers break down like this: 1./snort --daq nfq \ [--daq-var device=<dev>] \ [--daq-var proto=<proto>] \ [--daq-var queue=<qid>] \ [--daq-var queue_len=<qlen>] <dev> ::= ip | eth0. 4. You can change this with: --daq-var buffer_size_mb=<#MB> Note that the total allocated is actually higher. 1. default is IP injection <proto> ::= ip4 | ip6 | ip*. default is 0 <qlen> ::= 0.4 NFQ NFQ is the new and improved way to process iptables packets: . you must set device to one or more interface pairs. 2.. the afpacket DAQ allocates 128MB for packet memory. The smallest block size that can fit at least one frame is 4 KB = 4096 bytes @ 2 frames per block. 15 .65535.1. where each member of a pair is separated by a single colon and each pair is separated by a double colon like this: eth0:eth1 or this: eth0:eth1::eth2:eth3 By default. 3.5. here’s why. we need 84733 / 2 = 42366 blocks. etc. Actual memory allocated is 42366 * 4 KB = 165.. As a result. The frame size is 1518 (snaplen) + the size of the AFPacket header (66 bytes) = 1584 bytes. Assuming the default packet memory with a snaplen of 1518./snort --daq afpacket -i <device> [--daq-var buffer_size_mb=<#MB>] [--daq-var debug] If you want to run afpacket in inline mode.5.65535. The number of frames is 128 MB / 1518 = 84733. default is ip4 <qid> ::= 0.3 AFPACKET afpacket functions similar to the memory mapped pcap DAQ but no external library is required: . /snort -r <pcap> -Q --daq dump --daq-var load-mode=read-file ./snort -i <device> --daq dump .9 versions built with this: . you will probably want to have the pcap DAQ acquire in another mode like this: . It replaces the inline version available in pre-2. It therefore does not count filtered packets. Furthermore./configure --enable-ipfw / -DGIDS -DIPFW This command line argument is no longer supported: .5 IPQ IPQ is the old way to process iptables packets. 1. 1.5./snort -J <port#> Instead.9 versions built with this: . default is ip4 Notes on iptables are given below. .6 IPFW IPFW is available for BSD systems./snort --daq ipq \ [--daq-var device=<dev>] \ [--daq-var proto=<proto>] \ <dev> ::= ip | eth0.9 Snort like injection and normalization./snort -r <pcap> --daq dump By default a file named inline-out. You can optionally specify a different name..1. start Snort like this: . default is IP injection <proto> ::= ip4 | ip6.5./configure --enable-inline / -DGIDS Start the IPQ DAQ as follows: ./snort --daq dump --daq-var file=<name> dump uses the pcap daq for packet acquisition. It replaces the inline version available in pre-2./snort --daq ipfw [--daq-var port=<port>] <port> ::= 1.5.65535. default is 8000 * IPFW only supports ip4 traffic./snort -i <device> -Q --daq dump --daq-var load-mode=passive 16 . . etc. Note that the dump DAQ inline mode is not an actual inline mode.7 Dump The dump DAQ allows you to test the various inline mode features available in 2.pcap will be created containing all packets that passed through or were generated by snort. 6 Reading Pcaps Instead of having Snort listen on an interface. This can be useful for testing and debugging Snort. --pcap-no-filter --pcap-reset --pcap-show 1. • Whitelist packets that caused Snort to allow a flow to pass w/o inspection by any analysis program.pcap $ snort --pcap-single=foo. eg due to a block rule. This filter will apply to any --pcap-file or --pcap-dir arguments following. without this option. Same as -r. The default. • Ignore packets that caused Snort to allow a flow to pass w/o inspection by this instance of Snort. • Allow packets Snort analyzed and did not take action on. 1. you can give it a packet capture to read. Print a line saying what pcap is currently being read.1 Command line arguments Any of the below can be specified multiple times on the command line (-r included) and in addition to other Snort command line options. • Block packets Snort did not forward. A space separated list of pcaps to read. that specifying --pcap-reset and --pcap-show multiple times has the same effect as specifying them once. i. If reading multiple pcaps. • Injected packets Snort generated and sent. Added for completeness.2 Examples Read a single pcap $ snort -r foo. 1. is not to reset state.e. Note.1. reset snort to post-configuration state before reading next pcap. Option -r <file> --pcap-single=<file> --pcap-file=<file> --pcap-list="<list>" --pcap-dir=<dir> --pcap-filter=<filter> Description Read a single pcap. Can specify path to pcap or directory to recurse to get pcaps. A directory to recurse to look for pcaps. Shell style filter to apply when getting pcaps from file or directory.6. however. Snort will read and analyze the packets as if they came off the wire. File that contains a list of pcaps to read.8 Statistics Changes The Packet Wire Totals and Action Stats sections of Snort’s output include additional fields: • Filtered count of packets filtered out and not handed to Snort for analysis. Sorted in ASCII order. Use --pcap-no-filter to delete filter for following --pcap-file or --pcap-dir arguments or specify --pcap-filter again to forget previous filter and to apply to following --pcap-file or --pcap-dir arguments. eg TCP resets. The action stats show ”blocked” packets instead of ”dropped” packets to avoid confusion between dropped packets (those Snort didn’t actually see) and blocked packets (those Snort did not allow to pass). • Blacklist packets that caused Snort to block a flow from passing.pcap 17 . • Replace packets Snort modified.5. Reset to use no filter when getting pcaps from file or directory.6. txt” (and any directories that are recursed in that file). Note that Snort will not try to determine whether the files under that directory are really pcap files or not.txt foo1. foo2. $ snort --pcap-filter="*.txt $ snort --pcap-filter="*.Read pcaps from a file $ cat foo.txt.pcap --pcap-file=foo. Using filters $ cat foo.pcap --pcap-file=foo.pcap foo3. then no filter will be applied to the files found under /home/foo/pcaps. so all files found under /home/foo/pcaps will be included.pcap. any file ending in ”. the first filter will be applied to foo.pcap”.txt \ > --pcap-filter="*. the first filter will be applied to foo.pcap foo2.pcap /home/foo/pcaps $ snort --pcap-filter="*.pcap.txt \ > --pcap-no-filter --pcap-dir=/home/foo/pcaps In this example.pcap” will only be applied to the pcaps in the file ”foo. Read pcaps under a directory $ snort --pcap-dir="/home/foo/pcaps" This will include all of the files under /home/foo/pcaps.pcap" --pcap-file=foo.pcap”. then no filter will be applied to the files found under /home/foo/pcaps.pcap foo2.cap" --pcap-dir=/home/foo/pcaps In the above.txt foo1.cap” will cause the first filter to be forgotten and then applied to the directory /home/foo/pcaps. then the filter ”*.pcap" --pcap-dir=/home/foo/pcaps The above will only include files that match the shell pattern ”*. $ snort --pcap-filter="*.pcap --pcap-file=foo.pcap /home/foo/pcaps $ snort --pcap-file=foo.cap” will be applied to files found under /home/foo/pcaps2. foo2.cap" --pcap-dir=/home/foo/pcaps2 In this example. so all files found under /home/foo/pcaps will be included.pcap foo2.pcap and foo3.cap” will be included from that directory.pcap" This will read foo1.txt This will read foo1.txt. the first filter ”*.pcap and all files under /home/foo/pcaps. $ snort --pcap-filter="*. The addition of the second filter ”*.txt \ > --pcap-no-filter --pcap-dir=/home/foo/pcaps \ > --pcap-filter="*. Read pcaps from a command line list $ snort --pcap-list="foo1. in other words. so only files ending in ”. 18 .pcap. Printing the pcap $ snort --pcap-dir=/home/foo/pcaps --pcap-show The above example will read all of the files under /home/foo/pcaps and will print a line indicating which pcap is currently being read. statistics reset.856509 seconds Snort processed 3716022 packets. If you are reading pcaps.7. unless you use –pcap-reset. • Filtered packets are not shown for pcap DAQs. in which case it is shown per pcap. For each pcap.000%) 19 .2 Packet I/O Totals This section shows basic packet acquisition and injection peg counts obtained from the DAQ. Snort ran for 0 days 0 hours 2 minutes 55 seconds Pkts/min: 1858011 Pkts/sec: 21234 =============================================================================== 1. The way this is counted varies per DAQ so the DAQ documentation should be consulted for more info. and only shown when non-zero. but after each pcap is read. minutes.1 Timing Statistics This section provides basic timing statistics. • Outstanding indicates how many packets are buffered awaiting processing. Snort will be reset to a post-configuration state. etc. the totals are for all pcaps combined. Example: =============================================================================== Packet I/O Totals: Received: 3716022 Analyzed: 3716022 (100.7 Basic Output Snort does a lot of work and outputs some useful statistics when it is done. This does not include all possible output data. Many of these are self-explanatory. It includes total seconds and packets as well as packet processing rates. 1. • Injected packets are the result of active response which can be configured for inline or passive modes. Example: =============================================================================== Run time for packet processing was 175. The others are summarized below. it will be like Snort is seeing traffic for the first time. meaning all buffers will be flushed. The rates are based on whole seconds. just the basics.Resetting state $ snort --pcap-dir=/home/foo/pcaps --pcap-reset The above example will read all of the files under /home/foo/pcaps. etc.7. 1. 000%) IP4/IP6: 0 ( 0.884%) Frag: 3839 ( 0.000%) Injected: 0 =============================================================================== 1. This traffic includes internal ”pseudo-packets” if preprocessors such as frag3 and stream5 are enabled so the total may be greater than the number of analyzed packets in the packet I/O section.000%) GRE IP6 Ext: 0 ( 0.000%) GRE IPX: 0 ( 0.773%) TCP6: 1619633 ( 43.000%) GRE PPTP: 202 ( 0. • Disc counts are discards due to basic encoding integrity flaws that prevents Snort from decoding the packet.685%) TCP: 1619621 ( 43.Dropped: 0 ( 0.103%) ICMP6: 1650 ( 0. • S5 G 1/2 is the number of client/server sessions stream5 flushed due to cache limit.005%) GRE ARP: 0 ( 0. • Other includes packets that contained an encapsulation that Snort doesn’t decode.005%) GRE Eth: 0 ( 0.000%) IP4: 1782394 ( 47.000%) MPLS: 0 ( 0.000%) GRE: 202 ( 0.000%) IP6/IP6: 0 ( 0.103%) ICMP: 38860 ( 1.000%) GRE VLAN: 0 ( 0.044%) UDP: 137162 ( 3.000%) IP6/IP4: 0 ( 0.044%) UDP6: 140446 ( 3.166%) Frag6: 3839 ( 0.000%) Outstanding: 0 ( 0.000%) ARP: 104840 ( 2. session timeout.817%) 20 .000%) GRE IP4: 0 ( 0.7.000%) GRE IP6: 0 ( 0.000%) EAPOL: 0 ( 0.016%) IP6 Opts: 6168 ( 0.511%) IP6: 1781159 ( 47.000%) IP4/IP4: 0 ( 0.850%) IP6 Ext: 1787327 ( 48.511%) Teredo: 18 ( 0.000%) ICMP-IP: 0 ( 0.000%) GRE Loop: 0 ( 0.000%) Filtered: 0 ( 0.000%) VLAN: 0 ( 0.3 Protocol Statistics Traffic for all the protocols decoded by Snort is summarized in the breakdown section. session reset. Example: =============================================================================== Breakdown by protocol (includes rebuilt packets): Eth: 3722347 (100. and reject actions. ”Block” is used instead of ”Drop” to avoid confusion between dropped packets (those Snort didn’t actually see) and blocked packets (those Snort did not allow to pass).863%) Bad TTL: 0 ( 0. This information is only output in IDS mode (when snort is run with the -c <conf> option).555%) Bad Chk Sum: 32135 ( 0. If the DAQ supports this in hardware.000%) TCP Disc: 0 ( 0. drop. due to normalization or replace rules. and Verdicts Action and verdict counts show what Snort did with the packets it analyzed.000%) Eth Disc: 0 ( 0.4 Actions.000%) All Discard: 1385 ( 0. Verdicts are rendered by Snort on each packet: • Allow = packets Snort analyzed and did not take action on. and block actions processed as determined by the rule actions. 21 . • Blacklist = packets that caused Snort to block a flow from passing. The default is 3. Limits. • Alerts is the number of activate. this is done by the DAQ or by Snort on subsequent packets. snort will block each packet and this count will be higher. for example. This is the case when a block TCP rule fires.000%) UDP Disc: 1385 ( 0. • Ignore = packets that caused Snort to allow a flow to pass w/o inspection by this instance of Snort. Limits arise due to real world constraints on processing time and available memory. This can only happen in inline mode with a compatible DAQ. These indicate potential actions that did not happen: • Match Limit counts rule matches were not processed due to the config detection: setting.7. • Log Limit counts events were not alerted due to the config event queue: • Event Limit counts events not alerted due to event filter limits.000%) IP6 Disc: 0 ( 0.IPX: 60 ( 0. The default is 8. this is done by the DAQ or by Snort on subsequent packets.000%) IP4 Disc: 0 ( 0. eg due to a block rule. no further packets will be seen by Snort for that session. Like blacklist.040%) S5 G 2: 1654 ( 0. Here block includes block. Like blacklist. • Whitelist = packets that caused Snort to allow a flow to pass w/o inspection by any analysis program.037%) ICMP Disc: 0 ( 0. alert. max queue events max queue • Queue Limit counts events couldn’t be stored in the event queue due to the config event queue: setting. • Block = packets Snort did not forward.000%) S5 G 1: 1494 ( 0. log setting.044%) Total: 3722347 =============================================================================== 1. • Replace = packets Snort modified. If not.002%) Eth Loop: 0 ( 0. The default is 5.037%) Other: 57876 ( 1. Example: =============================================================================== Action Stats: Alerts: 0 ( 0./configure --enable-gre To enable IPv6 support.000%) Block: 0 ( 0. 1.8 Tunneling Protocol Support Snort supports decoding of GRE. one still needs to use the configuration option: $ . an extra configuration option is necessary: $ ./configure --enable-ipv6 1.g. Scenarios such as Eth IPv4 GRE IPv4 GRE IPv4 TCP Payload or Eth IPv4 IPv6 IPv4 TCP Payload will not be handled and will generate a decoder alert.2 Logging Currently. Eth IP1 GRE IP2 TCP Payload gets logged as Eth IP2 TCP Payload 22 .000%) Ignore: 0 ( 0.1 Multiple Encapsulations Snort will not decode more than one encapsulation.8.000%) Logged: 0 ( 0.000%) Whitelist: 0 ( 0. only the encapsulated part of the packet is logged.000%) Replace: 0 ( 0. e.000%) Passed: 0 ( 0.8. To enable support.000%) Blacklist: 0 ( 0. IP in IP and PPTP.000%) =============================================================================== 1.000%) Match Limit: 0 Queue Limit: 0 Log Limit: 0 Event Limit: 0 Verdicts: Allow: 3716022 (100. Additionally. is not currently supported on architectures that require word alignment such as SPARC. /usr/local/bin/snort -c /usr/local/etc/snort.2 Running in Rule Stub Creation Mode If you need to dump the shared object rules stub to a directory. The path can be relative or absolute.1 Running Snort as a Daemon If you want to run Snort as a daemon. Please notice that if you want to be able to restart Snort by sending a SIGHUP signal to the daemon. which utilizes GRE and PPP. you can the add -D switch to any combination described in the previous sections.9.9 Miscellaneous 1. In Snort 2. 1.1. you must specify the full path to the Snort binary when you start it.9.conf \ --dump-dynamic-rules=/tmp This path can also be configured in the snort. These rule stub files are used in conjunction with the shared object rules. The PID file will be locked so that other snort processes cannot start. for example: /usr/local/bin/snort -d -h 192.168.conf -s -D Relative paths are not supported due to security concerns. the --pid-path command line switch causes Snort to write the PID file in the directory specified. the daemon creates a PID file in the log directory. 23 . Use the --nolock-pidfile switch to not lock the PID file. 1.and Eth IP1 IP2 TCP Payload gets logged as Eth IP2 TCP Payload ! △NOTE Decoding of PPTP. Snort PID File When Snort is run as a daemon . you might need to use the –dump-dynamic-rules option.6. the --create-pidfile switch can be used to force creation of a PID file even when not running in daemon mode.0/24 \ -l /var/log/snortlogs -c /usr/local/etc/sn. either on different CP. obfuscating only the addresses from the 192./usr/local/bin/snort -c /usr/local/etc/snort. This switch obfuscates your IP addresses in packet printouts. You can also combine the -O switch with the -h switch to only obfuscate the IP addresses of hosts on the home network. you might want to use the -O switch. Explanation of Modes • Inline When Snort is in Inline mode.1. Drop rules are not loaded (without –treat-drop-as-alert). 1. it acts as an IPS allowing drop rules to trigger. For example. or on the same CPU but a different interface./snort -d -v -r snort.168. Snort policies can be configured in these three modes too. Snort can be configured to passive mode using the snort config option policy mode as follows: config policy_mode:tap • Inline-Test Inline-Test mode simulates the inline mode of snort. Each Snort instance will use the value specified to generate unique event IDs.conf: config dump-dynamic-rules-path: /tmp/sorules In the above mentioned scenario the dump path is set to /tmp/sorules.9. Users can specify either a decimal value (-G 1) or hex value preceded by 0x (-G 0x11).4 Specifying Multiple-Instance Identifiers In Snort v2.0/24 1.3 Obfuscating IP Address Printouts If you need to post packet logs to public mailing lists.0/24 class C network: . The drop rules will be loaded and will be triggered as a Wdrop (Would Drop) alert. This option can be used when running multiple instances of snort. 1. allowing evaluation of inline behavior without affecting traffic.4. it acts as a IDS. the -G command line option was added that specifies an instance identifier for the event logs.log -O -h 192.5 Snort Modes Snort can operate in three different modes namely tap (passive). inline.conf \ --dump-dynamic-rules snort.9. and inline-test.9. Snort can be configured to run in inline-test mode using the command line option (–enable-inline-test) or using the snort config option policy mode as follows: 24 .168. This is handy if you don’t want people on the mailing list to know the IP addresses involved. you could use the following command to read the packets from a log file and dump them to the screen. This is useful if you don’t care who sees the address of the attacking host.1. This is also supported via a long option --logid. so you may have to type snort -\? instead of snort -? for a list of Snort command line options.theaimsgroup.net provide informative announcements as well as a venue for community discussion and support.org) and the Snort Users mailing list:. 25 .com/?l=snort-users at [email protected] More Information Chapter 2 contains much information about many configuration options available in the configuration file. so sit back with a beverage of your choosing and read the documentation and mailing list archives.. a backslash (\) is needed to escape the ?. ! △NOTE In many shells.snort. The Snort web page ( --enable-inline-test config policy_mode:inline_test ! △NOTE Please note –enable-inline-test cannot be used in conjunction with -Q. There’s a lot to Snort. The Snort manual page and the output of snort -? or snort --help contain information that can help you get Snort running in several different modes.sourceforge. 1.0/24] alert tcp any any -> $MY_NET $MY_PORTS (flags:S. use a regular ’var’.1. See Section 2. Note that there Included files will substitute any predefined variable values into their own variable references.1.1 Format include <include file path/name> ! △NOTE is no semicolon at the end of this line.) include $RULE_PATH/example. Without IPv6 support.2 Variables Three types of variables may be defined in Snort: • var • portvar • ipvar ! △NOTE are only enabled with IPv6 support.conf indicated on the Snort command line. reading the contents of the named file and adding the contents in the place where the include statement appears in the file. msg:"SYN packet". It works much like an #include from the C programming language.rule 26 .168.1024:1050] ipvar MY_NET [192.2 for more information on defining and using variables in Snort config files. ipvar.1 Includes The include keyword allows other snort config files to be included within the snort.Chapter 2 Configuring Snort 2. or portvar keywords as follows: var RULES_PATH rules/ portvar MY_PORTS [22. 2.10. 2.1. Note: ’ipvar’s These are simple substitution variables set with the var.1.80.1.0/24. 2.7.2.1.![2. Use of !any: ipvar EXAMPLE any alert tcp !$EXAMPLE any -> any any (msg:"Example".0.1.1. but ’!any’ is not allowed. IP variables should be specified using ’ipvar’ instead of ’var’. and CIDR blocks may be negated with ’!’.2.1. Lists of ports must be enclosed in brackets and port ranges may be specified with a ’:’.2.1.2.!1.2. Valid port ranges are from 0 to 65535.2.0/24. with the exception of IPs 2.1.sid:2.0. Negation is handled differently compared with Snort versions 2.2. If IPv6 support is enabled.2.2. sid:1. such as in: [10:50.0/24.2.1.2. [1. ’any’ will specify any ports. as a CIDR block. or any combination of the three. but it will be deprecated in a future release. Variables. in a list. Using ’var’ for an IP variable is still allowed for backward compatibility.2 and 2.0.0/24. IP lists.2.3.2.1.![2.2.1.) Different use of !any: ipvar EXAMPLE !any alert tcp $EXAMPLE any -> any any (msg:"Example".sid:3. IPs. Previously.1.1.2.1.2.2.1 and IP from 2. or lists may all be negated with ’!’.2. ipvar EXAMPLE [1.3]] The order of the elements in the list does not matter.255.!1.IP Variables and IP Lists IPs may be specified individually.sid:3. Also. IP lists now OR non-negated elements and AND the result with the OR’ed negated elements.2. each element in a list was logically OR’ed together.888:900] 27 .!1.1.2.2.3]] alert tcp $EXAMPLE any -> any any (msg:"Example".) Logical contradictions: ipvar EXAMPLE [1. The following example list will match the IP 1.0/8. although ’!any’ is not allowed.1.x and earlier.2.2.2.0/24] any -> any any (msg:"Example".0 to 2. See below for some valid examples if IP variables and IP lists. negated IP ranges that are more general than non-negated IP ranges are not allowed.1.1.1. Also. The element ’any’ can be used to match all IPs.1.) alert tcp [1.0/16] Port Variables and Port Lists Portlists supports the declaration and lookup of ports and the representation of lists and ranges of ports.1] Nonsensical negations: ipvar EXAMPLE [1.) The following examples demonstrate some invalid uses of IP variables and IP lists.2. ranges. provided the variable name either ends with ’ PORT’ or begins with ’PORT ’.) alert tcp any 90 -> any [100:1000.) Port variable used as an IP: alert tcp $EXAMPLE1 any -> any any (msg:"Example".) Several invalid examples of port variables and port lists are demonstrated below: Use of !any: portvar EXAMPLE5 !any var EXAMPLE5 !any Logical contradictions: portvar EXAMPLE6 [80.9999:20000] (msg:"Example". sid:4. sid:3. sid:1. portvar EXAMPLE1 80 var EXAMPLE2_PORT [80:90] var PORT_EXAMPLE2 [1] portvar EXAMPLE3 any portvar EXAMPLE4 [!70:90] portvar EXAMPLE5 [80. The following examples demonstrate several valid usages of both port variables and port lists.Port variables should be specified using ’portvar’. a ’var’ can still be used to declare a port variable. The use of ’var’ to declare a port variable will be deprecated in a future release. You can define meta-variables using the $ operator.100:200] alert tcp any $EXAMPLE1 -> any $EXAMPLE2_PORT (msg:"Example". These can be used with the variable modifier operators ? and -. as described in the following table: 28 . sid:2. sid:5.!80] Ports out of range: portvar EXAMPLE7 [65536] Incorrect declaration and use of a port variable: var EXAMPLE8 80 alert tcp any $EXAMPLE8 -> any any (msg:"Example". For backwards compatibility.) Variable Modifiers Rule variable names can be modified in several ways.91:95.) alert tcp any $PORT_EXAMPLE2 -> any any (msg:"Example". 1. variables can not be redefined if they were previously defined as a different type. Replaces with the contents of variable var or prints out the error message and exits.1. but old-style variables (with the ’var’ keyword) can not be embedded inside a ’portvar’. For instance. Here is an example of advanced variable usage in action: ipvar MY_NET 192.90] Invalid embedded variable: var pvar1 80 portvar pvar2 [$pvar1. Replaces the contents of the variable var with “default” if var is undefined. types can not be mixed. Format config <directive> [: <value>] 29 .90] Likewise. They should be renamed instead: Invalid redefinition: var pvar 80 portvar pvar 90 2.Variable Syntax var $(var) or $var $(var:-default) $(var:?message) Description Defines a meta-variable. Valid embedded variable: portvar pvar1 80 portvar pvar2 [$pvar1.0/24 log tcp any any -> $(MY_NET:?MY_NET is undefined!) 23 Limitations When embedding variables.168 variables. Replaces with the contents of variable var. this option will cause Snort to revert back to it’s original behavior of alerting if the decoder or preprocessor generates an event. notcp. Types of packets to calculate checksums. Default (with or without directive) is enabled. You can optionally specify a directory to include any dynamic DAQs from that directory. Chroots to specified dir (snort -t). tcp. Tell Snort to dump basic DAQ capabilities and exit. Decodes Layer2 headers (snort -e). tcp. Snort just passes this information down to the DAQ. udp. ip. Set a DAQ specific variable. noudp. Sets the alerts output file. Specifies BPF filters (snort -F). ip. Values: none. The DAQ with the highest version of the given type is selected if there are multiple of the same type (this includes any built-in DAQs). or read-file.31 for more information and examples.5. You can also preceed this option with extra DAQ directory options to look in multiple directories. See the DAQ distro README for possible DAQ variables. Specifies the maximum number of nodes to track when doing ASN1 decoding. If Snort was configured to enable decoder and preprocessor rules. Not all DAQs support modes. This can be repeated. noip. inline. Global configuration directive to enable or disable the loading of rules into the detection engine. noicmp.2 for a list of classifications. 30 . See Table 3. Tell Snort where to look for available dynamic DAQ modules. Types of packets to drop if invalid checksums. icmp or all. Values: none. Selects the type of DAQ to instantiate. Specify disabled to disable loading rules. notcp. noip. Select the DAQ mode: passive. udp. See the DAQ distro README for possible DAQ modes or list DAQ capabilities for a brief summary. See Section). Forks as a daemon (snort -D). icmp or all (only applicable in inline mode and for packets checked per checksum mode config option). noudp. The selected DAQ will be the one with the latest version. noicmp. ∗ ac-nq . split-any-any ∗ intel-cpm .Aho-Corasick Binary NFA (low memory. then evaluated. high performance).Low Memory Keyword Trie (low memory. ∗ ac-bnfa and ac-bnfa-q .Aho-Corasick Standard (high memory. moderate performance) ∗ ac-banded . moderate performance) ∗ ac-split . This is the default search method if none is specified.Matches are queued until the fast pattern matcher is finished with the payload.Aho-Corasick Full with ANYANY port group evaluated separately (low memory.Aho-Corasick Sparse (high memory. moderate performance) – Other search methods (the above are considered superior to these) ∗ ac-std .Aho-Corasick Full (high memory.Low Memory Keyword Trie (low memory. high performance). Note this is shorthand for search-method ac. ∗ ac and ac-q .Aho-Corasick Banded (high memory.The ”nq” option specifies that matches should not be queued and evaluated as they are found.Aho-Corasick SparseBanded (high memory. ∗ ac-bnfa-nq . high performance) ∗ lowmem and lowmem-q . best performance). high performance) ∗ acs .Intel CPM library (must have compiled Snort with location of libraries to enable this) – No queue search methods .Aho-Corasick Full (high memory.Aho-Corasick Binary NFA (low memory. best performance). This was found to generally increase performance through fewer cache misses (evaluating each rule would generally blow away the fast pattern matcher state in the cache). ∗ lowmem-nq . • search-method <method> – Queued match search methods .config detection: <method>] [search-method Select type of fast pattern matcher algorithm to use. moderate performance) 31 . moderate performance) ∗ ac-sparsebands . one for the specific port group and one for the ANY-ANY port group. however CPU cache can play a part in performance so a smaller memory footprint of the fast pattern matcher can potentially increase performance. potentially increasing performance. 32 . But doing so may require two port group evaluations per packet . Patterns longer than this length will be truncated to this length before inserting into the pattern matcher. rules that will be evaluated because a fast pattern was matched. Useful when there are very long contents being used and truncating the pattern won’t diminish the uniqueness of the patterns. thus potentially reducing performance. Not putting the ANY-ANY port rule group into every other port group can significantly reduce the memory footprint of the fast pattern matchers if there are many ANYANY port rules. By default.e. max-pattern-len <integer> – This is a memory optimization that specifies the maximum length of a pattern that will be put in the fast pattern matcher. some fail-state resolution will be attempted. Default is to not set a maximum pattern length.config detection: [split-any-any] [search-optimize] [max-pattern-len <int>] Other options that affect fast pattern matching. but eventually fail. Of note is that the lower memory footprint can also increase performance through fewer cache misses. Default is not to split the ANY-ANY port group. Default is not to optimize. • search-optimize – Optimizes fast pattern memory when used with search-method ac or ac-split by dynamically determining the size of a state based on the total number of states. • split-any-any – A memory/performance tradeoff. ANYANY port rules are added to every non ANY-ANY port group so that only one port group rule evaluation needs to be done per packet. Note that this may cause more false positive rule evaluations. When used with ac-bnfa. 33 . Default is not to do this. • bleedover-port-limit – The maximum number of source or destination ports designated in a rule before the rule is considered an ANY-ANY port group rule. • no stream inserts – Specifies that stream inserted packets should not be evaluated against the detection engine. Default is 5 events. 1024. Not recommended. Default is to inspect stream inserts. • max queue events <integer> – Specifies the maximum number of events to queue per packet. config detection: [debug] Options for detection engine debugging. Turns on character dumps (snort -C). [debug-print-nocontent-rule-tests] [debug-print-rule-group-build-details] • debug [debug-print-rule-groups-uncompiled] – Prints fast pattern information for a particular port [debug-print-rule-groups-compiled] group. config enable decode oversized alerts Enable alerting on packets that have headers containing length fields for which the value is greater than the length of the packet. Turns off alerts generated by T/TCP options.-group-build-details – Prints port group information during port group compilation. (snort --disable-inline-init-failopen) Disables IP option length validation alerts. Only useful if Snort was configured with –enable-inline-init-failopen. config disable decode alerts config disable inline init failopen Turns off the alerts generated by the decode phase of Snort. 34 . • debug-print-rule-groups-uncompiled – Prints uncompiled port group information. prints information about the content being used for the fast pattern matcher. Turns off alerts generated by T/TCP options. Dumps application layer (snort -d). [debug-print-fast-pattern] [bleedover-warnings-enabled] • debug-print-nocontent-rule-tests – Prints port group information during packet evaluation. • debug-print-rule-groups-compiled – Prints compiled port group information. • debug-print-fast-pattern – For each rule with fast pattern content. Disables failopen thread that allows inline traffic to pass while Snort is starting up. • bleedover-warnings-enabled – Prints a warning if the number of source or destination ports used in a rule exceed the bleedover-port-limit forcing the rule to be moved into the ANY-ANY port group. Dumps raw packet starting at link layer (snort -X). Turns off alerts generated by experimental TCP options. Disables option length validation alerts. Enables the dropping of bad packets identified by decoder (only applicable in inline mode). Once we have this information we can start to really change the game for these complex modeling problems. a global configuration directive and an engine instantiation. Global Configuration 38 . There are at least two preprocessor directives required to activate frag3. they are usually implemented by people who read the RFCs and then write their interpretation of what the RFC outlines into code.snort. Frag3 uses the sfxhash data structure and linked lists for data handling internally which allows it to have much more predictable and deterministic performance in any environment which should aid us in managing heavily fragmented environments. check out the famous Ptacek & Newsham paper at. Unfortunately. As I like to say.. This is where the idea for “target-based IDS” came from. there are ambiguities in the way that the RFCs define some of the edge conditions that may occur and when this happens different people implement certain aspects of their IP stacks differently. There can be an arbitrary number of engines defined at startup with their own configuration.1 Frag3 The frag3 preprocessor is a target-based IP defragmentation module for Snort.icir. Frag 3 Configuration Frag3 configuration is somewhat more complex than frag2.org/docs/ idspaper/. Frag3 was implemented to showcase and prototype a target-based module within Snort to test this idea. Check it out at. if the attacker has more information about the targets on a network than the IDS does. For an IDS this is a big problem. 2. Faster execution than frag2 with less complex data management..org/vern/papers/activemap-oak03.pdf. but only one global configuration. Target-based host modeling anti-evasion techniques. Preprocessors are loaded and configured using the preprocessor keyword.2. When IP stacks are written for different operating systems. The frag2 preprocessor used splay trees extensively for managing the data structures associated with defragmenting packets. Frag3 is intended as a replacement for the frag2 defragmentation module and was designed with the following goals: 1.. heavily fragmented environments the nature of the splay trees worked against the system and actually hindered performance. but after the packet has been decoded. it is possible to evade the IDS. but that’s a topic for another day. In an environment where the attacker can determine what style of IP defragmentation is being used on a particular target. The format of the preprocessor directive in the Snort config file is: preprocessor <name>: <options> 2.engine is called. Target-based analysis is a relatively new concept in network-based intrusion detection.. For more detail on this issue and how it affects IDS. – timeout <seconds> . – disabled . Default is 60 seconds. This is an optional parameter. – overlap limit <number> .Detect fragment anomalies. bsdright.Memory cap for self preservation. last. – policy <type> . – detect anomalies .• Preprocessor name: frag3 global • Available options: NOTE: Global configuration options are comma separated.Select a target-based defragmentation mode. When the preprocessor is disabled only the options memcap. Fragments smaller than or equal to this limit are considered malicious and an event is raised.Minimum acceptable TTL value for a fragment packet. – bind to <ip list> . detect anomalies option must be configured for this option to take effect.255. The known mappings are as follows. and prealloc frags are applied when specified with the configuration.IP List to bind this engine to. Default value is all. – prealloc frags <number> . – min fragment length <number> . By default this option is turned off. bsd. Available types are first. – max frags <number> . linux. The accepted range for this option is 1 . Engine Configuration • Preprocessor name: frag3 engine • Available options: NOTE: Engine configuration options are space separated.Alternate memory management mode. Use preallocated fragment nodes (faster in some situations). The Paxson Active Mapping paper introduced the terminology frag3 is using to describe policy types. The default is ”0” (unlimited). Default is 4MB. Default is 8192. Default is 1.Option to turn off the preprocessor. Anyone who develops more mappings and would like to add to this list please feel free to send us an email! 39 . This is an optional parameter. This engine will only run for packets with destination addresses contained within the IP List. This config option takes values equal to or greater than zero. Default type is bsd. the minimum is ”0”. Fragments in the engine for longer than this period will be automatically dropped.Limits the number of overlapping fragments per packet. detect anomalies option must be configured for this option to take effect.Timeout for fragments. – min ttl <value> . The default is ”0” (unlimited). – memcap <bytes> . if detect anomalies is also configured.Defines smallest fragment size (payload size) that should be considered valid. prealloc memcap.Maximum simultaneous fragments to track. 1.2smp Linux 2.10.5. detect_anomalies 40 .7-10 Linux 2.2.14-5.2 IRIX 6. bind_to [10.4. The first two engines are bound to specific IP address ranges and the last one applies to all other traffic.0.2.0 OSF1 V3.47.1.2.2 OSF1 V4.10 Linux 2.0A.10smp Linux 2.20 HP-UX 11.0.1.4.3) MacOS (version unknown) NCD Thin Clients OpenBSD (version unknown) OpenBSD (version unknown) OpenVMS 7.5.5.5F IRIX 6.9-31SGI 1.2.7.0.3 8. first and last policies assigned.0.1 SunOS 4.5.1-7.172.8.3 Cisco IOS FreeBSD HP JetDirect (printer) HP-UX B.4 Linux 2.16.16-3 Linux 2.5.1. bind_to 192.00 IRIX 4.1 OS/2 (version unknown) OSF1 V3.0/24.4 SunOS/24] policy last.0 Linux 2.3 IRIX64 6.9.6.8 Tru64 Unix V5.19-6.2.0/24 policy first.V5. Packets that don’t fall within the address requirements of the first two engines automatically fall through to the third one.5.4 (RedHat 7. identify sessions that may be ignored (large data transfers. and update the identifying information about the session (application protocol. etc). direction.2 Stream5 The Stream5 preprocessor is a target-based TCP reassembly module for Snort. other protocol normalizers/preprocessors to dynamically configure reassembly behavior as required by the application layer protocol. TCP Timestamps. \ [memcap <number bytes>]. Stream5 Global Configuration Global settings for the Stream5 preprocessor. data received outside the TCP window. Data on SYN. [disabled] 41 . \ [track_udp <yes|no>]. The methods for handling overlapping data. Anomaly Detection TCP protocol anomalies. Some of these anomalies are detected on a per-target basis. etc) that can later be used by rules. [show_rebuilt_packets]. With Stream5. preprocessor stream5_global: \ [track_tcp <yes|no>]. [max_tcp <number>]. UDP sessions are established as the result of a series of UDP packets from two end points via the same set of ports. Its event output is packet-based so it will work with all output modes of Snort. \ [prune_log_max <bytes>]. and the policies supported by Stream5 are the results of extensive research with many target operating systems. FIN and Reset sequence numbers. such as data on SYN packets. Target-Based Stream5. a few operating systems allow data in TCP SYN packets. It is capable of tracking sessions for both TCP and UDP. the rule ’flow’ and ’flowbits’ keywords are usable with TCP as well as UDP traffic. 2. \ [flush_on_alert].Frag 3 Alert Output Frag3 is capable of detecting eight different types of anomalies. etc. Read the documentation in the doc/signatures directory with filenames that begin with “123-” for information on the different event types. [max_udp <number>]. \ [track_icmp <yes|no>]. For example. etc are configured via the detect anomalies option to the TCP configuration. Transport Protocols TCP sessions are identified via the classic TCP ”connection”. Stream API Stream5 fully supports the Stream API. while others do not. introduces target-based actions for handling of overlapping data and other TCP anomalies. which effectively terminate a TCP or UDP session.2. like Frag3. [max_icmp <number>]. ICMP messages are tracked for the purposes of checking for unreachable and service unavailable messages. The default is ”262144”. the minimum is ”1”. \ [dont_store_large_packets]. [policy <policy_id>]. minimum is ”1”. minimum is ”32768” (32KB). \ [timeout <number secs>]. Maximum simultaneous ICMP sessions tracked. [use_static_footprint_sizes]. The default is ”8388608” (8MB). [dont_reassemble_async]. max udp and max icmp are applied when specified with the configuration. The default is ”30”. Print a message when a session terminates that was consuming more than the specified number of bytes. The default is set to off. The default is ”131072”. The default is ”no”. Stream5 TCP Configuration Provides a means on a per IP address target to configure TCP policy. per policy that is bound to an IP address or network. minimum can be either ”0” (disabled) or if not disabled the minimum is ”1024” and maximum is ”1073741824”. Track sessions for ICMP. Option to disable the stream5 tracking. Track sessions for UDP. By default this option is turned off. \ [require_3whs [<number secs>]]. and that policy is not bound to an IP address or network. Flush a TCP stream when an alert is generated on that stream. 42 . This can have multiple occurrences. \ [max_queued_bytes <bytes>]. max tcp. [flush_factor <number segs>] Option bind to <ip addr> timeout <num seconds> \ Description IP address or network for this policy. The default is set to off. preprocessor stream5_tcp: \ [bind_to <ip_addr>]. \ [protocol <client|server|both> <all|service name [service name]*>]. The default is ”1048576” (1MB). The default is ”yes”. [ports <client|server|both> <all|number [number]*>]. \ [check_session_hijacking]. and the maximum is ”86400” (approximately 1 day). One default policy must be specified. maximum is ”1073741824” (1GB). \ [overlap_limit <number>]. When the preprocessor is disabled only the options memcap. [max_queued_segs <number segs>]. Print/display packet after rebuilt (for debugging). minimum is ”1”. The default is ”65536”. minimum is ”1”. [detect_anomalies]. Memcap for TCP packet storage. [max_window <number>]. maximum is ”1048576”. \ [ignore_any_rules]. Session timeout. The default is ”yes”. The default is set to any. Backwards compatibility. Maximum simultaneous UDP sessions tracked. \ [small_segments <number> bytes <number> [ignore_ports number [number]*]]. maximum is ”1048576”. Maximum simultaneous TCP sessions tracked. maximum is ”1048576”. Alerts are generated (per ’detect anomalies’ option) for either the client or server when the MAC address for one side or the other does not match. That is the highest possible TCP window per RFCs. The optional number of seconds specifies a startup timeout. The default is ”0” (unlimited). The default is set to off. Windows 95/98/ME win2003 Windows 2003 Server vista Windows Vista solaris Solaris 9. The default is ”0” (don’t consider existing sessions established). 43 . the minimum is ”0”. The policy id can be one of the following: Policy Name Operating Systems. Limit the number of bytes queued for reassembly on a given TCP session to bytes. This check validates the hardware (MAC) address from both sides of the connect – as established on the 3-way handshake against subsequent packets received on the session. the minimum is ”0”. and the maximum is ”255”. bsd FresBSD 4. the minimum is ”0”. The default is set to off. NetBSD 2. first Favor first overlapped segment. The default is set to off.2 and earlier windows Windows 2000. This option should not be used production environments. and a maximum of ”1073741824” (1GB). This option is intended to prevent a DoS against Stream5 by an attacker using an abnormally large window. A message is written to console/syslog when this limit is enforced. Performance improvement to not queue large packets in reassembly buffer. OpenBSD 3. with a non-zero minimum of ”1024”. Using this option may result in missed attacks.policy <policy id> overlap limit <number> max window <number> require 3whs [<number seconds>] detect anomalies check session hijacking use static footprint sizes dont store large packets dont reassemble async max queued bytes <bytes> The Operating System policy for the target OS. Windows XP. Maximum TCP window allowed. Check for TCP session hijacking.3 and newer Limits the number of overlapping packets per session. so using a value near the maximum is discouraged.x and newer linux Linux 2. and the maximum is ”86400” (approximately 1 day). Default is ”1048576” (1MB). there are no checks performed. This allows a grace period for existing sessions to be considered established during that interval immediately after Snort is started. The default is ”0” (unlimited). The default is set to off. Use static values for determining when to build a reassembled packet to allow for repeatable tests. If an ethernet layer is not part of the protocol stack received by Snort. Establish sessions only on completion of a SYN/SYN-ACK/ACK handshake. The default is set to queue packets.x and newer. Detect and alert on TCP protocol anomalies. last Favor first overlapped segment.x and newer hpux HPUX 11 and newer hpux10 HPUX 10 irix IRIX 6 and newer macos MacOS 10. A value of ”0” means unlimited. and the maximum is ”1073725440” (65535 left shift 14).x and newer.4 and newer old-linux Linux 2. Don’t queue packets for reassembly if traffic has not been seen in both directions. The default is set to off. including any of the internal defaults (see 2. The drop in size often indicates an end of request or response. This can appear more than once in a given config. Using this does not affect rules that look at protocol headers. derived based on an average size of 400 bytes. with a maximum of ”2048”. or byte test options. Configure the maximum small segments queued. ignore ports is optional. The default value is ”0” (disabled). [ignore_any_rules] 44 . This feature requires that detect anomalies be enabled. A message is written to console/syslog when this limit is enforced. The default is ”2621”. ! △NOTE If no options are specified for a given TCP policy. This option can be used only in default policy. Since there is no target based binding. PCRE. Don’t process any -> any (ports) rules for TCP that attempt to match payload if there are no port specific rules for the src or destination port. or both and list of ports in which to perform reassembly.7). or both and list of services in which to perform reassembly. there should be only one occurrence of the UDP configuration. Specify the client. Rules that have flow or flowbits will never be ignored. The service names can be any of those used in the host attribute table (see 2. The default is ”off”. The default settings are ports client 21 23 25 42 53 80 110 111 135 136 137 139 143 445 513 514 1433 1521 2401 3306.3) or others specific to the network. The first number is the number of consecutive segments that will trigger the detection rule. The default settings are ports client ftp telnet smtp nameserver dns http pop3 sunrpc dcerpc netbios-ssn imap login shell mssql oracle cvs mysql. defines the list of ports in which will be ignored for this rule. that is the default TCP policy. Stream5 UDP Configuration Configuration for UDP session tracking. Specify the client. and a maximum of ”1073741824” (1GB). This is a performance improvement and may result in missed attacks. The second number is the minimum bytes for a segment to be considered ”small”. preprocessor stream5_udp: [timeout <number secs>]. A message is written to console/syslog when this limit is enforced. server. with a non-zero minimum of ”2”. with a maximum of ”2048”. The minimum port allowed is ”1” and the maximum allowed is ”65535”. Useful in ips mode to flush upon seeing a drop in segment size after N segments of non-decreasing size. This can appear more than once in a given config. A value of ”0” means unlimited. server. The default value is ”0” (disabled).7. only those with content. The number of ports can be up to ”65535”. If only a bind to option is used with no other options that TCP policy uses all of the default values. track_icmp no preprocessor stream5_tcp: \ policy first. the ignore any rules option will be disabled in this case. and the maximum is ”86400” (approximately 1 day). with all other traffic going to the default policy of Solaris. It is not ICMP is currently turned on by default. preprocessor stream5_icmp: [timeout <number secs>] Option timeout <num seconds> Description Session timeout. Don’t process any -> any (ports) rules for UDP that attempt to match payload if there are no port specific rules for the src or destination port. track_udp yes. PCRE. track_tcp yes. the minimum is ”1”. With the ignore the ignore any rules option is effectively pointless. if a UDP rule that uses any -> any ports includes either flow or flowbits. This is a performance improvement and may result in missed attacks. For example. in minimal code form and is NOT ready for use in production networks. Stream5 ICMP Configuration Configuration for ICMP session tracking. only those with content. 45 . A list of rule SIDs affected by this option are printed at Snort’s startup. the ’ignored’ any -> any rule will be applied to traffic to/from port 53. and the maximum is ”86400” (approximately 1 day). The default is ”30”. Rules that have flow or flowbits will never be ignored. Example Configurations 1. ! △NOTE untested. This configuration maps two network segments to different OS policies. a UDP rule will be ignored except when there is another port specific rule With the ignore that may be applied to the traffic. there should be only one occurrence of the ICMP configuration.conf and can be used for repeatable tests of stream reassembly in readback mode. The default is ”off”. or byte test options. one for Windows and one for Linux. Because of the potential impact of disabling a flowbits rule. ! △NOTE any rules option. Since there is no target based binding. but NOT to any other source or destination port. This example configuration is the default configuration in snort. the minimum is ”1”. if a UDP rule specifies destination port 53. preprocessor stream5_global: \ max_tcp 8192. ! △NOTE any rules option. The default is ”30”. Using this does not affect rules that look at protocol headers.Option timeout <num seconds> ignore any rules Description Session timeout. use_static_footprint_sizes preprocessor stream5_udp: \ ignore_any_rules 2. sfPortscan alerts for the following types of portsweeps: 46 . One of the most common portscanning tools in use today is Nmap.1. since most hosts have relatively few services available. Our primary objective in detecting portscans is to detect and track these negative responses. ! △NOTE Negative queries will be distributed among scanning hosts. Nmap encompasses many.preprocessor preprocessor preprocessor preprocessor stream5_global: track_tcp yes stream5_tcp: bind_to 192. most queries sent by the attacker will be negative (meaning that the service ports are closed). Distributed portscans occur when multiple hosts query one host for open services. In the nature of legitimate network communications. is designed to detect the first phase in a network attack: Reconnaissance.2. an attacker determines what types of network protocols or services a host supports. Most of the port queries will be negative.. sfPortscan was designed to be able to detect the different types of scans Nmap can produce. and rarer still are multiple negative responses within a given amount of time.1. policy windows stream5_tcp: bind_to 10. otherwise. which are the traditional types of scans..1.3 sfPortscan The sfPortscan module. one host scans multiple ports on another host. developed by Sourcefire. policy linux stream5_tcp: policy solaris 2. if not all. of the current portscanning techniques.0/24. This is the traditional place where a portscan takes place. This tactic helps hide the true identity of the attacker. negative responses from hosts are rare.168. This phase assumes the attacking host has no prior knowledge of what protocols or services are supported by the target. This is used to evade an IDS and obfuscate command and control hosts. In the Reconnaissance phase. so we track this type of scan through the scanned host.0/24. As the attacker has no beforehand knowledge of its intended target. this phase would not be necessary. only the attacker has a spoofed source address inter-mixed with the real scanning address. This usually occurs when a new exploit comes out and the attacker is looking for a specific service. 47 . sfPortscan only generates one alert for each host pair in question during the time window (more on windows below). can trigger these alerts because they can send out many connection attempts within a very small amount of time. Open port events are not individual alerts. For example. A filtered alert may go off before responses from the remote hosts are received. sfPortscan will only track open ports after the alert has been triggered. ! △NOTE The characteristics of a portsweep scan may not result in many negative responses. One host scans a single port on multiple hosts. sfPortscan will also display any open ports that were scanned. we will most likely not see many negative responses. On TCP scan alerts.• TCP Portsweep • UDP Portsweep • IP Portsweep • ICMP Portsweep These alerts are for one→many portsweeps. but tags based on the original scan alert. It’s also a good indicator of whether the alert is just a very active legitimate host. On TCP sweep alerts however. such as NATs. Active hosts.. if an attacker portsweeps a web farm for port 80. The list is a comma separated list of IP addresses. The parameter is the same format as that of watch ip. as described in Section 2.conf. scan type <scan type> Available options: • portscan • portsweep • decoy portscan • distributed portscan • all 3. 7. If file does not contain a leading slash.2. A ”High” setting will catch some slow scans because of the continuous monitoring. so the user may need to deploy the use of Ignore directives to properly tune this directive. DNS caches. ignore scanners <ip1|ip2/cidr[ [port|port2-port3]]> Ignores the source of scan alerts. You should enable the Stream preprocessor in your snort.“High” alerts continuously track hosts on a network using a time window to evaluate portscan statistics for that host. Optionally. The parameter is the same format as that of watch ip.“Medium” alerts track connection counts. proxies. This setting is based on a static time window of 60 seconds.“Low” alerts are only generated on error packets sent from the target host. sense level <level> Available options: • low .2. after which this window is reset. IP address using CIDR notation.sfPortscan Configuration Use of the Stream5 preprocessor is required for sfPortscan.. and so will generate filtered scan alerts. proto <protocol> Available options: • TCP • UDP • IGMP • ip proto • all 2. ignore scanned <ip1|ip2/cidr[ [port|port2-port3]]> Ignores the destination of scan alerts. watch ip <ip1|ip2/cidr[ [port|port2-port3]]> Defines which IPs. This most definitely will require the user to tune sfPortscan. 6. etc). However. IPs or networks not falling into this range are ignored if this option is used. this file will be placed in the Snort config dir. This setting may false positive on active hosts (NATs. but is very sensitive to active hosts. 4. 5. and because of the nature of error responses. this setting should see very few false positives. • medium . this setting will never trigger a Filtered Scan alert because of a lack of error responses. and specific ports on those hosts to watch. 48 . • high . logfile <file> This option will output portscan events to the file specified. Stream gives portscan direction in the case of connectionless protocols like ICMP and UDP. networks. especially under heavy load with dropped packets. 9. The other options are parsed but not used. 49 . The open port information is stored in the IP payload and contains the port that is open. However. This option disables the preprocessor. The sfPortscan alert output was designed to work with unified packet logging. detect ack scans This option will include sessions picked up in midstream by the stream module. The size tends to be around 100 .8. etc. then the user won’t see open port alerts. which is necessary to detect ACK scans. port count. disabled This optional keyword is allowed with any policy to avoid packet processing. IP count. which is why the option is off by default.200 bytes.. include midstream This option will include sessions picked up in midstream by Stream5. The characteristics of the packet are: Src/Dst MAC Addr == MACDAD IP Protocol == 255 IP TTL == 0 Other than that. The payload and payload size of the packet are equal to the length of the additional portscan information that is logged. IP range. the packet looks like the IP portion of the packet that caused the portscan alert to be generated. Any valid configuration may have ”disabled” added to it. so it is possible to extend favorite Snort GUIs to display portscan alerts and the additional information in the IP payload using the above packet characteristics. 10. Open port alerts differ from the other portscan alerts. This can lead to false alerts. this can lead to false alerts. When the preprocessor is disabled only the memcap option is applied when specified with the configuration. This includes any IP options. connection count. because open port alerts utilize the tagged packet output system. and port range. which is why the option is off by default. snort generates a pseudo-packet and uses the payload portion to store the additional portscan information of priority count. especially under heavy load with dropped packets. This means that if an output system that doesn’t print tagged packets is used. 169.3:192. Use the watch ip. We use this count (along with IP Count) to determine the difference between one-to-one portscans and one-to-one decoys.168. the more bad responses have been received. this is a low number.3 -> 192. The analyst should set this option to the list of CIDR blocks and IPs that they want to watch. ignore scanners. and one-to-one scans may appear as a distributed scan.169. 3. This is accurate for connection-based protocols.168. It’s important to correctly set these options. and is more of an estimate for others. sfPortscan will watch all network traffic. 50 . Event id/Event ref These fields are used to link an alert with the corresponding Open Port tagged packet 2.168. 4.3 -> 192. Priority Count Priority Count keeps track of bad responses (resets. Connection Count Connection Count lists how many connections are active on the hosts (src or dst). 5.168. Portsweep (one-to-many) scans display the scanned IP range. High connection count and low priority count would indicate filtered (no response received from target). and increments the count if the next IP is different.603880 event_id: 2 192. The higher the priority count.168.5 (portscan) Open Port Open Port: 38458 1. Here are some tuning tips: 1. Tuning sfPortscan The most important aspect in detecting portscans is tuning the detection engine for your network(s).169.5 (portscan) TCP Filtered Portscan Priority Count: 0 Connection Count: 200 IP Count: 2 Scanner IP Range: 192. For active hosts this number will be high regardless. one or more additional tagged packet(s) will be appended: Time: 09/08-15:07:31. Scanned/Scanner IP Range This field changes depending on the type of alert. 6.168. and explained further below: Time: 09/08-15:07:31.169. Port Count Port Count keeps track of the last port contacted and increments this number when that changes.169. and ignore scanned options. Portscans (one-to-one) display the scanner IP.603881 event_ref: 2 192.4 Port/Proto Count: 200 Port/Proto Range: 20:47557 If there are open ports on the target.Log File Output Log file output is displayed in the following format. The watch ip option is easy to understand. unreachables). For one-to-one scans. Whether or not a portscan was filtered is determined here. IP Count IP Count keeps track of the last IP to contact a host.169. If no watch ip is defined. By default. For portscans. Most of the false positives that sfPortscan may generate are of the filtered scan alert type. For portscans. So be much more suspicious of filtered portscans. Many times this just indicates that a host was very active during the time period in question. the analyst will know which to ignore it as. DNS cache servers. These responses indicate a portscan and the alerts generated by the low sensitivity level are highly accurate and require the least tuning. This indicates that there were many connections to the same port. lower the sensitivity level. If all else fails. Filtered scan alerts are much more prone to false positives. If the host is generating portscan alerts (and is the host that is being scanned). It does this by normalizing the packet into the packet buffer. add it to the ignore scanned option. then add it to the ignore scanners option. this ratio should be low. The following is a list of ratios to estimate and the associated values that indicate a legitimate scan and not a false positive. The reason that Priority Count is not included. Some of the most common examples are NAT IPs. IP Count. For portscans. this ratio should be low. this ratio should be high and indicates that the scanned host’s ports were connected to by fewer IPs. 2. For portsweeps. Depending on the type of alert that the host generates. this ratio should be low. sfPortscan may not generate false positives for these types of hosts. it runs against traffic on ports 111 and 32771. and nfs servers. You get the best protection the higher the sensitivity level.4 RPC Decode The rpc decode preprocessor normalizes RPC multiple fragmented records into a single un-fragmented record. Connection Count / Port Count: This ratio indicates an estimated average of connections per port. The low sensitivity level only generates alerts based on error responses. since these are more prone to false positives. Port Count / IP Count: This ratio indicates an estimated average of ports connected to per IP. Format preprocessor rpc_decode: \ <ports> [ alert_fragments ] \ [no_alert_multiple_requests] \ [no_alert_large_fragments] \ [no_alert_incomplete] 51 . When determining false positives. 4.). but for now the user must manually do this. we hope to automate much of this analysis in assigning a scope level and confidence level. syslog servers. If the host continually generates these types of alerts. If the host is generating portsweep events. Connection Count. indicating that the scanning host connected to few ports but on many hosts. IP Range. The portscan alert details are vital in determining the scope of a portscan and also the confidence of the portscan. If stream5 is enabled. For portsweeps. This indicates that each connection was to a different port. and Port Range to determine false positives. Make use of the Priority Count. The low sensitivity level does not catch filtered scans. 3. lower the sensitivity level. but it’s also important that the portscan detection engine generate alerts that the analyst will find informative. The easiest way to determine false positives is through simple ratio estimations. this ratio should be high. it will only process client-side traffic. the alert type is very important. Port Count. For portsweeps. 2. Connection Count / IP Count: This ratio indicates an estimated average of connections per IP. the higher the better. If none of these other tuning techniques work or the analyst doesn’t have the time for tuning. add it to the ignore scanners list or use a lower sensitivity level.2. this ratio should be high. In the future. but be aware when first tuning sfPortscan for these IPs.The ignore scanners and ignore scanned options come into play in weeding out legitimate hosts that are very active on your network. it should have an output mode enabled. Snort’s real-time statistics are processed. where statistics get printed to the specified file name. either “console” which prints statistics to the console window or “file” with a file name. . By default. Don’t alert when a single fragment record exceeds the size of one packet. Don’t alert when there are multiple records in one packet. Whenever this preprocessor is turned on. 2. Don’t alert when the sum of fragmented records exceeds one packet.5 Performance Monitor This preprocessor measures Snort’s real-time and theoretical maximum performance.2.Option alert fragments no alert multiple requests no alert large fragments no alert incomplete Description Alert on any fragmented RPC record. ***. Prints statistics at the console.Dump stats for entire life of Snort. • console . • max .Prints out statistics about the type of traffic and protocol distributions that Snort is seeing. so if possible. • atexitonly . Not all statistics are output to this file. At startup. Rules without content are not filtered via the fast pattern matcher and are always evaluated. since checking the time sample reduces Snort’s performance. • events . Both of these directives can be overridden on the command line with the -Z or --perfmon-file options. This option can produce large amounts of output. • time .Represents the number of seconds between intervals. • pktcnt . This prints out statistics as to the number of rules that were evaluated and didn’t match (non-qualified events) vs. Snort will log a distinctive line to this file with a timestamp to all readers to easily identify gaps in the stats caused by Snort not running. The fast pattern matcher is used to select a set of rules for evaluation based on the longest content or a content modified with the fast pattern rule option in a rule.Turns on the theoretical maximum performance that Snort calculates given the processor speed and current performance. This is only valid for uniprocessor machines. Rules with short. this is 10000. By default. This boosts performance. generic contents are more likely to be selected for evaluation than those with longer. 54 . the number of rules that were evaluated and matched (qualified events). By default.Adjusts the number of packets to process before checking for the time sample. • file . since many operating systems don’t keep accurate kernel statistics for multiple CPUs. • accumulate or reset .Prints statistics in a comma-delimited format to the file that is specified. adding a content rule option to those rules can decrease the number of times they need to be evaluated and improve performance. reset is used. more unique contents. You may also use snortfile which will output into your defined Snort log directory.Turns on event reporting. The current version of HTTP Inspect only handles stateless processing. Examples preprocessor perfmonitor: \ time 30 events flow file stats. there are two areas of configuration: global and server. This value is in bytes and the default value is 52428800 (50MB). Users can configure individual HTTP servers with a variety of options. The minimum is 4096 bytes and the maximum is 2147483648 bytes (2GB).Defines the maximum size of the comma-delimited file. HTTP Inspect will decode the buffer. The following example gives the generic global configuration format: 55 ..x. followed by YYYY-MM-DD. it will be rolled into a new date stamped file of the format YYYY-MM-DD. as well as the IP addresses of the host pairs in human-readable format.Prints the flow IP statistics in a comma-delimited format to the file that is specified. HTTP Inspect works on both client requests and server responses.profile max console pktcnt 10000 preprocessor perfmonitor: \ time 300 file /var/tmp/snortstat pktcnt 10000 preprocessor perfmonitor: \ time 30 flow-ip flow-ip-file flow-ip-stats.csv pktcnt 1000 2. Future versions will have a stateful processing mode which will hook into various reassembly modules. • flow-ip-file . and normalize the fields. which should allow the user to emulate any type of web server. Given a data buffer. Once the cap has been reached.Collects IP traffic distribution statistics based on host pairs.• max file size .6 HTTP Inspect HTTP Inspect is a generic HTTP decoder for user applications. All of the statistics mentioned above. the table will start to prune the statistics for the least recently seen host pairs to free memory. • flow-ip . and will be fooled if packets are not reassembled. The default is the same as the maximum.2. For each pair of hosts for which IP traffic has been seen. Global Configuration The global configuration deals with configuration options that determine the global functioning of HTTP Inspect. are included. Within HTTP Inspect. but there are limitations in analyzing the protocol. Before the file exceeds this size.Sets the memory cap on the hash table used to store IP traffic statistics for host pairs. This means that HTTP Inspect looks for HTTP fields on a packet-by-packet basis. HTTP Inspect has a very “rich” user configuration. find HTTP fields. This works fine when there is another module handling the reassembly. where x will be incremented each time the comma delimited file is rolled over. • flow-ip-memcap . In the future. the codemap is usually 1252. The default for this option is 2920. This option along with compress depth and 56 . which is available at. detect anomalous servers This global configuration option enables generic HTTP server traffic inspection on non-HTTP configured ports. This option is turned off by default. The iis unicode map file is a Unicode codepoint map which tells HTTP Inspect which codepage to use when decoding Unicode characters. Blind firewall proxies don’t count.map and should be used if no other codepoint map is available. 6.snort. 4.org/ dl/contrib/.conf or be specified via a fully-qualified path to the map file. ! △NOTE Remember that this configuration is for the global IIS Unicode map. and alerts if HTTP traffic is seen. decompress depth <integer> This option specifies the maximum amount of decompressed data to obtain from the compressed packet payload. this inspects all network traffic. please only use this feature with traditional proxy environments. By configuring HTTP Inspect servers and enabling allow proxy use. A Microsoft US Unicode codepoint map is provided in the Snort source etc directory by default. This value can be set from 1 to 65535. max gzip mem This option determines (in bytes) the maximum amount of memory the HTTP Inspect preprocessor will use for decompression. This value can be set from 3276 bytes to 100MB. This value can be set from 1 to 65535. 3. It is called unicode. The map file can reside in the same directory as snort. then you may get a lot of proxy alerts. Please note that if users aren’t required to configure web proxy use. compress depth <integer> This option specifies the maximum amount of packet payload to decompress. we want to limit this to specific networks so it’s more useful. but for right now. Configuration 1. The iis unicode map is a required configuration parameter. iis unicode map <map filename> [codemap <integer>] This is the global iis unicode map file. you will only receive proxy use alerts for web users that aren’t using the configured proxies or are using a rogue proxy server. So. A tool is supplied with Snort to generate custom Unicode maps--ms unicode generator.Format preprocessor http_inspect: \ global \ iis_unicode_map <map_filename> \ codemap <integer> \ [detect_anomalous_servers] \ [proxy_alert] \ [max_gzip_mem <num>] \ [compress_depth <num>] [decompress_depth <num>] \ disabled You can only have a single global configuration. For US servers. Don’t turn this on if you don’t have a default server configuration that encompasses all of the HTTP server ports that your users might access. The default for this option is 1460. individual servers can reference their own IIS Unicode map. you’ll get an error if you try otherwise. 5. proxy alert This enables global alerting on HTTP server proxy usage.c. 2. map 1252 Server Configuration There are two types of server configurations: default and by IP address.decompress depth determines the gzip sessions that will be decompressed at any given instant.2. Example Global Configuration preprocessor http_inspect: \ global iis_unicode_map unicode. Other options are parsed but not used. ! △NOTE It is suggested to set this value such that the max gzip session calculated as follows is at least 1. disabled This optional keyword is allowed with any policy to avoid packet processing.1.1 10. The default value for this option is 838860. the only difference being that multiple IPs can be specified via a space separated list.1. ”compress depth” and ”decompress depth” options are applied when specified with the configuration.2. Example IP Configuration preprocessor http_inspect_server: \ server 10. Most of your web servers will most likely end up using the default configuration. This option disables the preprocessor. max gzip session = max gzip mem /(decompress depth + compress depth) 7. When the preprocessor is disabled only the ”max gzip mem”. Example Multiple IP Configuration preprocessor http_inspect_server: \ server { 10.0/24 } profile all ports { 80 } 57 . Any valid configuration may have ”disabled” added to it. the only difference being that specific IPs can be configured.1.1. There is a limit of 40 IP addresses or CIDR notations per http inspect server line. Example Default Configuration preprocessor http_inspect_server: \ server default profile all ports { 80 } Configuration by IP Address This format is very similar to “default”. alert on non strict URL parsing on tab uri delimiter is set max header length 0. iis. profile apache sets the configuration options described in Table 2. In other words. backslashes. alert off double decoding on. This is a great profile for detecting all types of attacks. bare-byte encoding. alert off directory normalization on.Server Configuration Options Important: Some configuration options have an argument of ‘yes’ or ‘no’. number of headers not checked 1-B. alert on iis backslash on. Double decode is not supported in IIS 5. HTTP normalization will still occur. profile iis sets the configuration options described in Table 2. So that means we use IIS Unicode codemaps for each server.1 and beyond. alert on %u decoding on. etc. and iis4 0. %u encoding. but are not required for proper operation.4. regardless of the HTTP server. iis4 0. only the alerting functionality. We alert on the more serious forms of evasions. like IIS does. Table 2. 1-D. whether set to ‘yes’ or ’no’. alert off iis delimiter on. alert off multiple slash on. Profiles allow the user to easily configure the preprocessor for a certain type of server. profile all sets the configuration options described in Table 2. there was a double decoding vulnerability. 1-A. so it’s disabled by default. all The all profile is meant to normalize the URI using most of the common tricks available. profile <all|apache|iis|iis5 0|iis4 0> Users can configure HTTP Inspect by using pre-defined HTTP server profiles. alert on iis unicode codepoints on.0 and IIS 5. The ‘yes/no’ argument does not specify whether the configuration option itself is on or off. This differs from the iis profile by only accepting UTF-8 standard Unicode encoding and not accepting backslashes as legitimate slashes. alert off webroot on. double decoding.3. iis The iis profile mimics IIS servers. 1. and rules based on HTTP traffic will still trigger. alert off apache whitespace on. alert on bare byte decoding on. 58 . Apache also accepts tabs as whitespace. This argument specifies whether the user wants the configuration option to generate an HTTP Inspect alert or not. apache The apache profile is used for Apache web servers.5. 1-C. header length not checked max headers 0. There are five profiles available: all. iis5 0 In IIS 4. except they will alert by default if a URL has a double encoding. These two profiles are identical to iis. apache. iis5 0.0. However. HTTPS traffic is encrypted and cannot be decoded with HTTP Inspect. no profile The default options used by HTTP Inspect do not use a profile and are described in Table 2. ports {<port> [<port>< . use the SSL preprocessor. To ignore HTTPS traffic. alert off directory normalization on. alert on utf 8 encoding on. Example preprocessor http_inspect_server: \ server 1..Table 2.6. alert off non strict url parsing on tab uri delimiter is set max header length 0. number of headers not checked 1-E.1 profile all ports { 80 3128 } 2.. alert off multiple slash on. default.1. header length not checked max headers 0.4: Options for the apache Profile Option Setting server flow depth 300 client flow depth 300 post depth 0 chunk encoding alert on chunks larger than 500000 bytes ASCII decoding on. >]} This is how the user configures which ports to decode on the HTTP server. alert off webroot on. 59 . alert on apache whitespace on. But the ms unicode generator program tells you which codemap to use for you server. Decompression is done across packets. The Cookie header line is extracted and stored in HTTP Cookie buffer for HTTP requests and Set-Cookie is extracted and stored in HTTP Cookie buffer for HTTP responses. this is usually 1252.org/dl/contrib/ directory. alert on iis unicode codepoints on. alert on bare byte decoding on.org web site at. You should select the config option ”extended response inspection” before configuring this option. By default the cookie inspection and extraction will be turned off. By turning this option the HTTP response will be thoroughly inspected. ! △NOTE When this option is turned on.c. alert off directory normalization on. alert on apache whitespace on. The different fields of a HTTP response such as status code. if the HTTP response packet has a body then any content pattern matches ( without http modifiers ) will search the response body ((decompressed in case of gzip) and not the entire packet payload. In both cases the header name is also stored along with the cookie.Table 2. alert on %u decoding on. You can select the correct code page by looking at the available code pages that the ms unicode generator outputs. To search for patterns in the header of the response. alert off iis delimiter on. cookie (when enable cookie is configured) and body are extracted and saved into buffers.snort. status message. So. enable cookie This options turns on the cookie extraction from HTTP requests and HTTP response. 6. http stat msg and http cookie. Executing this program generates a Unicode map for the system that it was run on. alert on double decoding on. number of headers not checked 3. alert off webroot on. 5. alert on non strict URL parsing on max header length 0. it’s the ANSI code page. The default http response inspection does not inspect the various fields of a HTTP response. This program is located on the Snort. alert on iis backslash on. For US servers. http stat code. header length not checked max headers 0. one should use the http modifiers with content such as http header. When the compressed data is spanned 60 . 4. you run this program on that server and use that Unicode map in this configuration. the user needs to specify the file that contains the IIS Unicode map and also specify the Unicode map to use. extended response inspection This enables the extended HTTP response inspection. So the decompression will end when either the ’compress depth’ or ’decompress depth’ is reached or when the compressed data ends. iis unicode map <map filename> codemap <integer> The IIS Unicode map is generated by the program ms unicode generator. When using this option. to get the specific Unicode mappings for an IIS web server. Different rule options are provided to inspect these buffers. headers. inspect gzip This option specifies the HTTP inspect module to uncompress the compressed data(gzip/deflate) in HTTP response. alert off multiple slash on. alert off apache whitespace on. The XFF/True-Client-IP Original client IP address is logged only with unified2 output and is not logged with console (-A cmg) output. To ensure unlimited decompression. When extended response inspection is turned on. 9.e. it is applied to the HTTP response body (decompressed data when inspect gzip is turned on) and not the HTTP headers. alert off multiple slash on. Snort should be configured with the –enable-zlib flag. or the content 61 . When extended response inspection is turned off the server flow depth is applied to the entire HTTP response (including headers). alert off utf 8 encoding on. ! △NOTE The original client IP from XFF/True-Client-IP in unified2 logs can be viewed using the tool u2spewfoo. enable xff This option enables Snort to parse and log the original client IP present in the X-Forwarded-For or True-ClientIP HTTP request headers along with the generated events. The decompression in a single packet is still limited by the ’compress depth’ and ’decompress depth’. alert off non strict URL parsing on max header length 0. 8. the decompressed data from different packets are not combined while inspecting). Also the amount of decompressed data that will be inspected depends on the ’server flow depth’ configured. header length not checked max headers 0. unlimited decompress This option enables the user to decompress unlimited gzip data (across multiple packets).Table 2.Decompression will stop when the compressed data ends or when a out of sequence packet is received. 7. it is suggested to set the ’compress depth’ and ’decompress depth’ to its maximum values. server flow depth <integer> This specifies the amount of server response payload to inspect. number of headers not checked across multiple packets. (i. Most of these rules target either the HTTP header. the state of the last decompressed packet is used to decompressed the data of the next packet.6: Default HTTP Inspect Options Option Setting port 80 server flow depth 300 client flow depth 300 post depth -1 chunk encoding alert on chunks larger than 500000 bytes ASCII decoding on. alert on iis backslash on. alert off iis delimiter on. alert off directory normalization on. alert off webroot on. This option can be used to balance the needs of IDS performance and level of inspection of HTTP server response data. ! △NOTE To enable compression of HTTP server response. This tool is present in the tools/u2spewfoo directory of snort source tree. But the decompressed data are individually inspected. Unlike client flow depth this option is applied per TCP session. Snort rules are targeted at HTTP server response traffic and when used with a small flow depth value may cause false negatives. ASCII decoding is also enabled to enforce correct functioning. Values above 0 tell Snort the number of bytes to inspect of the server response (excluding the HTTP headers when extended response inspection is turned on) in a given HTTP session. Unlike server flow depth this value is applied to the first packet of the HTTP request. Only packets payloads starting with ’HTTP’ will be considered as the first packet of a server response. so it is recommended that you disable HTTP Inspect alerting for this option. but your mileage may vary. a.a %2f = /.that is likely to be in the first hundred or so bytes of non-header data. It is normal to see ASCII encoding usage in URLs. This value can be set from -1 to 65535. 62 . Apache uses this standard. the entire payload will be inspected. A value of -1 causes Snort to ignore all the data in the post message. a value of 0 causes Snort to inspect all the client post message. If more than flow depth bytes are in the payload of the first packet only flow depth bytes of the payload will be inspected. Note that the 65535 byte maximum flow depth applies to stream reassembled packets as well. This value can be set from -1 to 1460. It is suggested to set the client flow depth to its maximum value. 14. Inversely. 12. %2e = . the entire payload will be inspected. client flow depth <integer> This specifies the amount of raw client request payload to inspect. you may be interested in knowing when you have a UTF-8 encoded URI. 13. extended ascii uri This option enables the support for extended ASCII codes in the HTTP request URI. so for any Apache servers. When the extended response inspection is turned on. This increases the performance by inspecting only specified bytes in the post message. When utf 8 is enabled. 10. If more than flow depth bytes are in the payload of the HTTP response packet in a session only flow depth bytes of the payload will be inspected for that session. etc. Note that the 1460 byte maximum flow depth applies to stream reassembled packets as well. Values above 0 tell Snort the number of bytes to inspect in the first packet of the client request.. The default value is -1. ! △NOTE server flow depth is the same as the old flow depth option. which will be deprecated in a future release. 11. a value of 0 causes Snort to inspect all HTTP client side traffic defined in ”ports” (note that this will likely slow down IDS performance).k. Rules that are meant to inspect data in the payload of the first packet of a client request beyond 1460 bytes will be ineffective unless flow depth is set to 0. post depth <integer> This specifies the amount of data to inspect in a client post message. It is not a session based flow depth. a value of 0 causes Snort to inspect all HTTP server payloads defined in ”ports” (note that this will likely slow down IDS performance). If less than flow depth bytes are in the TCP payload (HTTP request) of the first packet. It has a default value of 300. This abides by the Unicode standard and only uses % encoding. The value can be set from -1 to 65495. A value of -1 causes Snort to ignore all client side traffic for ports defined in ”ports. ascii <yes|no> The ascii decode option tells us whether to decode encoded ASCII chars. Rules that are meant to inspect data in the payload of the HTTP response packets in a session beyond 65535 bytes will be ineffective unless flow depth is set to 0. A value of -1 causes Snort to ignore all server side traffic for ports defined in ports when extended response inspection is turned off. Headers are usually under 300 bytes long. Inversely. utf 8 <yes|no> The utf-8 decode option tells HTTP Inspect to decode standard UTF-8 Unicode sequences that are in the URI. It is suggested to set the server flow depth to its maximum value. make sure you have this option turned on. As for alerting. If less than flow depth bytes are in the payload of the HTTP response packets in a given session. It primarily eliminates Snort from inspecting larger HTTP Cookies that appear at the end of many client request Headers. This option is turned off by default and is not supported with any of the profiles. but this will be prone to false positives as legitimate web clients use this type of encoding.” Inversely. value of -1 causes Snort to ignore the HTTP response body data and not the HTTP headers. The default value for server flow depth is 300. It is suggested to set the server flow depth to its maximum value. How the %u encoding scheme works is as follows: the encoding scheme is started by a %u followed by 4 characters. So it is most likely someone trying to be covert. a user may not want to see null bytes in the request URI and we can alert on that. this option will not work.jp/˜ shikap/patch/spp\_http\_decode. double decode <yes|no> The double decode option is once again IIS-specific and emulates IIS functionality. It’s flexible.>]} This option lets users receive an alert if certain non-RFC chars are used in a request URI. otherwise. so ASCII is also enabled to enforce correct decoding. multi slash <yes|no> This option normalizes multiple slashes in a row. You should alert on the iis unicode option. etc. If no iis unicode map is specified before or after this option. 19. In the second pass.. When double decode is enabled. This value can most definitely be ASCII. The xxxx is a hex-encoded value that correlates to an IIS Unicode codepoint.patch. If %u encoding is enabled. %u002e = . bare byte <yes|no> Bare byte encoding is an IIS trick that uses non-ASCII characters as valid values when decoding UTF-8 values. ASCII. the default codemap is used. because we are not aware of any legitimate clients that use this encoding. and %u. You have to use the base36 option with the utf 8 option. because base36 won’t work. An ASCII character is encoded like %u002f = /. base36 <yes|no> This is an option to decode base36 encoded chars.” 23. 22. and then UTF-8 is decoded in the second stage. u encode <yes|no> This option emulates the IIS %u encoding scheme. For instance.15. Anyway. You should alert on %u encodings. doing decodes in each one. When base36 is enabled. 21. because you could configure it to say.” If you want an alert when multiple slashes are seen. Please use this option with care.. 18. ASCII and UTF-8 decoding are also enabled to enforce correct decoding. This option is based on info from:. iis backslash <yes|no> Normalizes backslashes to slashes. and %u.. because there are no legitimate clients that encode UTF-8 this way since it is non-standard. To alert on UTF-8 decoding. Bare byte encoding allows the user to emulate an IIS server and interpret non-standard encodings correctly. This is again an IIS emulation. In the first pass. directory <yes|no> This option normalizes directory traversals and self-referential directories.yk. this is really complex and adds tons of different encodings for one character. as all non-ASCII values have to be encoded with a %. so be careful. If there is no iis unicode map option specified with the server config.or. 16. iis unicode <yes|no> The iis unicode option turns on the Unicode codepoint mapping. So a request URI of “/foo\bar” gets normalized to “/foo/bar. it seems that all types of iis encoding is done: utf-8 unicode. 17. bare byte. because it is seen mainly in attacks and evasion attempts. non rfc char {<byte> [<byte . alert on all ‘/’ or something like that. How this works is that IIS does two passes through the request URI. the following encodings are done: ASCII. The alert on this decoding should be enabled. bare byte. iis unicode uses the default codemap. Don’t use the %u option. so something like: “foo/////////bar” get normalized to “foo/bar. like %uxxxx. ASCII encoding is also enabled to enforce correct behavior. use no. The directory: 63 . you must enable also enable utf 8 yes. then configure with a yes. The iis unicode option handles the mapping of non-ASCII codepoints that the IIS server accepts and decodes normal UTF-8 requests. We leave out utf-8 because I think how this works is that the % encoded utf-8 is decoded to the Unicode byte in the first pass.rim. When iis unicode is enabled. 20. This is not in the HTTP standard. since some web sites refer to files using directory traversals. and is a performance enhancement if needed. oversize dir length <non-zero positive integer> This option takes a non-zero positive integer as an argument.. apache whitespace <yes|no> This option deals with the non-RFC standard of using tab for a space delimiter. 27. The non strict option assumes the URI is between the first and second space even if there is no valid HTTP identifier after the second space. then this option does nothing. If the proxy alert keyword is not enabled. Alerts on this option may be interesting. If a url directory is larger than this argument size. chunk length <non-zero positive integer> This option is an anomaly detector for abnormally large chunk sizes. pipeline requests are inspected for attacks. no pipeline req This option turns HTTP pipeline decoding off. but may also be false positive prone./bar gets normalized to: /foo/bar If you want to configure an alert. pipeline requests are not decoded and analyzed per HTTP protocol field. But you can still get an alert on this option. otherwise. an alert is generated. so if the emulated web server is Apache. No argument is specified. This alert may give false positives. allow proxy use By specifying this keyword. non strict This option turns on non-strict URI parsing for the broken way in which Apache servers will decode a URI. 30. specify no. 31. 24. 26. specify yes. the user is allowing proxy use on this server. This means that no alert will be generated if the proxy alert global keyword has been used.html alsjdfk alsj lj aj la jsj s\n”. enable this option. but Apache takes this non-standard delimiter was well. Since this is common. Apache uses this. Only use this option on servers that will accept URIs like this: ”get /index. we always take this as standard since the most popular web servers accept it. This picks up the Apache chunk encoding exploits. This has no effect on HTTP rules in the rule set. but when this option is enabled. no alerts This option turns off all alerts that are generated by the HTTP Inspect preprocessor module. iis delimiter <yes|no> This started out being IIS-specific. like whisker -i 4. and may also alert on HTTP tunneling that uses chunk encoding. 64 ./bar gets normalized to: /foo/bar The directory: /foo/. This should limit the alerts to IDS evasion type attacks. It is only inspected with the generic pattern matching. 28. The allow proxy use keyword is just a way to suppress unauthorized proxy use for an authorized server. By default./foo/fake\_dir/. A good argument value is 300 characters. 29. The argument specifies the max char directory length for URL directory. 25. The integer is the maximum number of HTTP client request header fields.32. directory. then there is nothing to inspect.htm http/1. webroot <yes|no> This option generates an alert when a directory traversal traverses past the web server root directory. content: "foo". So if you need extra performance. or ”utf-32be”. Whether this option is on or not. inspect uri only This is a performance optimization. It only alerts when the directory traversals go past the web server root directory. because it doesn’t alert on directory traversals that stay within the web server directory structure. which is associated with certain web attacks. max header length <positive integer up to 65535> This option takes an integer as an argument. It’s important to note that if this option is used without any uricontent rules. generating an alert if the extra bytes are non-zero. It is useful for normalizing data in HTTP Cookies that may be encoded. It is useful for normalizing Referrer URIs that may appear in the HTTP Header. tab uri delimiter This option turns on the use of the tab character (0x09) as a delimiter for a URI. This alert is off by default. not including Cookies (using the same configuration parameters as the URI normalization (ie. ”utf-16be”. enable this optimization. 35. a tab in the URI should be treated as any other character. To enable. Requests that exceed this length will cause a ”Long Header” alert. 39. 65 . The integer is the maximum length allowed for an HTTP client request header field. specify an integer argument to max header length of 1 to 65535. you’ll catch most of the attacks. ) and the we inspect the following URI: get /foo. normalize utf This option turns on normalization of HTTP response bodies where the Content-Type header lists the character set as ”utf-16le”. Specifying a value of 0 is treated as disabling the alert. normalize headers This option turns on normalization for HTTP Header Fields. Apache accepts tab as a delimiter. multi-slash. max headers <positive integer up to 1024> This option takes an integer as an argument. To enable. HTTP Inspect will attempt to normalize these back into 8-bit encoding.. 36. For example. 33. When enabled. and if there are none available. if we have the following rule set: alert tcp any any -> any 80 ( msg:"content".). Specifying a value of 0 is treated as disabling the alert. As this field usually contains 90-95% of the web attacks. only the URI portion of HTTP requests will be inspected for attacks. etc. The alert is off by default. 37. specify an integer argument to max headers of 1 to 1024. This generates much fewer false positives than the directory option. IIS does not. etc.). ”utf-32le”. multi-slash. Requests that contain more HTTP Headers than this value will cause a ”Max Header” alert. then no inspection will take place.0\r\n\r\n No alert will be generated when inspect uri only is enabled. 38. This is obvious since the URI is only inspected with uricontent rules. No argument is specified. 34. directory. For IIS. a tab is treated as whitespace if a space character (0x20) precedes it. http methods {cmd[cmd]} This specifies additional HTTP Request Methods outside of those checked by default within the preprocessor (GET and POST). It saves state between individual packets. SMTP handles stateless and stateful processing. and TLS data. Given a data buffer. The list should be enclosed within braces and delimited by spaces. http_methods { PUT CONNECT } ! △NOTE Please note the maximum length for a method name is 7 Examples preprocessor http_inspect_server: \ server 10. line feed or carriage return. SMTP will decode the buffer and find SMTP commands and responses. It will also mark the command. However maintaining correct state is dependent on the reassembly of the client side of the stream (ie.1. The config option.2.1. data header data body sections. 66 .40. tabs. braces and methods also needs to be separated by braces.7 SMTP Preprocessor The SMTP preprocessor is an SMTP decoder for user applications. a loss of coherent stream data results in a loss of state). no alerts Turn off all alerts for this preprocessor. such as port and inspection type. 4. Normalization checks for more than one space character after a command. 2. for encrypted SMTP. ignore tls data Ignore TLS-encrypted data when processing rules. In addition. 12. Absence of this option or a ”0” means never alert on data header line length. } This specifies on what ports to check for SMTP data. alt max command line len <int> { <cmd> [<cmd>] } Overrides max command line len for specific commands. 11. which improves performance. Space characters are defined as space (ASCII 0x20) or tab (ASCII 0x09). normalize <all | none | cmds> This turns on normalization. 10.Configuration SMTP has the usual configuration items. . RFC 2821 recommends 512 as a maximum command line length. this is relatively safe to do and can improve the performance of data inspection. Absence of this option or a ”0” means never alert on command line length. max command line len <int> Alert if an SMTP command line is longer than this value. The configuration options are described below: 1. 9. Absence of this option or a ”0” means never alert on response line length. invalid cmds { <Space-delimited list of commands> } Alert if this command is sent from client side. 8.. ports { <port> [<port>] . 6. valid cmds { <Space-delimited list of commands> } List of valid commands. Since so few (none in the current snort rule set) exploits are against mail data. regular mail data can be ignored for an additional performance boost. We do not alert on commands in this list. Default is an empty list. 3. SMTP command lines can be normalized to remove extraneous spaces. max header line len <int> Alert if an SMTP DATA header line is longer than this value. RFC 2821 recommends 512 as a maximum response line length. Also.. TLS-encrypted traffic can be ignored. 7. this will include 25 and possibly 465. Default is an empty list. inspection type <stateful | stateless> Indicate whether to operate in stateful or stateless mode. RFC 2821 recommends 1024 as a maximum data header line length. ignore data Ignore data section of mail (except for mail headers) when processing rules. max response line len <int> Alert if an SMTP response line is longer than this value. cmds just checks commands listed with the normalize cmds parameter. 5. all checks all commands none turns off normalization for all commands. Typically. print cmds List all commands understood by the preprocessor. When stateful inspection is turned on the base64 encoded MIME attachments/data across multiple packets are decoded too. 14.5. Drop if alerted.13. max mime mem <int> This option determines (in bytes) the maximum amount of memory the SMTP preprocessor will use for decoding base64 encode MIME attachments/data. Multiple base64 encoded MIME attachments/data in one packet are pipelined. This value can be set from 3276 bytes to 100MB. The decoding of base64 encoded attachments/data ends when either the max mime depth or maximum MIME sessions (calculated using max mime depth and max mime mem) is reached or when the encoded data ends. This not normally printed out with the configuration because it can print so much data. This is useful when specifying the max mime depth and max mime mem in default policy without turning on the SMTP preprocessor. Note: It is suggested to set this value such that the max mime session calculated as follows is atleast 1. See 3. 18. The decoded data is available for detection using the rule option file data:mime. enable mime decoding Enables Base64 decoding of Mime attachments/data. Default is enable. 17. 16.24 rule option for more details. This option along with max mime depth determines the base64 encoded MIME/SMTP sessions that will be decoded at any given instant. disabled Disables the SMTP preprocessor in a policy. alert unknown cmds Alert if we don’t recognize command. max mime depth <int> Specifies the maximum number of base64 encoded data to decode per SMTP session. The default value for this in snort in 1460 bytes. xlink2state { enable | disable [drop] } Enable/disable xlink2state alert. The option take values ranging from 5 to 20480 bytes. . The default value for this option is 838860. 15. 20. Default is off. max mime session = max mime mem /(max mime depth + max decoded bytes) max decoded bytes = (max mime depth/4)*3 Also note that these values for max mime mem and max mime depth need to be same across all policy. normalize cmds { <Space-delimited list of commands> } Normalize this list of commands Default is { RCPT VRFY EXPN }. 19. For the preprocessor configuration.6). FTP/Telnet has a very “rich” user configuration. they are referred to as RCPT and MAIL. FTP/Telnet will decode the stream. and FTP Server. FTP/Telnet works on both client requests and server responses. there are four areas of configuration: Global. } \ } \ HELO ETRN } \ VRFY } 2. identifying FTP commands and responses and Telnet escape sequences and normalize the fields. The presence of the option indicates the option itself is on.8 FTP/Telnet Preprocessor FTP/Telnet is an improvement to the Telnet decoder and provides stateful inspection capability for both FTP and Telnet data streams. which should allow the user to emulate any type of FTP server or FTP Client. FTP/Telnet has the capability to handle stateless processing. while the yes/no argument applies to the alerting functionality associated with that option. meaning it looks for information and handles reassembled data correctly. The default is to run FTP/Telnet in stateful inspection mode. Within FTP/Telnet. Within the code.2. FTP Client. Users can configure individual FTP servers and clients with a variety of options. similar to that of HTTP Inspect (See 2. ! △NOTE Some configuration options have an argument of yes or no. The following example gives the generic global configuration format: 69 . Telnet. This argument specifies whether the user wants the configuration option to generate a ftptelnet alert or not.2. the preprocessor actually maps RCPT and MAIL to the correct command name. meaning it only looks for information on a packet-bypacket basis. respectively. Global Configuration The global configuration deals with configuration options that determine the global functioning of FTP/Telnet. and subsequent instances will override previously set values. whereas in stateful mode. a particular session will be noted as encrypted and not inspected any further. 2. 3.. Configuration 1. check encrypted Instructs the preprocessor to continue to check an encrypted session for a subsequent command to cease encryption. ! △NOTE When inspection type is in stateless mode. The FTP/Telnet global configuration must appear before the other three areas of configuration. checks for encrypted traffic will occur on every packet.. you’ll get an error if you try otherwise. inspection type This indicates whether to operate in stateful or stateless mode. detect anomalies In order to support certain options. The detect anomalies option enables alerting on Telnet SB without the corresponding SE. so adding port 22 will only yield false positives. Example IP specific FTP Server Configuration preprocessor _telnet_protocol: \ ftp server 10. ports {<port> [<port>< . Most of your FTP servers will most likely end up using the default configuration. Being that FTP uses the Telnet protocol on the control connection. subnegotiation begins with SB (subnegotiation begin) and must end with an SE (subnegotiation end). Telnet supports subnegotiation. It functions similarly to its predecessor. 4.1. certain implementations of Telnet servers will ignore the SB without a corresponding SE. Configuration by IP Address This format is very similar to “default”..1. it is also susceptible to this behavior. Example Default FTP Server Configuration preprocessor ftp_telnet_protocol: \ ftp server default ports { 21 } Refer to 73 for the list of options set in default ftp server configuration.1 ports { 21 } ftp_cmds { XPWD XCWD } 71 . >]} This is how the user configures which ports to decode as telnet traffic. It is only applicable when the mode is stateful.. ayt attack thresh < number > This option causes the preprocessor to alert when the number of consecutive telnet Are You There (AYT) commands reaches the number specified. This is anomalous behavior which could be an evasion case.Configuration 1. the only difference being that specific IPs can be configured. 3. the telnet decode preprocessor. 2. However. Per the Telnet RFC. Rules written with ’raw’ content options will ignore the normalized buffer that is created when this option is in use. Typically port 23 will be included.. SSH tunnels cannot be decoded. Default This configuration supplies the default server configuration for any FTP server that is not individually configured. normalize This option tells the preprocessor to normalize the telnet traffic by eliminating the telnet escape sequences. Typically port 21 will be included.. For example the USER command – usernames may be no longer than 16 bytes. optional value enclosed within [] 72 string host port long host port extended host port {}. It can be used as a basic buffer overflow detection. fmt must be enclosed in <>’s and may contain the following: Value int number char <chars> date <datefmt> Description Parameter must be an integer Parameter must be an integer between 1 and 255 Parameter must be a single character. >]} This is how the user configures which ports to decode as FTP command channel traffic. [] . 5. alt max param len <number> {cmd[cmd]} This specifies the maximum allowed parameter length for the specified FTP command(s). ports {<port> [<port>< . outside of the default FTP command set as specified in RFC 959. It can be used as a more specific buffer overflow detection. per RFC 959 Parameter must be a long host port specified. | {}. + . this option causes the preprocessor to print the configuration for each of the FTP commands for this server. This option specifies a list of additional commands allowed by this server. separated by | One of the choices enclosed within {}. as well as any additional commands as needed.. def max param len <number> This specifies the default maximum allowed parameter length for an FTP command. per RFC 2428 One of choices enclosed within. ftp cmds {cmd[cmd]} The preprocessor is configured to alert when it sees an FTP command that is not allowed by the server.literal Parameter is a string (effectively unrestricted) Parameter must be a host/port specified. This may be used to allow the use of the ’X’ commands identified in RFC 775. chk str fmt {cmd[cmd]} This option causes a check for string format attacks in the specified commands. where: n Number C Character [] optional format enclosed | OR {} choice of options . 2. cmd validity cmd < fmt > This option specifies the valid format for parameters of a given command.FTP Server Configuration Options 1. print cmds During initialization. For example: ftp_cmds { XPWD XCWD XCUP XMKD XRMD } 4. so the appropriate configuration would be: alt_max_param_len 16 { USER } 6. 3. per RFC 1639 Parameter must be an extended host port specified. 7. one of <chars> Parameter follows format specified. It can be used to improve performance. If your rule set includes virus-type rules. per RFC 959 and others performed by the preprocessor. it is recommended that this option not be used.Examples of the cmd validity option are shown below. certain FTP servers accept MDTM commands that set the modification time on a file.uuu]. data chan This option causes the rest of snort (rules. While not part of an established standard. # This allows additional modes. accept a format using YYYYMMDDHHmmss[. ignore telnet erase cmds <yes|no> This option allows Snort to ignore telnet escape sequences for erase character (TNC EAC) and erase line (TNC EAL) when normalizing FTP command channel. especially with large file transfers from a trusted source. cmd_validity MODE < char ASBCZ > # Allow for a date in the MDTM command.n[n[n]]] ] string > MDTM is an off case that is worth discussing. 9. Some FTP servers do not process those telnet escape sequences. 10. Injection of telnet escape sequences could be used as an evasion attempt on an FTP command channel. ”data chan” will be removed in a future release. Some others accept a format using YYYYMMDDHHmmss[+—]TZ format.org/internetdrafts/draft-ietf-ftpext-mlst-16. The example above is for the first case (time format as specified in. use the following: cmd_validity MDTM < [ date nnnnnnnnnnnnnn[{+|-}n[n]] ] string > 8. Use of the ”data chan” option is deprecated in favor of the ”ignore data chan” option. ignore data chan <yes|no> This option causes the rest of Snort (rules. FTP commands are added to the set of allowed commands. The most common among servers that do.ietf. 73 . FTP Server Base Configuration Options The base FTP server configuration is as follows. including mode Z which allows for # zip-style compression. Options specified in the configuration file will modify this set of options.. The other options will override those in the base configuration. 11. It can be used to improve performance. telnet cmds <yes|no> This option turns on detection and alerting when telnet escape sequences are seen on the FTP command channel.txt) To check validity for a server that uses the TZ format. Using this option means that NO INSPECTION other than TCP state will be performed on FTP data transfers. cmd_validity MDTM < [ date nnnnnnnnnnnnnn[. especially with large file transfers from a trusted source. it is recommended that this option not be used. These examples are the default checks. If your rule set includes virus-type rules. other preprocessors) to ignore FTP data channel connections. Setting this option to ”yes” ports {<port> [<port>< . >]} This option specifies the source ports that the DNS preprocessor should inspect traffic. After max encrypted packets is reached. if the presumed server generates client traffic.2. enable rdata overflow Check for DNS Client RData TXT Overflow 77 . enable obsolete types Alert on Obsolete (per RFC 1035) Record Types 3. If Challenge-Response Overflow or CRC 32 false positive. enable experimental types Alert on Experimental (per RFC 1035) Record Types 4. the preprocessor will stop processing traffic for a given session. 10. Example Configuration from snort.10 DNS The DNS preprocessor decodes DNS Responses and can detect the following exploits: DNS Client RData Overflow. preprocessor ssh: \ server_ports { 22 } \ max_client_bytes 19600 \ max_encrypted_packets 20 \ enable_respoverflow \ enable_ssh1crc32 2. 11. enable badmsgdir Enable alerts for traffic flowing the wrong direction. try increasing the number of required client bytes with max client bytes. Configuration By default.. The SSH preprocessor should work by default. or if a client generates server traffic. The available configuration options are described below. Obsolete Record Types. enable protomismatch Enables checking for the Protocol Mismatch exploit. 2. all alerts are disabled and the preprocessor checks traffic on port 53. 1. Alerts at 19600 unacknowledged bytes within 20 encrypted packets for the Challenge-Response Overflow/CRC32 exploits. 12. enable paysize Enables alerts for invalid payload sizes. and Experimental Record Types.9. DNS looks at DNS Response traffic over UDP and TCP and it requires Stream preprocessor to be enabled for TCP decoding. For instance.conf Looks for attacks on SSH server port 22.. enable recognition Enable alerts for non-SSH traffic on SSH ports. This requires the noinspect encrypted option to be useful. and that the traffic is legitimately encrypted. SSL is used over port 443 as HTTPS. By default.conf Looks for traffic on DNS server port 53. Use this option for slightly better performance if you trust that your servers are not compromised. By enabling the SSLPP to inspect port 443 and enabling the noinspect encrypted option. By default. Examples/Default Configuration from snort. Do not alert on obsolete or experimental RData record types. It will not operate on TCP sessions picked up midstream. Typically. ports {<port> [<port>< . Check for the DNS Client RData overflow vulnerability. In some cases. noinspect encrypted Disable inspection on traffic that is encrypted. Once the traffic is determined to be encrypted. especially when packets may be missed.2. Default is off. The SSL Dynamic Preprocessor (SSLPP) decodes SSL and TLS traffic and optionally determines if and when Snort should stop inspection of it. such as the handshake.11 SSL/TLS Encrypted traffic should be ignored by Snort for both performance reasons and to reduce false positives. the user should use the ’trustservers’ option.. Verifying that faultless encrypted traffic is sent from both endpoints ensures two things: the last client-side handshake packet was not crafted to evade Snort. Therefore. preprocessor dns: \ ports { 53 } \ enable_rdata_overflow 2. Default is off.The DNS preprocessor does nothing if none of the 3 vulnerabilities it checks for are enabled... the session is not marked as encrypted. only the SSL handshake of each connection will be inspected. If one side responds with an indication that something has failed. documented below. trustservers Disables the requirement that application (encrypted) data must be observed on both sides of the session before a session is marked encrypted. if a user knows that server-side encrypted data can be trusted to mark the session as encrypted. >]} This option specifies which ports SSLPP will inspect traffic on. Configuration 1. the only observed response from one endpoint will be TCP ACKs. no further inspection of the data on the connection is made. and it will cease operation on a session if it loses state because of missing data (dropped packets). 3. SSLPP looks for a handshake followed by encrypted traffic traveling to both sides. 78 . ssl_state:client_keyx. The list of version identifiers are below. The list of states are below.2" Examples ssl_version:sslv3. ssl_state:!server_hello. More than one state can be specified. via a comma separated list. ssl state The ssl state rule option tracks the state of the SSL encryption during the process of hello and key exchange.conf Enables the SSL preprocessor and tells it to disable inspection on encrypted traffic. Lists of identifiers are OR’ed together.0" | "tls1. The option will match if any one of the OR’ed versions are used in the SSL connection. state-list state = ["!"] "client_hello" | "server_hello" | "client_keyx" | "server_keyx" | "unknown" Examples ssl_state:client_hello.tls1. 79 . ssl_version:tls1. version-list version = ["!"] "sslv2" | "sslv3" | "tls1. and are OR’ed together. Syntax ssl_version: <version-list> version-list = version | version .1..0. and more than one identifier can be specified.2. multiple ssl version rule options should be used. ssl_version:!sslv2. To ensure the connection has reached each of a set of states.1" | "tls1. Syntax ssl_state: <state-list> state-list = state | state . To check for two or more SSL versions in use simultaneously. via a comma separated list.tls1. multiple rules using the ssl state rule option should be used. The option will match if the connection is currently in any one of the OR’ed states.server_keyx.Examples/Default Configuration from snort. 168.1 and 192.13 DCE/RPC 2 Preprocessor The main purpose of the preprocessor is to perform SMB desegmentation and DCE/RPC defragmentation to avoid rule evasion using these techniques.168. Write AndX.1 proxy and server.168. Specify a pair of IP and hardware address as the argument to arpspoof detect host. When no arguments are specified to arpspoof.2. The following transports are supported for DCE/RPC: SMB.40.1 f0:0f:00:f0:0f:00 preprocessor arpspoof_detect_host: 192.40. an alert with GID 112 and SID 2 or 3 is generated.168. The preprocessor will use this list when detecting ARP cache overwrite attacks.1 f0:0f:00:f0:0f:00 preprocessor arpspoof_detect_host: 192. SMB desegmentation is performed for the following commands that can be used to transport DCE/RPC requests and responses: Write. the preprocessor checks for unicast ARP requests.2 f0:0f:00:f0:0f:01 The third example configuration has unicast detection enabled. and inconsistent Ethernet to IP mapping.2.168.40. Read Block Raw and Read AndX. When inconsistency occurs. preprocessor arpspoof preprocessor arpspoof_detect_host: 192. When ”-unicast” is specified as the argument of arpspoof. unicast ARP requests. Read. Write Block Raw. Specify one host IP MAC combo per line. The preprocessor merely looks for Ethernet address inconsistencies. preprocessor arpspoof The next example configuration does not do unicast detection but monitors ARP mapping for hosts 192. TCP.40.40. 80 . Transaction. The Ethernet address corresponding to the preceding IP. Alert SID 4 is used in this case.168. New rule options have been implemented to improve performance.2. Example Configuration The first example configuration does neither unicast detection nor ARP mapping monitoring.40. The host with the IP address should be on the same layer 2 segment as Snort is.12 ARP Spoof Preprocessor The ARP spoof preprocessor decodes ARP packets and detects ARP attacks. Format preprocessor arpspoof[: -unicast] preprocessor arpspoof_detect_host: ip mac Option ip mac Description IP address. Transaction Secondary.2. preprocessor arpspoof: -unicast preprocessor arpspoof_detect_host: 192. the preprocessor inspects Ethernet addresses and the addresses in the ARP packets. Write and Close. UDP and RPC over HTTP v. reduce false positives and reduce the count and complexity of DCE/RPC based rules. An alert with GID 112 and SID 1 will be generated if a unicast ARP request is detected.2 f0:0f:00:f0:0f:01 2. e. Accepted SMB commands Samba in particular does not recognize certain commands under an IPC$ tree.22 in that deleting the UID or TID used to create the named pipe instance also invalidates it. Both the UID and TID used to open the named pipe instance must be used when writing data to the same named pipe instance. i.e. stream5. since it is necessary in making a request to the named pipe. Windows 2000 Windows 2000 is interesting in that the first request to a named pipe must use the same binding as that of the other Windows versions. It also follows Samba greater than 3. deleting either the UID or TID invalidates the FID. no binding. requests after that follow the same binding as Samba 3.0. Windows 2003 Windows XP Windows Vista These Windows versions require strict binding between the UID. Samba (all versions) Under an IPC$ tree. Samba greater than 3. the FID becomes invalid. does not accept: 81 . along with a valid FID can be used to make a request. no more requests can be written to that named pipe instance. i. Therefore. the dcerpc2 preprocessor will enable stream reassembly for that session if necessary. i. However. i. Some important differences: Named pipe instance tracking A combination of valid login handle or UID. the frag3 preprocessor should be enabled and configured.Dependency Requirements For proper functioning of the preprocessor: • Stream session tracking must be enabled. Target Based There are enough important differences between Windows and Samba versions that a target based approach has been implemented. however. the FID that was created using this TID becomes invalid. either through configured ports.22 Any valid TID. if the TID used in creating the FID is deleted (via a tree disconnect). the FID that was created using this TID becomes invalid. The binding between these is dependent on OS/software version.22 and earlier. • Stream reassembly must be performed for TCP sessions.0. Samba 3. share handle or TID and file/named pipe handle or FID must be used to write data to a named pipe.0. servers or autodetecting. only the UID used in opening the named pipe can be used to make a request using the FID handle to the named pipe instance.e. The preprocessor requires a session tracker to keep its data. If the TID used to create the FID is deleted (via a tree disconnect). • IP defragmentation should be enabled. TID and FID used to make a request to a named pipe instance. If the UID used to create the named pipe instance is deleted (via a Logoff AndX).e. no more requests can be written to that named pipe instance.0.e.22 and earlier Any valid UID and TID. i. If it is decided that a session is SMB or DCE/RPC. along with a valid FID can be used to make a request. However. all previous interface bindings are invalidated. we don’t want to keep track of data that the server won’t accept. What distinguishes them (when the same named pipe is being written to. An evasion possibility would be accepting a fragment in a request that the server won’t accept that gets sandwiched between an exploit. Ultimately. login/logoff and tree connect/tree disconnect. e. having the same FID) are fields in the SMB header representing a process id (PID) and multiplex id (MID). Any binding after that must use the Alter Context request. Samba 3. Windows (all versions) For all of the Windows versions. Samba (all versions) Uses just the MID to define a ”thread”. Samba. If a Bind after a successful Bind is made. whereas in using the Write* commands.e. only one Bind can ever be made on a session whether or not it succeeds or fails.0.20 and earlier Any amount of Bind requests can be made. These requests can also be segmented with Transaction Secondary commands.20 Another Bind request can be made if the first failed and no interfaces were successfully bound to. AndX command chaining Windows is very strict in what command combinations it allows to be chained.Context ID 82 . data is written to the named pipe as it is received by the server. The PID represents the process this request is a part of. An MID represents different sub-processes within a process (or under a PID). Multiple Bind Requests A Bind request is the first request that must be made in a connection-oriented DCE/RPC session in order to specify the interface/interfaces that one wants to communicate with. all previous interface bindings are invalidated.g. i.Open Write And Close Read Read Block Raw Write Block Raw Windows (all versions) Accepts all of the above commands under an IPC$ tree. Multiple Transaction requests can be made simultaneously to the same named pipe.0. Segments for each ”thread” are stored separately and written to the named pipe when all segments are received. Samba later than 3. It is necessary to track this so as not to munge these requests together (which would be a potential evasion opportunity). multiple logins and tree connects (only one place to return handles for these). DCE/RPC Fragmented requests . Windows (all versions) Uses a combination of PID and MID to define a ”thread”. is very lax and allows some nonsensical combinations. on the other hand. Transaction tracking The differences between a Transaction request and using one of the Write* commands to write data to a named pipe are that (1) a Transaction performs the operations of a write and a read from the named pipe. the client has to explicitly send one of the Read* requests to tell the server to send the response and (2) a Transaction request is not written to the named pipe until all of the data is received (via potential Transaction Secondary requests) whereas with the Write* commands. If another Bind is made. DCE/RPC Fragmented requests . DCE/RPC Stub data byte order The byte order of the stub data is determined differently for Windows and Samba.Operation number Each fragment in a fragmented request carries an operation number (opnum) which is more or less a handle to a function offered by the interface. Samba (all versions) The context id that is ultimately used for the request is contained in the last fragment. Windows Vista The opnum that is ultimately used for the request is contained in the first fragment. Samba (all versions) Windows 2000 Windows 2003 Windows XP The opnum that is ultimately used for the request is contained in the last fragment. The context id field in any other fragment can contain any value. Windows (all versions) The byte order of the stub data is that which was used in the Bind request.. The context id .Each fragment in a fragmented request carries the context id of the bound interface it wants to make the request to. The opnum field in any other fragment can contain any value. Windows (all versions) The context id that is ultimately used for the request is contained in the first fragment. The opnum field in any other fragment can contain any value. Run-time memory includes any memory allocated after configuration.’ event-list "memcap" | "smb" | "co" | "cl" 0-65535 Option explanations memcap Specifies the maximum amount of run-time memory that can be allocated. in effect. By default this value is turned off. Default is 100 MB. Default is disabled. Default is to do defragmentation. disable defrag Tells the preprocessor not to do DCE/RPC defragmentation. events Specifies the classes of events to enable. If a fragment is greater than this size. Default is set to -1. co. reassemble threshold Specifies a minimum number of bytes in the DCE/RPC desegmentation and defragmentation buffers before creating a reassembly packet to send to the detection engine. co Stands for connection-oriented DCE/RPC. (See Events section for an enumeration and explanation of events. The allowed range for this option is 1514 . Option examples memcap 30000 max_frag_len 16840 events none events all events smb events co events [co] events [smb.memcap max-frag-len events pseudo-event event-list event re-thresh = = = = = = = 1024-4194303 (kilobytes) 1514-65535 pseudo-event | event | ’[’ event-list ’]’ "none" | "all" event | event ’. When the preprocessor is disabled only the memcap option is applied when specified with the configuration. smb Alert on events related to SMB processing. max frag len Specifies the maximum fragment size that will be added to the defragmention module. Alert on events related to connectionless DCE/RPC processing. cl Stands for connectionless DCE/RPC. disable this option. disabled Disables the preprocessor. A value of 0 supplied as an argument to this option will. This option is useful in inline mode so as to potentially catch an exploit early before full defragmentation is done. it is truncated before being added to the defragmentation module.65535. co] events [memcap. If the memcap is reached or exceeded. alert. Alert on events related to connection-oriented DCE/RPC processing. cl] reassemble_threshold 500 84 .) memcap Only one event. smb. the default configuration is used if no net configurations match. At most one default configuration can be specified. When processing DCE/RPC traffic. The net option supports IPv6 addresses. co. rpc-over-http-server 593] autodetect [tcp 1025:.0. if non-required options are not specified. Note that port and ip variables defined in snort.. udp 135. The default and net options are mutually exclusive. tcp 135. cl]. A dcerpc2 server configuration must start with default or net options. Option syntax Option default net policy detect Argument NONE <net> <policy> <detect> Required YES YES NO NO Default NONE NONE policy WinXP detect [smb [139. If a net configuration matches. For any dcerpc2 server configuration. memcap 300000.0. If no default configuration is specified.’ ’. max_frag_len 14440 disable_defrag. Zero or more net configurations can be specified. events [memcap. events [memcap. default values will be used for the default configuration.20" "none" | detect-opt | ’[’ detect-list ’]’ detect-opt | detect-opt ’.’ port-list port | port-range ’:’ port | port ’:’ | port ’:’ port 0-65535 85 . the defaults will be used. it will override the default configuration. smb.’ detect-list transport | transport port-item | transport ’[’ port-list ’]’ "smb" | "tcp" | "udp" | "rpc-over-http-proxy" | "rpc-over-http-server" port-item | port-item ’. events smb memcap 50000.22" | "Samba-3.445]. udp 1025:. A net configuration matches if the packet’s server IP address matches an IP address or net specified in the net configuration.conf CANNOT be used. autodetect Specifies the DCE/RPC transport and server ports that the preprocessor should attempt to autodetect on for the transport. This value can be set from 0 to 255. policy Specifies the target-based policy to use when processing.shares share-list share word var-word max-chain = = = = = = share | ’[’ share-list ’]’ share | share ’. shares with ’$’ must be enclosed quotes. Note that most dynamic DCE/RPC ports are above 1024 and ride directly over TCP or UDP. Default maximum is 3 chained commands. smb max chain Specifies the maximum amount of AndX command chaining that is allowed before an alert is generated. UDP and RPC over HTTP server. net Specifies that this configuration is an IP or net specific configuration. The autodetect ports are only queried if no detect transport/ports match the packet. Option examples 86 . The configuration will only apply to the IP addresses and nets supplied as an argument. 593 for RPC over HTTP server and 80 for RPC over HTTP proxy. Default is to autodetect on RPC over HTTP proxy detect ports.TCP/UDP. smb invalid shares Specifies SMB shares that the preprocessor should alert on if an attempt is made to connect to them via a Tree Connect or Tree Connect AndX.’ ’"’ ’]’ ’[’ ’$’ graphical ASCII characters except ’. The order in which the preprocessor will attempt to autodetect will be . Defaults are ports 139 and 445 for SMB. Default is ”WinXP”. the preprocessor will always attempt to autodetect for ports specified in the detect configuration for rpc-over-http-proxy. RPC over HTTP server. Option explanations default Specifies that this configuration is for the default server configuration. A value of 0 disables this option. Defaults are 1025-65535 for TCP. no autodetect http proxy ports By default. This is because the proxy is likely a web server and the preprocessor should not look at all web traffic. It would be very uncommon to see SMB on anything other than ports 139 and 445. RPC over HTTP proxy and lastly SMB. This option is useful if the RPC over HTTP proxy configured with the detect option is only used to proxy DCE/RPC traffic. detect Specifies the DCE/RPC transport and server ports that should be detected on for the transport. Default is empty. 135 for TCP and UDP.’ ’"’ ’]’ ’[’ 0-255 Because the Snort main parser treats ’$’ as the start of a variable and tries to expand it.’ share-list word | ’"’ word ’"’ | ’"’ var-word ’"’ graphical ASCII characters except ’. 6005:]] smb_invalid_shares private smb_invalid_shares "private" smb_invalid_shares "C$" smb_invalid_shares [private. policy WinVista.0. smb_max_chain 1 preprocessor dcerpc2_server: \ net [10.0/24 net [192. "D$". autodetect [tcp.0.4.0/24. policy WinVista. smb_max_chain 3 Complete dcerpc2 default configuration 87 . \ smb_invalid_shares ["C$". "C$"] smb_invalid_shares ["private".445]. tcp] detect [smb 139.0.0/24.10 net 192.10. \ autodetect [tcp 1025:.4.255. detect smb. "ADMIN$"].4. "C$"] smb_max_chain 1 Configuration examples preprocessor dcerpc2_server: \ default preprocessor dcerpc2_server: \ default.2103]] detect [smb [139. rpc-over-http-proxy 8081].10.11.6002:6004]] autodetect none autodetect tcp autodetect [tcp] autodetect tcp 2025: autodetect [tcp 2025:] autodetect tcp [2025:3001. detect [smb. udp 2025:] autodetect [tcp 2025:. "ADMIN$"] preprocessor dcerpc2_server: net 10. udp] autodetect [tcp 2025:. autodetect tcp 1025:. udp 1025:.168. feab:45b3:ab92:8ac4:d322:007f:e5aa:7845] policy Win2000 policy Samba-3. autodetect none Default server configuration preprocessor dcerpc2_server: default.168. tcp 135.168.57].10.0/24.0/24. \ detect [smb.445].11. rpc-over-http-server [1025:6001. feab:45b3::/32] net [192. tcp.3003:] autodetect [tcp [2025:3001. tcp 135.168.168. tcp].0. \ smb_invalid_shares ["C$".56. policy Win2000 preprocessor dcerpc2_server: \ default. udp 135.0. rpc-over-http-server 593].0/24] net 192. rpc-over-http-server 1025:].6005:]].10. tcp [135.3003:]] autodetect [tcp.445]] detect [smb.10.0.feab:45b3::/126]. policy Win2000. udp.0/255. \ detect [smb [139.168. rpc-over-http-proxy [1025:6001.0. policy Win2000 preprocessor dcerpc2_server: \ net [10.445] detect [smb [139. rpc-over-http-server [593.4. udp 135.net 192.feab:45b3::/126].4. no_autodetect_http_proxy_ports preprocessor dcerpc2_server: \ net [10. policy WinXP.255.22 detect none detect smb detect [smb] detect smb 445 detect [smb 445] detect smb [139. policy Samba. the preprocessor will alert. Memcap events SID 1 Description If the memory cap is reached and the preprocessor is configured to alert. udp 1025:. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command header to be decoded. policy WinXP. Retarget Response (only from server) and Keep Alive. rpc-over-http-server 593]. If a command requires this and the byte count is less than the minimum required byte count for that command. The preprocessor will alert if the format is not that which is expected for that command. If this offset puts us before data that has already been processed or after the end of payload. the preprocessor will alert. SMB events SID 2 Description An invalid NetBIOS Session Service type was specified in the header. smb_max_chain 3 Events The preprocessor uses GID 133 to register events. id of \xfeSMB is turned away before an eventable point is reached. Note that since the preprocessor does not yet support SMB2.) 3 4 5 6 7 8 9 10 11 12 13 14 88 . have a field containing the total amount of data to be transmitted. SMB commands have pretty specific word counts and if the preprocessor sees a command with a word count that doesn’t jive with that command. Some SMB commands.445]. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command data size specified in the command header. Valid types are: Message. such as Transaction. \ autodetect [tcp 1025:. tcp 135. The word count of the command header is invalid. The preprocessor will alert if the NetBIOS Session Service length field contains a value less than the size of an SMB header. The SMB id does not equal \xffSMB. Some commands. If this field is zero. Many SMB commands have a field containing an offset from the beginning of the SMB header to where the data the command is carrying starts. . rpc-over-http-server 1025:]. \ detect [smb [139. udp 135. the preprocessor will alert. especially the commands from the SMB Core implementation require a data format field that specifies the kind of data that will be coming next. An SMB message type was specified in the header. Some commands require a specific format for the data. the preprocessor will alert.preprocessor dcerpc2: memcap 102400 preprocessor dcerpc2_server: \ default. Some commands require a minimum number of bytes after the command header. The preprocessor will alert if the remaining NetBIOS packet length is less than the size of the SMB command byte count specified in the command header. Positive Response (only from server). Either a request was made by the server or a response was given by the client. Negative Response (only from server). Request (only from client). This is anomalous behavior and the preprocessor will alert if it happens. essentially connects to a share and disconnects from the same share in the same request and is anomalous behavior. This is used by the client in subsequent requests to indicate that it has authenticated. This is anomalous behavior and the preprocessor will alert if it happens. it issues a Read* command to the server to tell it to send a response to the data it has written. however. Windows does not allow this behavior. After a client is done writing data using the Write* commands. There is. however. The server response.15 16 17 18 19 20 21 22 23 24 25 26 The preprocessor will alert if the total amount of data sent in a transaction is greater than the total data count specified in the SMB command header. The combination of a Open AndX or Nt Create AndX command with a chained Close command.).) The Close command is used to close that file or named pipe. The combination of a Session Setup AndX command with a chained Logoff AndX command. There should be under normal circumstances no more than a few pending Read* requests at a time and the preprocessor will alert if this number is excessive. The Read* request contains the file id associated with a named pipe instance that the preprocessor will ultimately send the data to. An Open AndX or Nt Create AndX command is used to open/create a file or named pipe. only one place in the SMB header to return a tree handle (or Tid). The preprocessor will alert if the number of chained commands in a single request is greater than or equal to the configured amount (default is 3). essentially logins in and logs off in the same request and is anomalous behavior. With commands that are chained after a Session Setup AndX request. The preprocessor will alert if it sees this. There should be under normal circumstances no more than a few pending tree connects at a time and the preprocessor will alert if this number is excessive. Unlike the Tree Connect AndX response. The preprocessor will alert if the byte count minus a predetermined amount based on the SMB command is not equal to the data size. It looks for a Tree Connect or Tree Connect AndX to the share. only one place in the SMB header to return a login handle (or Uid). Windows does not allow this behavior. so it need to be queued with the request and dequeued with the response. The preprocessor will alert if it sees this. The preprocessor will alert if it sees any of the invalid SMB shares configured. In this case the preprocessor is concerned with the server response. There is. however Samba does. (The byte count must always be greater than or equal to the data size. however Samba does. If multiple Read* requests are sent to the server. 89 . they are responded to in the order they were sent. The combination of a Tree Connect AndX command with a chained Tree Disconnect command.) Some of the Core Protocol commands (from the initial SMB implementation) require that the byte count be some value greater than the data size exactly. The preprocessor will alert if the byte count specified in the SMB command header is less than the data size specified in the SMB command. For the Tree Connect command (and not the Tree Connect AndX command). the login handle returned by the server is used for the subsequent chained commands. (The preprocessor is only interested in named pipes as this is where DCE/RPC requests are written to. The Tree Disconnect command is used to disconnect from that share. essentially opens and closes the named pipe in the same request and is anomalous behavior. The preprocessor will alert if it sees this. does not contain this file id. With AndX command chaining it is possible to chain multiple Session Setup AndX commands within the same request. When a Session Setup AndX request is sent to the server. there is no indication in the Tree Connect response as to whether the share is IPC or not. A Logoff AndX request is sent by the client to indicate it wants to end the session and invalidate the login handle. A Tree Connect AndX command is used to connect to a share. With AndX command chaining it is possible to chain multiple Tree Connect AndX commands within the same request. The preprocessor will alert if the sequence number uses in a request is the same or less than a previously used sequence number on the session. there are no context items specified. The operation number specifies which function the request is calling on the bound interface. The preprocessor will alert if the opnum changes in a fragment mid-request. If a request if fragmented. The preprocessor will alert if a fragment is larger than the maximum negotiated fragment length. The preprocessor will alert if a non-last fragment is less than the size of the negotiated maximum fragment length. The preprocessor will alert if the packet data length is less than the size of the connectionless header. The preprocessor will alert if the connectionless DCE/RPC PDU type is not a valid PDU type. there are no transfer syntaxes to go with the requested interface. The preprocessor will alert if the remaining fragment length is less than the remaining packet size. wrapping the sequence number space produces strange behavior from the server.Connection-oriented DCE/RPC events SID 27 28 29 30 31 32 33 34 Description The preprocessor will alert if the connection-oriented DCE/RPC major version contained in the header is not equal to 5. In testing. so this should be considered anomalous behavior. this number should stay the same for all fragments. Most evasion techniques try to fragment the data as much as possible and usually each fragment comes well below the negotiated transmit size. The preprocessor will alert if the connection-oriented DCE/RPC minor version contained in the header is not equal to 0. The preprocessor will alert if in a Bind or Alter Context request. The preprocessor will alert if in a Bind or Alter Context request. The context id is a handle to a interface that was bound to. It is anomalous behavior to attempt to change the byte order mid-session. 35 36 37 38 39 Connectionless DCE/RPC events SID 40 41 42 43 Description The preprocessor will alert if the connectionless DCE/RPC major version is not equal to 4. The byte order of the request data is determined by the Bind in connection-oriented DCE/RPC for Windows. The preprocessor will alert if the connection-oriented DCE/RPC PDU type contained in the header is not a valid PDU type. The preprocessor will alert if it changes in a fragment mid-request. Rule Options New rule options are supported by enabling the dcerpc2 preprocessor: dce_iface dce_opnum 90 . The preprocessor will alert if the context id changes in a fragment mid-request. this number should stay the same for all fragments. If a request is fragmented. The preprocessor will alert if the fragment length defined in the header is less than the size of the header. The call id for a set of fragments in a fragmented request should stay the same (it is incremented for each complete request). if the any frag option is used to specify evaluating on all fragments. dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. A rule which is looking for data. by default the rule will only be evaluated for a first fragment (or full request. The any frag argument says to evaluate for middle and last fragments as well.e. Optional arguments are an interface version and operator to specify that the version be less than (’<’). Each interface is represented by a UUID. not a fragment) since most rules are written to start at the beginning of a request. will be looking at the wrong data on a fragment other than the first. Each interface UUID is paired with a unique index (or context id) that future requests can use to reference the service that the client is making a call to. any_frag.one for big endian and one for little endian. Some versions of an interface may not be vulnerable to a certain exploit. equal to (’=’) or not equal to (’!’) the version specified. it can. any_frag]. When a client makes a request. It is necessary for a client to bind to a service before being able to make a call to it. This option requires tracking client Bind and Alter Context requests as well as server Bind Ack and Alter Context responses for connection-oriented DCE/RPC in the preprocessor. say 5 bytes into the request (maybe it’s a length field). The representation of the interface UUID is different depending on the endianness specified in the DCE/RPC previously requiring two rules . <operator><version>][. However. This can eliminate false positives where more than one service is bound to successfully since the preprocessor can correlate the bind UUID to the context id used in the request. whether or not the client has bound to a specific interface UUID and whether or not this client request is making a request to it. The server will respond with the interface UUIDs it accepts as valid and will allow the client to make requests to those services. a rule can simply ask the preprocessor. This option is used to specify an interface UUID. The preprocessor eliminates the need for two rules by normalizing the UUID. using this rule option. This can be a source of false positives in fragmented DCE/RPC traffic. since subsequent fragments will contain data deeper into the DCE/RPC request. For each Bind and Alter Context request. specify one or more service interfaces to bind to. i. a DCE/RPC request can be broken up into 1 or more fragments. A DCE/RPC request can specify whether numbers are represented as big endian or little endian. =1. Many checks for data in the DCE/RPC request are only relevant if the DCE/RPC request is a first fragment (or full request). Instead of using flow-bits. Also. Also. however. By default it is reasonable to only evaluate if the request is a first fragment (or full request). a middle or the last fragment. Syntax dce_iface:<uuid>[. <2. An interface contains a version. the client specifies a list of interface UUIDs along with a handle 91 . any_frag. since the beginning of subsequent fragments are already offset some length from the beginning of the request. greater than (’>’). dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. it will specify the context id so the server knows what service the client is making a request to. When a client sends a bind request to the server. dce_iface:4b324fc8-1670-01d3-1278-5a47bf6ee188. Flags (and a field in the connectionless header) are set in the DCE/RPC header to indicate whether the fragment is the first. tracking is required so that when a request is processed. The server response indicates which interfaces it will allow the client to make requests to . |05 00 00| will be inserted into the fast pattern matcher. the version operation is true. As an example. it will unequivocally be used over the above mentioned patterns. opnum-list opnum-item opnum-range opnum = = = = opnum-item | opnum-item ’.dce iface) usually we want to know what function call it is making to that service. If a content in the rule uses the fast pattern rule option. (1) if the rule option flow:to server|from client is used. in both big and little endian format will be inserted into the fast pattern matcher. Syntax dce_opnum:<opnum-list>. ! △NOTE Using this rule option will automatically insert fast pattern contents into the fast pattern matcher. dce opnum The opnum represents a specific function call to an interface.(or context id) for each interface UUID that will be used during the DCE/RPC session to reference.it either accepts or rejects the client’s wish to bind to a certain interface. For UDP rules. the best (meaning longest) pattern will be used. For TCP rules. It is likely that an exploit lies in the particular DCE/RPC function call. the interface UUID. Note that if the rule already has content rule options in it. Note that a defragmented DCE/RPC request will be considered a full request. hexlong and hexshort will be specified and interpreted to be in big endian order (this is usually the default way an interface UUID will be seen and represented).’ opnum-list opnum | opnum-range opnum ’-’ opnum 0-65535 Examples 92 . the context id used in the request can be correlated with the interface UUID it is a handle for. |05 00| will be inserted into the fast pattern matcher. After is has been determined that a client has bound to a specific interface and is making a request to it (see above . (2) if the rule option flow:from server|to client is used. e. dec. <offset> [.4294967295 -65535 to 65535 Examples byte_test:4. relative]. 20-22. 93 . 18-20. [!]<operator>. string. string. opnum range or list containing either or both opnum and/or opnum-range. dce_opnum:15-18. byte jump Syntax byte_jump:<convert>. 35000. the following normal byte jump arguments will not be allowed: big. relative. relative][. regardless of preceding rule options.dce.dce_opnum:15. This reduces the number of rule option checks and the complexity of the rule. The opnum of a DCE/RPC request will be matched against the opnums specified with this option. !=. This option matches if there is DCE/RPC stub data. <offset>[. byte test Syntax byte_test:<convert>. 2280.post_offset -4. dec and oct. dce. but since the DCE/RPC preprocessor will know the endianness of the request. This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of the DCE/RPC stub data. 0. dce. dce. When using the dce argument to a byte jump. i.multiplier 2. hex. the following normal byte test arguments will not be allowed: big. >. dce_opnum:15. There are no arguments to this option. This option matches if any one of the opnums specified match the opnum of the DCE/RPC request. little. align][. dce_opnum:15. hex. These rule options will take as a new argument dce and will work basically the same as the normal byte test/byte jump. post_offet <adjustment_value>]. it will be able to do the correct conversion.-4. little. this option will alleviate this need and place the cursor at the beginning of the DCE/RPC stub data. oct and from beginning. When using the dce argument to a byte test.align. -10. This option takes no arguments. multiplier <mult_value>] \ [.65535 -65535 to 65535 Example byte_jump:4. convert operator value offset = = = = 1 | 2 | 4 (only with option "dce") ’<’ | ’=’ | ’>’ | ’&’ | ’ˆ’ 0 . dce. the remote procedure call or function call data. byte_test:2. 17.relative. Example dce_stub_data. byte test and byte jump with dce A DCE/RPC request can specify whether numbers are represented in big or little endian. convert offset mult_value adjustment_value = = = = 1 | 2 | 4 (only with option "dce") -65535 to 65535 0 . relative. <value>. This option is used to specify an opnum (or operation number). dce stub data Since most netbios rules were doing protocol decoding only to get to the DCE/RPC stub data. dce_stub_data. 94 .1024:] \ (msg:"dns R_Dnssrv funcs2 overflow attempt".1024:] \ (msg:"dns R_Dnssrv funcs2 overflow attempt".{12}(\x00\x00\x00\x00|.) alert udp $EXTERNAL_NET any -> $HOME_NET [135. \ classtype:attempted-admin. Dependencies The Stream5 preprocessor must be enabled for the Sensitive Data preprocessor to work.4.4. \ byte_test:4.dce. This is only done on credit card & Social Security numbers.14 Sensitive Data Preprocessor The Sensitive Data preprocessor is a Snort module that performs detection and filtering of Personally Identifiable Information (PII). and email addresses.dce.to_server.relative.-4. reference:cve. dce_opnum:0-11. reference:bugtraq. This information includes credit card numbers. \ byte_test:4.>.593.139. reference:cve. This should be set higher than the highest individual count in your ”sd pattern” rules. byte_jump:4.2007-1748.23470. This option specifies how many need to be detected before alerting. \ pcre:"/ˆ.align. \ dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076.Example of rule complexity reduction The following two rules using the new rule options replace 64 (set and isset flowbit) rules that are necessary if the new rule options are not used: alert tcp $EXTERNAL_NET any -> $HOME_NET [135.256.S.2007-1748. flow:established. The preprocessor config starts with: preprocessor sensitive_data: Option syntax Option alert threshold mask output ssn file alert_threshold = Argument <number> NONE <filename> 1 . \ pcre:"/ˆ.relative. flow:established.-4.dce. mask output This option replaces all but the last 4 digits of a detected PII with ”X”s. dce_opnum:0-11.{12})/sR". sid:1000068.{12})/sR". byte_jump:4. sid:1000069.relative. where an organization’s regulations may prevent them from seeing unencrypted numbers.23470. and the rule options. U.{12}(\x00\x00\x00\x00|. reference:bugtraq.445.2.>. A limited regular expression syntax is also included for defining your own PII. \ dce_iface:50abc2a4-574d-40b3-9d66-ee4fd5fba076.65535 Required NO NO NO Default alert threshold 25 OFF OFF Option explanations alert threshold The preprocessor will alert when any combination of PII are detected in a session.256.to_server. \ classtype:attempted-admin. dce_stub_data.dce.relative. Preprocessor Configuration Sensitive Data configuration is split into two parts: the preprocessor config. Social Security numbers.) 2.align. These numbers may have spaces. These numbers can be updated in Snort by supplying a CSV file with the new maximum Group numbers to use. The count is tracked across all packets in a session. Social Security numbers. Syntax sd_pattern:<count>. Group (2 digits). Snort recognizes Social Security numbers issued up through November 2009. pattern This is where the pattern of the PII gets specified. and Serial sections. the Social Security Administration publishes a list of which Group numbers are in use for each Area. and American Express. Group. A new rule option is provided by the preprocessor: sd_pattern This rule option specifies what type of PII a rule should detect.ssn file A Social Security number is broken up into 3 sections: Area (3 digits). The SSNs are expected to have dashes between the Area. There are a few built-in patterns to choose from: credit card The ”credit card” pattern matches 15. and Serial (4 digits). dashes. SSNs have no check digits. On a monthly basis. Credit card numbers matched this way have their check digits verified using the Luhn algorithm. and Serial sections. <pattern>.255 pattern = any string Option Explanations count This dictates how many times a PII pattern must be matched for an alert to be generated. us social This pattern matches against 9-digit U. Social Security numbers without dashes separating the Area.and 16-digit credit card numbers. Discover. By default. This covers Visa. us social nodashes This pattern matches U.S. or nothing in between groups. Example preprocessor config preprocessor sensitive_data: alert_threshold 25 \ mask_output \ ssn_file ssn_groups_Jan10.csv Rule Options Snort rules are used to specify which PII the preprocessor should look for. Group. email 95 . but the preprocessor will check matches against the list of currently allocated group numbers.S. Mastercard. count = 1 . 15 Normalizer When operating Snort in inline mode. gid:138.us_social. If a policy is configured for inline test or passive mode. \} matches { and } \? matches a question mark. normalizations will only be enabled if the selected DAQ supports packet replacement and is operating in inline mode. . There are also many new preprocessor and decoder rules to alert on or drop packets with ”abnormal” encodings. Note that in the following.This pattern matches against email addresses. Rules using sd pattern must use GID 138. following the format (123)456-7890 Whole rule example: alert tcp $HOME_NET $HIGH_PORTS -> $EXTERNAL_NET $SMTP_PORTS \ (msg:"Credit Card numbers sent over email". Trying to use other rule options with sd pattern will result in an error message.2. fields are cleared only if they are non-zero. Alerts on 5 U. metadata:service smtp. then it is the definition of a custom PII pattern.(\d{3})\d{3}-\d{4}. example: ”{3}” matches 3 digits. 2. Other characters in the pattern will be matched literally. use the following when configuring Snort: . To enable the normalizer. ? makes the previous character or escape sequence optional. ! △NOTE\w in this rule option does NOT match underscores.credit_card. Alerts when 2 social security numbers (with dashes) appear in a session. Also.) Caveats sd pattern is not compatible with other rule options. Custom PII types are defined using a limited regex-style syntax. sid:1000. it is helpful to normalize packets to help minimize the chances of evasion.. \\ matches a backslash \{. Unlike PCRE. sd_pattern: 5./configure --enable-normalizer The normalize preprocessor is activated via the conf as outlined below. 96 . rev:1. phone numbers.S. Examples sd_pattern: 2. example: ” ?” matches an optional space. This behaves in a greedy manner. any normalization statements in the policy config are ignored. If the pattern specified is not one of the above built-in patterns. IP4 Normalizations IP4 normalizations are enabled with: preprocessor normalize_ip4: [df]. • Clear the differentiated services field (formerly TOS). • rf reserved flag: clear this bit on incoming packets. • TTL normalization if enabled (explained below). [rf] Base normalizations enabled with ”preprocessor normalize ip4” include: • Truncate packets with excess payload to the datagram length specified in the IP header. TCP Normalizations TCP normalizations are enabled with: 97 . NOP all options octets in hop-by-hop and destination options extension headers. • NOP all options octets. Should also enable require 3whs. • ecn stream clear ECN flags if usage wasn’t negotiated. 13 } <alt_checksum> ::= { 14. timestamp. • Remove any data from RST packet. and any explicitly allowed with the allow keyword.255) Base normalizations enabled with ”preprocessor normalize tcp” include: • Remove data on SYN. • Set the urgent pointer to the payload length if it is greater than the payload length. 12. 10 } <conn_count> ::= { 11. 15 } <md5> ::= { 19 } <num> ::= (3. 5 } <echo> ::= { 6.. \ [opts [allow <allowed_opt>+]] <ecn_type> ::= stream | packet <allowed_opt> ::= \ sack | echo | partial_order | conn_count | alt_checksum | md5 | <num> <sack> ::= { 4. • opts NOP all option bytes other than maximum segment size. 98 . • ecn packet clear ECN flags on a per packet basis (regardless of negotiation). • Clear the urgent pointer if the urgent flag is not set. • urp urgent pointer: don’t adjust the urgent pointer if it is greater than payload length. • Trim data to MSS. Any segments that can’t be properly reassembled will be dropped. 7 } <partial_order> ::= { 9. You can allow options to pass by name or number. • Clear the urgent flag if the urgent pointer is not set. • Clear any option padding bytes. • Trim data to window. • Clear the urgent pointer and the urgent flag if there is no payload. window scaling.preprocessor normalize_tcp: \ [ips] [urp] \ [ecn <ecn_type>]. • Clear the reserved bits in the TCP header. Optional normalizations include: • ips ensure consistency in retransmitted data (also forces reassembly policy to ”first”). They also allow one to specify the rule type or action of a decoder or preprocessor event on a rule by rule basis.6: preprocessor stream5_tcp: min_ttl <#> By default min ttl = 1 (TTL normalization is disabled). the drop cases only apply if Snort is running inline. See doc/README. decoder events will not be generated regardless of whether or not there are corresponding rules for the event.decode for config options that control decoder events.255) If new ttl ¿ min ttl. For example.8. Decoder config options will still determine whether or not to generate decoder events.g. • opts clear TS ECR if ACK..3 Decoder and Preprocessor Rules Decoder and preprocessor rules allow one to enable and disable decoder and preprocessor events on a rule by rule basis. these options will take precedence over the event type of the rule. config enable decode drops. or valid but not negotiated. 99 . then if a packet is received with a TTL ¡ min ttl. NOP the timestamp octets. block the packet.conf or the decoder or preprocessor rule type is drop..conf. • opts MSS and window scale options are NOP’d if SYN flag is not set. e. Also note that if the decoder is configured to enable drops.• opts if timestamp is present but invalid. • opts if timestamp was negotiated but not present.255) <new_ttl> ::= (<min_ttl>+1. as follows: config min_ttl: <min_ttl> config new_ttl: <new_ttl> <min_ttl> ::= (1. • opts trim payload length to MSS if longer. 2. the TTL will be set to new ttl. When TTL normalization is turned on the new ttl is set to 5 by default. A packet will be dropped if either a decoder config drop option is in snort. if config disable decode alerts is in snort. Note that this configuration item was deprecated in 2. Of course. and have the names decoder. The gen-msg.2. rev: 1.conf. include $PREPROC_RULE_PATH/preprocessor. gid: 116.3. classtype:protocol-command-decode.. rev: 1.rules. just comment it with a # or remove the rule completely from the file (commenting is recommended).rules and preprocessor. Any one of the following rule types can be used: alert log pass drop sdrop reject For example one can change: alert ( msg: "DECODE_NOT_IPV4_DGRAM". To change the rule type or action of a decoder/preprocessor rule. README. define the path to where the rules are located and uncomment the include lines in snort.rules To disable any rule. var PREPROC_RULE_PATH /path/to/preproc_rules .rules include $PREPROC_RULE_PATH/decoder. the following config option in snort.rules respectively.map under etc directory is also updated with new decoder and preprocessor rules. just replace alert with the desired rule type. The generator ids ( gid ) for different preprocessors and the decoder are as follows: 2. \ metadata: rule-type decode . See README. sid: 1. To enable these rules in snort.conf that reference the rules files. These files are updated as new decoder and preprocessor events are added to Snort. sid: 1. gid: 116.1 Configuring The following options to configure will enable decoder and preprocessor rules: $ .2 Reverting to original behavior If you have configured snort to use decoder and preprocessor rules.3.) to drop ( msg: "DECODE_NOT_IPV4_DGRAM". classtype:protocol-command-decode.conf will make Snort revert to the old behavior: config autogenerate_preprocessor_decoder_rules 100 .. \ metadata: rule-type decode .rules and preprocessor.gre and the various preprocessor READMEs for descriptions of the rules in decoder./configure --enable-decoder-preprocessor-rules The decoder and preprocessor rules are located in the preproc rules/ directory in the top level source tree.decode.) to drop (as well as alert on) packets where the Ethernet protocol is IPv4 but version field in IPv4 header has a value other than 4. This is covered in section 3. and the first applicable action is taken. you also have to remove the decoder and preprocessor rules and any reference to them from snort. in which case they are evaluated in the order they appear in the configuration file.4 Event Processing Snort provides a variety of mechanisms to tune event processing to suit your needs: • Detection Filters You can use detection filters to specify a threshold that must be exceeded before a rule generates an event. • Event Suppression You can completely suppress the logging of unintersting events. This option applies to rules not specified and the default behavior is to alert. otherwise they will be loaded. • Event Filters You can use event filters to reduce the number of logged events for noisy rules. 2. 101 .conf. Multiple rate filters can be defined on the same rule. • Rate Filters You can use rate filters to change a rule action when the number or rate of events indicates a possible attack.10. 2. This can be tuned to significantly reduce false alarms.4.7.1 Rate Filtering rate filter provides rate based attack prevention by allowing users to configure a new action to take for a specified time when a given rate is exceeded. source and destination means client and server respectively. even if the rate falls below the configured limit. apply_to <ip-list>] The options are described in the table below . 0 seconds only applies to internal rules (gen id 135) and other use will produce a fatal error by Snort. This means the match statistics are maintained for each unique source IP address. An event filter may be used to manage number of alerts after the rule action is enabled by rate filter.allow a maximum of 100 successful simultaneous connections from any one IP address. timeout 10 Example 2 .allow a maximum of 100 connection attempts per second from any one IP address. and sdrop can be used only when snort is used in inline mode.Format Rate filters are used as standalone configurations (outside of a rule) and have the following format: rate_filter \ gen_id <gid>. For example. which is optional. track by rule and apply to may not be used together. sig_id 1. reject. Option track by src | by dst | by rule Description rate is tracked either by source IP address. or they are aggregated at rule level. \ new_action drop. count c seconds s new action alert | drop | pass | log | sdrop | reject timeout t apply to <ip-list> Examples Example 1 . For rules related to Stream5 sessions. seconds <s>. track by rule and apply to may not be used together. If t is 0. seconds 1. drop. \ count <c>. then rule action is never reverted back. sig_id <sid>. Note that events are generated during the timeout period. restrict the configuration to only to source or destination IP address (indicated by track parameter) determined by <ip-list>. revert to the original rule action after t seconds. and block further connections from that IP address for 10 seconds: 102 . \ timeout <seconds> \ [. rate filter may be used to detect if the number of connections to a specific server exceed a specific count. \ new_action alert|drop|pass|log|sdrop|reject. sdrop and reject are conditionally compiled with GIDS. c must be nonzero value. the time period over which count is accrued. new action replaces rule action for t seconds. \ count 100. the maximum number of rule matches in s seconds before the rate filter limit to is exceeded. \ track <by_src|by_dst|by_rule>. and block further connection attempts from that IP address for 10 seconds: rate_filter \ gen_id 135. for each unique destination IP address.all are required except apply to. 0 seconds means count is a total count instead of a specific rate. or by rule. destination IP address. \ track by_src. ! △NOTE filter may be defined for a given gen id.4. sig id. timeout 10 2. then ignores events for the rest of the time interval. \ track by_src. \ track <by_src|by_dst>. sig_id <sid>. \ count 100. then ignores any additional events during the time interval. 103 . sig id pair. sig_id 2. sig id != 0 is not allowed). threshold is deprecated and will not be supported in future releases. event filters with sig id 0 are considered ”global” because they apply to all rules with the given gen id. seconds 0. This can be tuned to significantly reduce false alarms. the first new action event event filters of the timeout period is never suppressed. \ type <limit|threshold|both>.rate_filter \ gen_id 135. the global filtering test is applied. then the filter applies to all rules. There are 3 types of event filters: • limit Alerts on the 1st m events during the time interval. \ count <c>. Such events indicate a change of state that are significant to the user monitoring the network. sig_id <sid>. Global event filters do not override what’s in a signature or a more specific stand-alone event filter. if they do not block an event from being logged. however. seconds <s> threshold \ gen_id <gid>. If more than one event filter is Only one event applied to a specific gen id. \ new_action drop. • both Alerts once per time interval after seeing m occurrences of the event. Format event_filter \ gen_id <gid>. \ count <c>. \ track <by_src|by_dst>. seconds <s> threshold is an alias for event filter. Standard filtering tests are applied first. ! △NOTE can be used to suppress excessive rate filter alerts.2 Event Filtering Event filtering can be used to reduce the number of logged alerts for noisy rules by limiting the number of times a particular event is logged during a specified time interval. \ type <limit|threshold|both>. • threshold Alerts every m times we see this event during the time interval. Both formats are equivalent and support the options described below . Snort will terminate with an error while reading the configuration information. (gen id 0.all are required. If gen id is also 0. Thresholds in a rule (deprecated) will override a global event filter. sig id 0 can be used to specify a ”global” threshold that applies to all rules. \ count 1. number of rule matching in s seconds that will cause event filter limit to be exceeded. then ignores events for the rest of the time interval. or destination IP address. but only if we exceed 30 events in 60 seconds: event_filter \ gen_id 1. \ type limit. track by_src. sig id 0 specifies a ”global” filter because it applies to all sig ids for the given gen id. track by_src. seconds 60 104 \ . \ type threshold. or for each unique destination IP addresses. sig_id 1851. time period over which count is accrued. This means count is maintained for each unique source IP addresses. rate is tracked either by source IP address. type limit alerts on the 1st m events during the time interval. \ count 30. count 1. c must be nonzero value. Type threshold alerts every m times we see this event during the time interval. seconds 60 Limit logging to every 3rd event: event_filter \ gen_id 1. sig_id 0. s must be nonzero value. \ type limit. seconds 60 Limit logging to just 1 event per 60 seconds. gen id 0. Examples Limit logging to 1 event per 60 seconds: event_filter \ gen_id 1. sig_id 0.Option gen id <gid> sig id <sid> type limit|threshold|both track by src|by dst count c seconds s Description Specify the generator ID of an associated rule. sig_id 1853. track by_src. Type both alerts once per time interval after seeing m occurrences of the event. track by_src. seconds 60 Limit to logging 1 event per 60 seconds per IP triggering each rule (rule gen id is 1): event_filter \ gen_id 1. \ count 1. triggering each rule for each event generator: event_filter \ gen_id 0. then ignores any additional events during the time interval. sig_id 1852. \ count 3. Ports or anything else are not tracked. track by_src. \ type both. \ type limit. Specify the signature ID of an associated rule. seconds 60 Limit to logging 1 event per 60 seconds per IP. You may also combine one event filter and several suppressions to the same non-zero SID. SIDs. Suppress by source IP address or destination IP address.Events in Snort are generated in the usual way. \ suppress \ gen_id <gid>. sig_id <sid>. ip <ip-list> Option gen id <gid> sig id <sid> track by src|by dst ip <list> Description Specify the generator ID of an associated rule. gen id 0.4. ip must be provided as well. Suppression tests are performed prior to either standard or global thresholding tests. event filters are handled as part of the output system. \ track <by_src|by_dst>. If track is provided. and IP addresses via an IP list . Specify the signature ID of an associated rule. Examples Suppress this event completely: suppress gen_id 1. but if present. Format The suppress configuration has two forms: suppress \ gen_id <gid>.map for details on gen ids. sig id 0 specifies a ”global” filter because it applies to all sig ids for the given gen id. This is optional. Users can also configure a memcap for threshold with a “config:” option: config event_filter: memcap <bytes> # this is deprecated: config threshold: memcap <bytes> 2. sig_id <sid>. You may apply multiple suppressions to a non-zero SID.. ip must be provided as well. Restrict the suppression to only source or destination IP addresses (indicated by track parameter) determined by ¡list¿. This allows a rule to be completely suppressed. Suppression are standalone configurations that reference generators. Read genmsg. sig id 0 can be used to specify a ”global” threshold that applies to all rules. sig_id 1852: Suppress this event from this IP: 105 . sig_id 1852. such as max content length or event ordering using the event queue.54 Suppress this event to this CIDR block: suppress gen_id 1. ip 10. track by_dst.’.0/24 2. log This determines the number of events to log for a given packet or stream. For example. alert. • content length . We currently have two different methods: • priority . 2.4 Event Logging Snort supports logging multiple events per packet/stream that are prioritized with different insertion methods.1.suppress gen_id 1.1. 3. The default value is content length. 1. order events This argument determines the way that the incoming events are ordered. You can’t log more than the max event number that was specified.4. sig_id 1852. etc. and rules that have a longer content are ordered before rules with shorter contents.The highest priority (1 being the highest) events are ordered first. The default value is 3. ip 10. log. but change event order: config event_queue: order_events priority Use the default event queue values but change the number of logged events: config event_queue: log 2 106 .. max queue This determines the maximum size of the event queue.1. The method in which events are ordered does not affect rule types such as pass. The default value is 8.Rules are ordered before decode or preprocessor alerts. if the event queue has a max size of 8. only 8 events will be stored for a single packet or stream.1. track by_src. These files will be found in the logging directory. save results to perf. Each require only a simple config option to snort.2. the statistics will be saved in these files. you must build snort with the --enable-perfprofiling option to the configure script.txt 107 .txt append • Print the top 10 rules. filename perf. 2.txt with timestamp in filename config profile rules: print 20.txt append • Print top 20 rules. The filenames will have timestamps appended to them. sort avg ticks • Print all rules. \ sort <sort_option> \ [.conf and Snort will print statistics on the worst (or all) performers on exit. sort by avg ticks (default configuration if option is turned on) config profile rules • Print all rules. and append to file rules stats. sort total ticks • Print with default options. save results to performance. based on highest average time config profile rules: print 10.5 Performance Profiling Snort can provide statistics on rule and preprocessor performance.5. sort by avg ticks. based on total time config profile rules: print 100. a new file will be created each time Snort is run. sort checks • Print top 100 rules. When a file name is provided in profile rules or profile preprocs. sorted by number of checks config profile rules: print all. If append is not specified.txt config profile rules: filename rules stats. To use this feature.1 Rule Profiling Format config profile_rules: \ print [all | <num>]. sort total ticks 2. will be high for rules that have no options) • Alerts (number of alerts generated from this rule) • CPU Ticks • Avg Ticks per Check • Avg Ticks per Match • Avg Ticks per Nonmatch Interpreting this info is the key.0 46229.2 Preprocessor Profiling Format config profile_preprocs: \ print [all | <num>]. that most likely contains PCRE. We are looking at moving some of these into code.0 0.0 0.0 90054 45027. this information will be printed to the console when Snort exits. the few options may or may not match.5. Quick to check. But.0 0. \ sort <sort_option> \ [. The filenames will have timestamps appended to them. if that rule is causing alerts.0 53911.conf to specify a file where this will be written.0 92458 46229. A high Avg/Check is a poor performing rule.0 Avg/Match Avg/Nonmatch ========= ============ 385698. If ”append” is not specified.1: Rule Profiling Example Output Output Snort will print a table much like the following at exit. a new file will be created each time Snort is run. especially those with low SIDs. print 4. By default.0 107822 53911.. The Microsecs (or Ticks) column is important because that is the total time spent evaluating a given rule. it makes sense to leave it alone.0 0. filename <filename> [append]] • <num> is the number of preprocessors to print 108 . These files will be found in the logging directory. You can use the ”filename” option in snort.0 45027.0 Figure 2. High Checks and low Avg/Check is usually an any->any rule with few rule options and no content. If ”append” is not specified. this information will be printed to the console when Snort exits.txt append • Print the top 10 preprocessors. The filenames will have timestamps appended to them. Configuration line used to print the above table: config profile_rules: \ print 3. subroutines within preprocessors. the Pct of Caller field will not add up to 100% of the caller’s time. sort by avg ticks (default configuration if option is turned on) config profile preprocs • Print all preprocessors. ports matched.The number is indented for each layer. sort checks . sort by avg ticks. Layer 1 preprocessors are listed under their respective caller (and sorted similarly). sort avg ticks • Print all preprocessors. sorted by number of checks config profile preprocs: Output Snort will print a table much like the following at exit.txt config profile preprocs: filename preprocs stats. • Checks (number of times preprocessor decided to look at a packet. By default.When printing a specific number of preprocessors all subtasks info for a particular preprocessor is printed for each layer 0 preprocessor stat. non-instrumented code. etc) • Exits (number of corresponding exits – just to verify code is instrumented correctly. and append to file preprocs stats. • Preprocessor Name • Layer . app layer header was correct.e. this identifies the percent of the caller’s ticks that is spent for this subtask. 109 print all.For non layer 0 preprocessors.conf to specify a file where this will be written. based on highest average time config profile preprocs: print 10. These files will be found in the logging directory. a new file will be created each time Snort is run. You can use the ”filename” option in snort. should ALWAYS match Checks. Because of task swapping.• <sort option> is one of: checks avg ticks total ticks • <filename> is the output filename • [append] dictates that the output will go to the same file each time (optional) Examples • Print all preprocessors. and other factors. sort total_ticks The columns represent: • Number (rank) . i. unless an exception was trapped) • CPU Ticks • Avg Ticks per Check • Percent of caller . It does give a reasonable indication of how much relative time is spent within each subtask. 00 0.00 0.46 99.01 19.83 0.37 0.00 0.20 47.07 0.00 0.06 3.94 99.33 8.30 0.06 0.37 0.14 25.77 0.00 65.89 2.87 71.84 0.32 0.34 1.81 93.32 0.08 0.12 0.88 44.01 0.39 13.02 0.24 0.00 4.10 0.59 19.94 3.80 0.2: Preprocessor Profiling Example Output 110 .16 0.53 21.59 0.01 0.73 1.51 2.01 0.12 12.11 0.09 0.73 1.91 15.70 0.29 2.79 0.01 0.20 34.23 21.04 0.10 1.92 3.81 39.25 77.81 6.62 3.21 1.00 0.02 47.00 0.00 0.01 0.77 39.12 0.62 17.07 17.04 0.89 0.00 0.00 0.56 39.00 0.08 9.00 0.65 1.43 39.86 4.34 0.00 0.02 11.57 1.85 84.00 0.00 0.34 0.78 1.06 0.00 0.00 0.01 0.16 0.04 0.17 21.78 2.03 0.40 28.72 1.58 0.27 0.00 0.20 19.56 0.16 0.06 0.00 0.02 0.07 6.17 18.53 21.41 0.84 0.00 0.87 0.00 0.22 15657.68 38.16 1.14 307.02 0.51 6.01 0.70 0.20 0.14 0.00 0.70 0.77 0.15 0.06 0.00 Figure 2.17 21.03 8.66 0. sample output.3 Packet Performance Monitoring (PPM) PPM provides thresholding mechanisms that can be used to provide a basic level of latency control for snort. \ debug-pkts # Rule configuration: config ppm: max-rule-time <micro-secs>. . so one or both or neither may be enabled.5. you must build with the –enable-ppm or the –enable-sourcefire option to configure. It does not provide a hard and fast latency guarantee but should in effect provide a good average latency control. \ suspend-expensive-rules. The following sections describe configuration.2. \ threshold count. To use PPM. Both rules and packets can be checked for latency. \ pkt-log. and some implementation details worth noting. The action taken upon detection of excessive latency is configurable. as above. Packet and rule monitoring is independent. \ rule-log [log] [alert] Packets and rules can be configured separately. PPM is configured as follows: # Packet configuration: config ppm: max-pkt-time <micro-secs>. \ suspend-timeout <seconds>. or together in just one config ppm statement. \ fastpath-expensive-packets. then no action is taken other than to increment the count of the number of packets that should be fastpath’d or the rules that should be suspended. These rules were used to generate the sample output that follows.Rule Configuration Options max-rule-time <micro-secs> • enables rule latency thresholding using ’micros-secs’ as the limit. 112 . Example 2: The following suspends rules and aborts packet inspection. threshold 5 If fastpath-expensive-packets or suspend-expensive-rules is not used. A summary of this information is printed out when snort exits.. debug-pkt config ppm: \ max-rule-time 50. 1 nc-rules tested.0438 usecs. 0 rules. \ pkt-log.. fastpath-expensive-packets.3659 usecs PPM: Process-EndPkt[62] PPM: PPM: PPM: PPM: Pkt-Event Pkt[63] used=56.config ppm: \ max-pkt-time 50. \ suspend-timeout 300.15385 usecs PPM: Process-EndPkt[61] PPM: Process-BeginPkt[62] caplen=342 PPM: Pkt[62] Used= 65.. suspend-expensive-rules. packet fastpathed.. Process-BeginPkt[63] caplen=60 Pkt[63] Used= 8.633125 usecs Rule Performance Summary: 113 . alert) are specified. • Since this implementation depends on hardware based high performance frequency counters. Latency control is not enforced after each preprocessor. Output modules are loaded at runtime by specifying the output keyword in the config file: output <name>: <options> output alert_syslog: log_auth log_alert 2. latency thresholding is presently only available on Intel and PPC platforms.6. it is recommended that you tune your thresholding to operate optimally when your system is under load. output plugins send their data to /var/log/snort by default or to a user directed directory (using the -l command line switch). This module also allows the user to specify the logging facility and priority within the Snort config file. giving users greater flexibility in logging alerts. not just the processor time the Snort application receives. Hence the reason this is considered a best effort approach. • This implementation is software based and does not use an interrupt driven timing mechanism and is therefore subject to the granularity of the software based timing tests. Multiple output plugins may be specified in the Snort configuration file. This was a conscious design decision because when a system is loaded. 2. not processor usage by Snort. they are stacked and called in sequence when an event occurs. Available Keywords Facilities • log auth • log authpriv • log daemon 114 . after the preprocessors and detection engine.6 Output Modules Output modules are new as of version 1. As with the standard logging and alerting systems. the latency for a packet is based on the total system time. The format of the directives in the config file is very similar to that of the preprocessors. The output modules are run when the alert or logging subsystems of Snort are called. When multiple plugins of the same type (log. Therefore this implementation cannot implement a precise latency guarantee with strict timing guarantees. Therefore. Due to the granularity of the timing measurements any individual packet may exceed the user specified packet or rule processing time limit.6. They allow Snort to be much more flexible in the formatting and presentation of output to its users.1 alert syslog This module sends alerts to the syslog facility (much like the -s command line switch).max rule time rule events avg nc-rule time Implementation Details : 50 usecs : 0 : 0. • Time checks are made based on the total system time.2675 usecs • Enforcement of packet and rule processing times is done after processing each rule. output alert_syslog: \ [host=<hostname[:<port>].0.] \ <facility> <priority> <options> 115 .0.1. The default host is 127. a hostname and port can be passed as options. The default port is 514. Example output alert_syslog: host=10. Format output alert_fast: [<filename> ["packet"] [<limit>]] <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file. Example output alert_fast: alert.13 for more information. You may specify ”stdout” for terminal output. The name may include an absolute or relative path. Inside the logging directory. By default. See 2.1. You may specify ”stdout” for terminal output.13 for more information. The default name is ¡logdir¿/alert. See 2.full 116 . The default name is ¡logdir¿/alert. • limit: an optional limit on file size which defaults to 128 MB.1:514.2 alert fast This will print Snort alerts in a quick one-line format to a specified output file. This output method is discouraged for all but the lightest traffic situations. The name may include an absolute or relative path.fast 2. Example output alert_full: alert.6. <facility> <priority> <options> 2.6.1. The minimum is 1 KB. Format output alert_full: [<filename> [<limit>]] <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file. It is a faster alerting method than full alerts because it doesn’t need to print all of the packet headers to the output file and because it logs to only 1 file. The minimum is 1 KB. a directory will be created per IP.6. only brief single-line entries are logged. The creation of these files slows Snort down considerably.3 alert full This will print Snort alert messages with full packet headers. These files will be decoded packet dumps of the packets that triggered the alerts.6. • packet: this option will cause multiline entries with full packet headers to be logged. The alerts will be written in the default logging directory (/var/log/snort) or in the logging directory specified at the command line. • limit: an optional limit on file size which defaults to 128 MB. Format database: <log | alert>. See 2. <database type>. The name may include an absolute or relative path.2. This is currently an experimental interface.6. When a sequence of packets is to be logged. Example output log_tcpdump: snort.13 for more information.6. The arguments to this plugin are the name of the database to be logged to and a parameter list. dbname . or socket filename extension for UNIX-domain connections. This is useful for performing post-process analysis on collected traffic with the vast number of tools that are available for examining tcpdump-formatted files.log 2.5 log tcpdump The log tcpdump module logs packets to a tcpdump-formatted file. Without a host name.6.Host to connect to. More information on installing and configuring this module can be found on the [91]incident. see Figure 2.org web page. TCP/IP communication is used. Format output log_tcpdump: [<filename> [<limit>]] <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file. • limit: an optional limit on file size which defaults to 128 MB. <parameter list> The following parameters are available: host . port . Format alert_unixsock Example output alert_unixsock 2.log.Port number to connect to at the server host.Database name 117 .6 database This module from Jed Pickel sends Snort data to a variety of SQL databases.3 for example usage. The default name is ¡logdir¿/snort.4 alert unixsock Sets up a UNIX domain socket and sends alert reports to it. the aggregate size is used to test the rollover condition. A UNIX timestamp is appended to the filename. External programs/processes can listen in on this socket and receive Snort alert and packet data in real time.6. it will connect using a local UNIX domain socket. Parameters are specified with the format parameter = argument. If a non-zero-length string is specified. Represent binary data as an ASCII string.7. Non-ASCII Data is represented as a ‘. then data for IP and TCP options will still be represented as hex because it does not make any sense for that data to be ASCII. signature.>) Searchability .∼1. This is the only option where you will actually lose data. source ip. 118 . Set the type to match the database you are using.Password used if the database demands password authentication sensor name . There are five database types available in the current version of the plugin.’.Because the packet payload and option data is binary. Setting the type to log attaches the database logging functionality to the log facility within the program.not readable requires post processing ascii . tcp flags. dbname=snort user=snort host=localhost password=xyz Figure 2.3: Database Output Plugin Configuration user .5 for more details. destination ip.impossible without post processing Human readability . and odbc. postgresql. log and alert. and protocol) Furthermore.slightly larger than the binary because some characters are escaped (&. oracle. There are two logging types available. Storage requirements . requires post processing base64 . Each has its own advantages and disadvantages: hex (default) .3x the size of the binary Searchability . You severely limit the potential of some analysis applications if you choose this option. destination port.very good detail .Log all details of a packet that caused an alert (including IP/TCP options and the payload) fast . the plugin will be called on the log output chain.not readable unless you are a true geek. but this is still the best choice for some applications. If you do not specify a name. Storage requirements . You can choose from the following options. there is no one simple and portable way to store it in a database.Represent binary data as a base64 string. Blobs are not used because they are not portable across databases.Database username for authentication password .<. ! △NOTE The database output plugin does not have the ability to handle alerts that are generated by using the tag keyword.very good Human readability . source port.very good for searching for a text string impossible if you want to search for binary human readability . one will be generated automatically encoding . If you choose this option.Specify your own name for this Snort sensor. mysql. The following fields are logged: timestamp.How much detailed data do you want to store? The options are: full (default) . See section 3.Represent binary data as a hex string.Log only a minimum amount of data. there is a logging method and database type that must be defined.2x the size of the binary Searchability . Setting the type to alert attaches the plugin to the alert output chain within the program. Storage requirements .output database: \ log. mysql. So i leave the encoding option to you. These are mssql. If you set the type to log. The name may include an absolute or relative path. See 2. the output is in the order of the formatting options listed. <limit> ::= <number>[(’G’|’M’|K’)] • filename: the name of the log file.<field>)* <field> ::= "dst"|"src"|"ttl" .. • format: The list of formatting options is below. The output fields and their order may be customized. Format output alert_csv: [<filename> [<format> [<limit>]]] <format> ::= "default"|<list> <list> ::= <field>(. The minimum is 1 KB.13 for more information.6. The default name is ¡logdir¿/alert.csv.7 csv The csv output plugin allows alert data to be written in a format easily importable to a database.6. 119 . –. You may specify ”stdout” for terminal output.2.. If the formatting option is ”default”. but a slightly different logging format. simply specify unified2. When MPLS support is turned on. The unified output plugin logs events in binary format.6. Likewise. port. packet logging. alert logging will only log events and is specified with alert unified2. ! △NOTE By default.csv default output alert_csv: /var/log/alert. If option mpls event types is not used.8 on unified logging for more information.alert. <limit <file size limit in MB>] output log_unified: <base file name> [.6. mpls_event_types] 120 . then MPLS labels will be not be included in unified2 events.9 unified 2 The unified2 output plugin is a replacement for the unified output plugin. or true unified logging. The alert file contains the high-level details of an event (eg: IPs.log. Use option mpls event types to enable this. as the unified output plugin creates two different files.6. To include both logging styles in a single. The name unified is a misnomer. unified 2 files have the file creation time (in Unix Epoch format) appended to each file when it is created. nostamp] [. Packet logging includes a capture of the entire packet and is specified with log unified2. and a log file. Format output alert_unified: <base file name> [. alert logging. Format output alert_unified2: \ filename <base filename> [. Unified2 can work in one of three modes. allowing another programs to handle complex logging mechanisms that would otherwise diminish the performance of Snort. See section 2. an alert file. <limit <file size limit in MB>] Example output alert_unified: snort. The log file contains the detailed packet information (a packet dump with the associated event ID). limit 128 2. message id). It has the same performance characteristics. Both file types are written in a binary format described in spo unified.8 unified The unified output plugin is designed to be the fastest possible method of logging Snort events. ! △NOTE Files have the file creation time (in Unix Epoch format) appended to each file when it is created. unified file. limit 128 output log_unified: snort. <limit <size in MB>] [.csv timestamp. msg 2. protocol. MPLS labels can be included in unified2 events.h.Example output alert_csv: /var/log/alert. . SMTP. The IP stack fragmentation and stream reassembly is mimicked by the ”linux” configuration (see sections 2. They will be used in a future release.168.1.3 Attribute Table Example In the example above. port. TCP port 22 is ssh (running Open SSH).1. • Application Layer Preprocessors The application layer preprocessors (HTTP. the stream and IP frag information are both used. Telnet. etc) are used. FTP. for a given host entry. Conversely. .234 port 2300 because it is identified as telnet. etc) make use of the SERVICE information for connections destined to that host on that port. The confidence metric may be used to indicate the validity of a given service or client application and its respective elements.6 is described. Of the service attributes. ssh.2.234 port 2300 as telnet. HTTP Inspect is configured to inspect traffic on port 2300.168. For example. HTTP Inspect will NOT process the packets on a connection to 192. Below is a list of the common services used by Snort’s application layer preprocessors and Snort rules (see below). for example. even if the telnet portion of the FTP/Telnet preprocessor is only configured to inspect port 23. This host has an IP address of 192.168.2. and protocol (http. A DTD for verification of the Host Attribute Table XML file is provided with the snort packages. only the IP protocol (tcp.1. udp. Snort will inspect packets for a connection to 192. 2. That field is not currently used by Snort. if. and TCP port 2300 is telnet. On that host.234. a host running Red Hat 2.7.1 and 2.<VERSION> <ATTRIBUTE_VALUE>6. The application and version for a given service attribute. but may be in future releases.8.0</ATTRIBUTE_VALUE> <CONFIDENCE>89</CONFIDENCE> </VERSION> </APPLICATION> </CLIENT> </CLIENTS> </HOST> </ATTRIBUTE_TABLE> </SNORT_ATTRIBUTES> ! △NOTE With Snort 2. and any client attributes are ignored.2).1.. established. rules configured for specific ports that have a service metadata will be processed based on the service identified by the attribute table. alert tcp any any -> any 2300 (msg:"Port 2300 traffic". alert tcp any any -> any 2300 (msg:"SSH traffic". Connection Service Does Not Match. service smtp.234 port 2300 because the port does not match.234 port 2300 because that traffic is identified as telnet.) 126 .1. flow:to_server. sid:10000004. alert tcp any any -> any 2300 (msg:"Port 2300 traffic".1. metadata: service telnet. sid:10000003.established. flow:to_server. flow:to_server. • Alert: Rule Has Service Metadata. Connection Service Matches The following rule will be inspected and alert on traffic to host 192.168. alert tcp any any -> any 23 (msg:"Telnet traffic".1. flow:to_server. sid:10000002.established. alert tcp any any -> any 23 (msg:"Telnet traffic". sid:10000006. When both service metadata is present in the rule and in the connection. flow:to_server.234 port 2300 because it is identified as telnet. flow:to_server. Port Does Not Match The following rule will NOT be inspected and NOT alert on traffic to host 192. The following few scenarios identify whether a rule will be inspected or not. but the service is ssh.1. metadata: service telnet. metadata: service telnet.168.established.established.1. flow:to_server.) • Alert: Rule Has No Service Metadata.established. If there are rules that use the service and other rules that do not but the port matches.168.168. Packet has service + other rules with service The first rule will NOT be inspected and NOT alert on traffic to host 192. Port Matches The following rule will be inspected and alert on traffic to host 192.) • No Alert: Rule Has Service Metadata.234 port 2300 because the port matches. sid:10000007. Snort will ONLY inspect the rules that have the service that matches the connection.) • No Alert: Rule Has No Service Metadata.) • Alert: Rule Has Multiple Service Metadata. alert tcp any any -> any 23 (msg:"Port 23 traffic".234 port 2300 because it is identified as telnet.234 port 2300 because the service is identified as telnet and there are other rules with that service. sid:10000001.) • Alert: Rule Has No Service Metadata.Attribute Table Affect on rules Similar to the application layer preprocessors. Snort uses the service rather than the port.168.1. Port Matches The following rule will NOT be inspected and NOT alert on traffic to host 192. Connection Service Matches One of them The following rule will be inspected and alert on traffic to host 192.168.) alert tcp any any -> any 2300 (msg:"Port 2300 traffic". metadata: service ssh. sid:10000005.established. dynamicengine [ file <shared library path> | directory <directory of shared libraries> ] dynamicdetection [ file <shared library path> | directory <directory of shared libraries> ] 2. Or.2. See chapter 4 for more information on dynamic detection rules libraries. Note that for some preprocessors.8. Specify file.8. Snort must be configured with the --disable-dynamicplugin flag. specify directory. specify directory. See chapter 4 for more information on dynamic preprocessor libraries. (Same effect as --dynamic-engine-lib or --dynamic-preprocessor-lib-dir options). followed by the full or relative path to a directory of preprocessor shared libraries. Specify file.2 Directives Syntax dynamicpreprocessor [ file <shared library path> | directory <directory of shared libraries> ] Description Tells snort to load the dynamic preprocessor shared library (if file is used) or all dynamic preprocessor shared libraries (if directory is used). however.8 Dynamic Modules Dynamically loadable modules were introduced with Snort 2. All newly created sessions will. A separate thread will parse and create a swappable configuration object while the main Snort packet processing thread continues inspecting traffic under the current configuration.6. followed by the full or relative path to a directory of preprocessor shared libraries. (Same effect as --dynamic-preprocessor-lib or --dynamic-preprocessor-lib-dir options). Or. 2.. Or. the main Snort packet processing thread will swap in the new configuration to use and will continue processing under the new configuration. followed by the full or relative path to the shared library.1 Format <directive> <parameters> 2. Tells snort to load the dynamic engine shared library (if file is used) or all dynamic engine shared libraries (if directory is used).conf or via command-line options. They can be loaded via directives in snort. followed by the full or relative path to the shared library. use the new configuration. existing session data will continue to use the configuration under which they were created in order to continue with proper state for that session. ! △NOTE To disable use of dynamic modules. (Same effect as --dynamic-detection-lib or --dynamic-detection-lib-dir options). 127 . Specify file. followed by the full or relative path to a directory of detection rules shared libraries. See chapter 4 for more information on dynamic engine libraries. followed by the full or relative path to the shared library. Tells snort to load the dynamic detection rules shared library (if file is used) or all dynamic detection rules shared libraries (if directory is used). $ kill -SIGHUP <snort pid> ! △NOTE is not enabled.e.g. Snort will restart (as it always has) upon receipt of a SIGHUP.2 Reloading a configuration First modify your snort. If reload support ! △NOTEconfiguration will still result in Snort fatal erroring. any new/modified/removed shared objects will require a restart. • Any changes to output will require a restart.3 Non-reloadable configuration options There are a number of option changes that are currently non-reloadable because they require changes to output. so you should test your new configuration An invalid before issuing a reload. ! △NOTE is not currently supported in Windows. startup memory allocations. Non-reloadable configuration options of note: • Adding/modifying/removing shared objects via dynamicdetection. e. There is also an ancillary option that determines how Snort should behave if any non-reloadable options are changed (see section 2.).9. e. Reloadable configuration options of note: • Adding/modifying/removing text rules and variables are reloadable. Changes to the following options are not reloadable: attribute_table config alertfile config asn1 config chroot 128 . This functionality 2. dynamicengine and dynamicpreprocessor are not reloadable.9. To disable this behavior and have Snort exit instead of restart.2.conf (the file passed to the -c option on the command line).conf -T 2.9.3 below).9. This option is enabled by default and the behavior is for Snort to restart if any nonreloadable options are added/modified/removed. to initiate a reload. etc.g. $ snort -c snort. add --enable-reload to configure when compiling. send Snort a SIGHUP signal. i. • Adding/modifying/removing preprocessor configurations are reloadable (except as noted below).1 Enabling support To enable support for reloading a configuration. Then.. their value applies to all other configurations. Each configuration can have different preprocessor settings and detection rules.1 Creating Multiple Configurations Default configuration for snort is specified using the existing -c option.Refers to ip subnets.2 Configuration Specific Elements Config Options Generally config options defined within the default configuration are global by default i.conf . Subnets can be CIDR blocks for IPV6 or IPv4. ipList .e. VLANs/Subnets not bound to any specific configuration will use the default configuration. ! △NOTE can not be used in the same line. config config config config config checksum_drop disable_decode_alerts disable_decode_drops disable_ipopt_alerts disable_ipopt_drops 130 .conf for specific configuration. policy_id policy_mode policy_version The following config options are specific to each configuration.Refers to the absolute or relative path to the snort. The following config options are specific to each configuration.2.conf> vlan <vlanIdList> config binding: <path_to_snort.10. A maximum of 512 individual IPv4 or IPv6 addresses or CIDRs can be specified. A default configuration binds multiple vlans or networks to non-default configurations. ! △NOTE Vlan Ids 0 and 4095 are reserved. Configurations can be applied based on either Vlans or Vlan and Subnets Subnets not both. using the following configuration line: config binding: <path_to_snort. Negative vland Ids and alphanumeric are not supported.conf> net <ipList> path to snort. the default values of the option (not the default configuration values) take effect. they are included as valid in terms of configuring Snort. vlanIdList . If not defined in a configuration. 2. Valid vlanId is any number in 0-4095 range.10. Each unique snort configuration file will create a new configuration instance within. Even though 2. The format for ranges is two vlanId separated by a ”-”.10 Multiple Configurations Snort now supports multiple configurations based on VLAN Id or IP subnet within a single instance of Snort. Spaces are allowed within ranges.Refers to the comma seperated list of vlandIds and vlanId ranges. If the rules in a configuration use variables. 131 .. those variables must be defined in that configuration. through specific limit on memory usage or number of instances. This is required as some mandatory preprocessor configuration options are processed only in default configuration. The options control total memory usage for a preprocessor across all policies. This policy id will be used to identify alerts from a specific configuration in the unified2 records. Variables Variables defined using ”var”.Refers to a 16-bit unsigned value. limited to: Source IP address and port Destination IP address and port Action A higher revision of a rule in one configuration will override other revisions of the same rule in other configurations. are processed only in default policy. ! △NOTE If no policy id is specified. A preprocessor must be configured in default configuration before it can be configured in non-default configuration. snort assigns 0 (zero) value to the configuration. payload detection options. To enable vlanId logging in unified2 records the following option can be used. Parts of the rule header can be specified differently across configurations. non-payload detection options. Options controlling specific preprocessor memory usage. A rule shares all parts of the rule options. and post-detection options. Events and Output An unique policy id can be assigned by user. including the general options. If a rule is not specified in a configuration then the rule will never raise an event for the configuration. to each configuration using the following config line: config policy_id: <id> id . ”portvar” and ”ipvar” are specific to configurations. 11 Active Response Snort 2. this can lead to conflicts between configurations if source address is bound to one configuration and destination address is bound to another.11.10. If VLANID is present. snort will use unified2 event type 104 and 105 for IPv4 and IPv6 respectively.output alert_unified2: vlan_event_types (alert logging only) output unified2: filename <filename>.25) <min_sec> ::= (1.. vlan_event_types (true unified logging) filename . then the innermost VLANID is used to find bound configuration.9 includes a number of changes to better handle inline operation. The packet is assigned non-default configuration if found otherwise the check is repeated using source IP address. 132 .3 How Configuration is applied? Snort assigns every incoming packet to a unique configuration based on the following criteria. \ min_response_seconds <min_sec> <max_rsp> ::= (0. If the bound configuration is the default configuration. ! △NOTElogged will have the vlanId from the packet if vlan headers are present otherwise 0 will be used. .1 Enabling Active Response This enables active responses (snort will send TCP RST or ICMP unreachable/port) when dropping a session. default configuration is used if no other matching configuration is found. Each event 2. For addressed based configuration binding.. snort will use the first configuration in the order of definition. that can be applied to the packet. In the end. including: • a single mechanism for all responses • fully encoded reset or icmp unreachable packets • updated flexible response rule option • updated react rule option • added block and sblock rule actions These changes are outlined below. then destination IP address is searched to the most specific subnet that is bound to a non-default configuration./configure --enable-active-response / -DACTIVE_RESPONSE preprocessor stream5_global: \ max_active_responses <max_rsp>.300) Active responses will be encoded based on the triggering packet. vlan event types .When this option is set. In this case.Refers to the absolute or relative filename. 2. TTL will be set to the value captured at session pickup. 2. resp:<resp_t>./configure --enable-flexresp3 / -DENABLE_RESPOND -DENABLE_RESPONSE3 alert tcp any any -> any 80 (content:"a". In inline mode the reset is put straight into the stream in lieu of the triggering packet so strafing is not necessary. At most 1 ICMP unreachable is sent. sid:1.11.11./configure --enable-active-response config response: attempts <att> <att> ::= (1. * Flexresp is deleted. these features are deprecated. This is built with: .2. these features are no longer avaliable: .4 React react is a rule option keyword that enables sending an HTML page on a session and then resetting. This sequence ”strafing” is really only useful in passive mode. TCP data (sent for react) is multiplied similarly./configure --enable-flexresp / -DENABLE_RESPOND -DENABLE_RESPONSE config flexresp: attempts 1 * Flexresp2 is deleted./configure --enable-react / -DENABLE_REACT The page to be sent can be read from a file: 133 . if and only if attempts ¿ 0. Each active response will actually cause this number of TCP resets to be sent. . non-functional. . and will be deleted in a future release: .20) 2.11. Each attempt (sent in rapid succession) has a different sequence number.3 Flexresp Flexresp and flexresp2 are replaced with flexresp3.. sid:4. msg:"Unauthorized Access Prohibited!". a resp option can be used instead.html> or else the default is used: <default_page> ::= \ "HTTP/1. the response isn’t strictly limited to HTTP. the page is loaded and the selected message.5 Rule Actions The block and sblock actions have been introduced as synonyms for drop and sdrop to help avoid confusion between packets dropped due to load (eg lack of available buffers for incoming packets) and packets blocked due to Snort’s analysis. If no page should be sent.org/TR/xhtml11/DTD/xhtml11. 2. charset=UTF-8\" />\r\n" \ "<title>Access Denied</title>\r\n" \ "</head>\r\n" \ "<body>\r\n" \ "<h1>Access Denied</h1>\r\n" \ "<p>%s</p>\r\n" \ "</body>\r\n" \ "</html>\r\n". [proxy <port#>] The original version sent the web page to one end of the session only if the other end of the session was port 80 or the optional proxy port. which defaults to: <default_msg> ::= \ "You are attempting to access a forbidden site.w3. This is an example rule: drop tcp any any -> any $HTTP_PORTS ( \ content: "d". Note that the file must contain the entire response. In fact. The deprecated options are ignored.".) <react_opts> ::= [msg] [.1//EN\"\r\n" \ " \".<br />" \ "Consult your system administrator for details. charset=utf-8\r\n" "\r\n" "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1. including any HTTP headers. When the rule is configured.w3. 134 .1 403 Forbidden\r\n" "Connection: close\r\n" "Content-Type: text/html. <dep_opts>] These options are deprecated: <dep_opts> ::= [block|warn]. You could craft a binary payload of arbitrary content.11.config react: <block. \ react: <react_opts>.org/1999/xhtml\" xml:lang=\"en\">\r\n" \ "<head>\r\n" \ "<meta http-equiv=\"Content-Type\" content=\"text/html.dtd\">\r\n" \ "<html xmlns=\". The new version always sends the page to the client. 1: Sample Snort Rule 135 . The text up to the first parenthesis is the rule header and the section enclosed in parenthesis contains the rule options.168.1 Rule Actions The rule header contains the information that defines the who.0/24 111 \ (content:"|00 01 86 a5|". the various rules in a Snort rules library file can be considered to form a large logical OR statement. The rule option section contains alert messages and information on which parts of the packet should be inspected to determine if the rule action should be taken.Chapter 3 Writing Snort Rules 3. msg:"mountd access".2.) Figure 3.1 illustrates a sample Snort rule. the rule header and the rule options. 3. The words before the colons in the rule options section are called option keywords. the elements can be considered to form a logical AND statement. ! △NOTE Note that the rule options section is not specifically required by any rule. There are a number of simple guidelines to remember when developing Snort rules that will help safeguard your sanity. Snort rules are divided into two logical sections. where. they are just used for the sake of making tighter definitions of packets to collect or alert on (or drop. source and destination IP addresses and netmasks. All of the elements in that make up a rule must be true for the indicated rule action to be taken. for that matter). This was required in versions prior to 1. as well as what to do in the event that a packet with all the attributes indicated in the rule should show up.2 Rules Headers 3. Figure 3.1. protocol. Most Snort rules are written in a single line. When taken together. and the source and destination ports information. and what of a packet.1 The Basics Snort uses a simple. The first item in a rule is the rule alert tcp any any -> 192. rules may span multiple lines by adding a backslash \ to the end of the line. In current versions of Snort. lightweight rules description language that is flexible and quite powerful. At the same time.8. The rule header contains the rule’s action. log the packet 3.alert and then turn on another dynamic rule 5.action. There are four protocols that Snort currently analyzes for suspicious behavior – TCP. For example. pass. activate . The CIDR block indicates the netmask that should be applied to the rule’s address and any incoming packets that are tested against the rule. and then log the packet 2.1. sdrop . You can also define your own rule types and associate one or more output plugins with them. log . A CIDR block mask of /24 indicates a Class C network. IGRP.block and log the packet 7. The keyword any may be used to define any address. then act as a log rule 6. You can then use the rule types as actions in Snort rules. log it. activate. Snort does not have a mechanism to provide host name lookup for the IP address fields in the config file. and IP. pass .ignore the packet 4.168. 136 . This example will create a type that will log to just tcpdump: ruletype suspicious { type log output log_tcpdump: suspicious. 3.2.2. drop .block the packet. There are 5 available default actions in Snort. and then send a TCP reset if the protocol is TCP or an ICMP port unreachable message if the protocol is UDP.remain idle until activated by an activate rule .generate an alert using the selected alert method. user=snort dbname=snort host=localhost } 3. In the future there may be more. The CIDR designations give us a nice short-hand way to designate large address spaces with just a few characters. you have additional options which include drop. and sdrop. reject .block the packet but do not log it.2 Protocols The next field in a rule is the protocol. say.255. The rule action tells Snort what to do when it finds a packet that matches the rule criteria. OSPF. ICMP. if you are running Snort in inline mode. 1.log } This example will create a rule type that will log to syslog and a MySQL database: ruletype redalert { type alert output alert_syslog: LOG_AUTH LOG_ALERT output database: log.1. the destination address would match on any address in that range. dynamic . alert .1. Any rule that used this designation for. alert. In addition. mysql. and dynamic.168.0/24 would signify the block of addresses from 192.1 to 192. /16 a Class B network.168. such as ARP. GRE. RIP. and /32 indicates a specific machine address. UDP. etc. IPX. 8. log. reject. the address/CIDR combination 192. The addresses are formed by a straight numeric IP address and a CIDR[3] block.3 IP Addresses The next portion of the rule header deals with the IP address and port information for a given rule. The range operator may be applied in a number of ways to take on different meanings.). The negation operator may be applied against any of the other rule types (except any.1.168. ranges. such as 111 for portmapper. such as in Figure 3. For the time being. An IP list is specified by enclosing a comma separated list of IP addresses and CIDR blocks within square brackets. For example.1. which would translate to none.0/24] any -> \ [192.168.0/24 1:1024 log udp traffic coming from any port and destination ports ranging from 1 to 1024 log tcp any any -> 192.4: Port Range Examples 137 . including any ports. meaning literally any port. This rule’s IP addresses indicate any tcp packet with a source IP address not originating from the internal network and a destination address on the internal network.168.1. Port negation is indicated by using the negation operator !.. There is an operator that can be applied to IP addresses.168.1. This operator tells Snort to match any IP address except the one indicated by the listed IP address. 3.168.0/24 500: log tcp traffic from privileged ports less than or equal to 1024 going to ports greater than or equal to 500 Figure 3. of the traffic that the rule applies to.5 The Direction Operator The direction operator -> indicates the orientation.1.2: Example IP Address Negation Rule alert tcp ![192.2.2. the IP list may not include spaces between the addresses. Any ports are a wildcard value.0 Class C network.0/24] 111 (content:"|00 01 86 a5|".1.168. how Zen.3: IP Address Lists In Figure 3. if for some twisted reason you wanted to log everything except the X Windows ports. static port definitions. Static ports are indicated by a single port number. See Figure 3. 3. or direction.2. an easy modification to the initial example is to make it alert on any traffic that originates outside of the local net with the negation operator as shown in Figure 3. the negation operator.) Figure 3. \ msg:"external mountd access". Port ranges are indicated with the range operator :.1.1.168.1. 23 for telnet. The IP address and port numbers on the left side of the direction operator is considered to be the traffic coming from the source log udp any any -> 192.1. or 80 for http. For example. etc. and by negation. You may also specify lists of IP addresses. and the destination address was set to match on the 192.) Figure 3.0/24. the source IP address was set to match for any computer talking.. you could do something like the rule in Figure 3.1.5.1.10.168.0/24 any -> 192.10.3 for an example of an IP list in action.0/24 111 \ (content:"|00 01 86 a5|".4.0/24. The negation operator is indicated with a !.alert tcp !192.0/24 :6000 log tcp traffic from any port going to ports less than or equal to 6000 log tcp any :1024 -> 192.4 Port Numbers Port numbers may be specified in a number of ways. msg:"external mountd access".1. note that there is no <. There is also a bidirectional operator. so there’s value in collecting those packets for later analysis. The reason the <. the direction operator did not have proper error checking and many people used an invalid token. except they have a *required* option field: activates. activate tcp !$HOME_NET any -> $HOME_NET 143 (flags:PA. but they have a different option field: activated by. Activate/dynamic rule pairs give Snort a powerful capability.5: Example of Port Negation log tcp !192. Dynamic rules are just like log rules except are dynamically enabled when the activate rule id goes off.6 Activate/Dynamic Rules ! △NOTE Activate and Dynamic rules are being phased out in favor of a combination of tagging (3.) dynamic tcp !$HOME_NET any -> $HOME_NET 143 (activated_by:1.168. \ content:"|E8C0FFFFFF|/bin". Dynamic rules act just like log rules.7. 3. Activate rules act just like alert rules.0/24 !6000:6010 Figure 3. \ msg:"IMAP buffer overflow!". and the address and port information on the right side of the operator is the destination host. All Snort rule options are separated from each other using the semicolon (.1. Also. Rule option keywords are separated from their arguments with a colon (:) character.7. activates:1.6.3 Rule Options Rule options form the heart of Snort’s intrusion detection engine. Activate rules are just like alerts but also tell Snort to add a rule when a specific network event occurs.10).0/24 any <> 192.1. This is handy for recording/analyzing both sides of a conversation. These rules tell Snort to alert when it detects an IMAP buffer overflow and collect the next 50 packets headed for port 143 coming from outside $HOME NET headed to $HOME NET.log tcp any any -> 192.1. In Snort versions before 1. Put ’em together and they look like Figure 3.) Figure 3. count:50.6.5) and flowbits (3.7: Activate/Dynamic Rule Example 138 .0/24 23 Figure 3. count. such as telnet or POP3 sessions. This is very useful if you want to set Snort up to perform follow on recording when a specific rule goes off. Dynamic rules have a second required field as well.6: Snort rules using the Bidirectional Operator host.operator.does not exist is so that rules always read consistently.7. 3. An example of the bidirectional operator being used to record both sides of a telnet session is shown in Figure 3.168.168.) character. This tells Snort to consider the address/port pairs in either the source or destination orientation. which is indicated with a <> symbol.2. there’s a very good possibility that useful data will be contained within the next 50 (or whatever) packets going to that same service port on the network.8. combining ease of use with power and flexibility. If the buffer overflow happened and was successful. You can now have one rule activate another when it’s action is performed for a set number of packets. 3.com/bid/ are four major categories of rule options.. <id>.whitehats.org/show/osvdb/ http:// System bugtraq cve nessus arachnids mcafee osvdb url Format reference:<id system>.4 General Rule Options 3.php3?id= (currently down). [reference:<id system>.cgi/ for a system that is indexing descriptions of alerts based on of the sid (See Section 3. Make sure to also take a look at: Supported Systems URL Prefix. reference:arachnids. \ flags:AP.org/cgi-bin/cvename. The plugin currently supports several specific systems as well as unique URLs. <id>. content:"|fff4 fffd 06|". It is a simple text string that utilizes the \ as an escape character to indicate a discrete character that might otherwise confuse Snort’s rules parser (such as the semi-colon .securityfocus. \ 139 .cgi?name= msg The msg rule option tells the logging and alerting engine the message to print along with a packet dump or to an alert.com/vil/content/v.) alert tcp any any -> any 21 (msg:"IDS287/ftp-wuftp260-venglin-linux".nessus.2 reference The reference keyword allows rules to include references to external attack identification systems.mitre. This plugin is to be used by output plugins to provide a link to additional information about the alert produced.4.nai.snort.” 3. Format msg:"<message text>".IDS411.com/info/IDS.] Examples alert tcp any any -> any 7070 (msg:"IDS411/dos-realaudio".4). Table 3.4. character). ) 140 . (See section 3. it will default to 1 and the rule will be part of the general rule subsystem. This information is useful when postprocessing alert to map an ID to an alert message. This information allows output plugins to identify rules easily.) 3. See etc/generators in the source tree for the current generator ids in use.000 be used.999 Rules included with the Snort distribution • >=1. Format gid:<generator id>. (See section 3. it is not recommended that the gid keyword be used. rev:1.4. content:"|31c031db 31c9b046 cd80 31c031db|".4. To avoid potential conflict with gids defined in Snort (that for some reason aren’t noted it etc/generators).map contains contains more information on preprocessor and decoder gids. Example This example is a rule with a generator id of 1000001.5) • <100 Reserved for future use • 100-999.000.4 sid The sid keyword is used to uniquely identify Snort rules.flags:AP. \ reference:arachnids.map contains a mapping of alert messages to Snort rule IDs. reference:bugtraq. \ reference:cve.1387. Format sid:<snort rules id>.IDS287.) 3.4. alert tcp any any -> any 80 (content:"BOB". Example This example is a rule with the Snort Rule ID of 1000983. it is recommended that values starting at 1. gid:1000001. For general rule writing. sid:1. For example gid 1 is associated with the rules subsystem and various gids over 100 are designated for specific preprocessors and the decoder.4.000. This option should be used with the rev keyword. This option should be used with the sid keyword. rev:1. alert tcp any any -> any 80 (content:"BOB".CAN-2000-1574.4) The file etc/gen-msg. sid:1000983.000 Used for local rules The file sid-msg. 6 classtype The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class. flags:A+.<default priority> These attack classifications are listed in Table 3. rev:1.4) Format rev:<revision integer>. allow signatures and descriptions to be refined and replaced with updated information. classtype:attempted-recon. \ content:"expn root".5 rev The rev keyword is used to uniquely identify revisions of Snort rules.) Attack classifications defined by Snort reside in the classification. Revisions.) 3. The file uses the following syntax: config classification: <class name>.<class description>. This option should be used with the sid keyword.4. Defining classifications for rules provides a way to better organize the event data Snort produces. along with Snort rule id’s. alert tcp any any -> any 80 (content:"BOB". Example This example is a rule with the Snort Rule Revision of 1. Example alert tcp any any -> any 25 (msg:"SMTP expn root". Snort provides a default set of attack classes that are used by the default set of rules it provides.4. A priority of 1 (high) is the most severe and 4 (very low) is the least severe. They are currently ordered with 4 default priorities. (See section 3. Format classtype:<class name>.3. Table 3. nocase. sid:1000983.2. 3. Examples alert tcp any any -> any 80 (msg:"WEB-MISC phf attempt".config that are used by the rules it provides. Examples of each case are given below.4. priority:10. priority:10 ). A classtype rule assigns a default priority (defined by the config classification option) that may be overridden with a priority rule. 142 . flags:A+. Format priority:<priority integer>. \ dsize:>128. \ content:"/cgi-bin/phf". classtype:attempted-admin. Snort provides a default set of classifications in classification.) alert tcp any any -> any 80 (msg:"EXPLOIT ntpdx overflow".conf by using the config classification option.7 priority The priority tag assigns a severity level to rules. with a key and a value. Examples alert tcp any any -> any 80 (msg:"Shared Library Rule Example". The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires. metadata:key1 value1. soid 3|12345.3. Table 3.8 metadata The metadata tag allows a rule writer to embed additional information about the rule. \ metadata:service http. Certain metadata keys and values have meaning to Snort and are listed in Table 3. key2 value2.4. \ metadata:engine shared.) alert tcp any any -> any 80 (msg:"Shared Library Rule Example".4. The reference keyword allows rules to include references to external attack identification systems. The first uses multiple metadata keywords. When the value exactly matches the service ID as specified in the table. . Keys other than those listed in the table are effectively ignored by Snort and can be free-form.3.) alert tcp any any -> any 80 (msg:"HTTP Service Rule Example".7 for details on the Host Attribute Table. metadata:key1 value1. the rule is applied to that packet.) 3. See Section 2. with keys separated by commas. 143 . while keys and values are separated by a space. the second a single metadata keyword. typically in a key-value format. otherwise. \ metadata:engine shared. Multiple keys are separated by a comma..9 General Rule Quick Reference Table 3. Format The examples below show an stub rule from a shared library rule. metadata:soid 3|12345. The metadata keyword allows a rule writer to embed additional information about the rule. the result will return a match. Be aware that this test is case sensitive. the test is successful and the remainder of the rule option tests are performed. and there are only 5 bytes of payload and there is no ”A” in those 5 bytes. If there must be 50 bytes for a valid match. The option data for the content keyword is somewhat complex. the alert will be triggered on packets that do not contain this content. Examples alert tcp any any -> any 139 (content:"|5c 00|P|00|I|00|P|00|E|00 5c|".) alert tcp any any -> any 80 (content:!"GET". It allows the user to set rules that search for specific content in the packet payload and trigger response based on that data. If data exactly matching the argument data string is contained anywhere within the packet’s payload. The binary data is generally enclosed within the pipe (|) character and represented as bytecode. If the rule is preceded by a !.1 content The content keyword is one of the more important features of Snort. 144 . This is useful when writing rules that want to alert on packets that do not match a certain pattern ! △NOTE Also note that the following characters must be escaped inside a content rule: . 3. within:50. Note that multiple content rules can be specified in one rule. if using content:!"A". it can contain mixed text and binary data. The classtype keyword is used to categorize a rule as detecting an attack that is part of a more general type of attack class.5. This allows rules to be tailored for less false positives. typically in a key-value format. modifiers included. the Boyer-Moore pattern match function is called and the (rather computationally expensive) test is performed against the packet contents.5 Payload Detection Rule Options 3.) ! △NOTE A ! modifier negates the results of the entire content search. use isdataat as a pre-cursor to the content. \ " Format content:[!]"<content string>". Bytecode represents binary data as hexadecimal numbers and is a good shorthand method for describing complex binary data. Whenever a content option pattern match is performed. For example. The rev keyword is used to uniquely identify revisions of Snort rules. The example below shows use of mixed text and binary data in a Snort rule. The priority keyword assigns a severity level to rules.sid rev classtype priority metadata The sid keyword is used to uniquely identify Snort rules. 5. nocase.5.5.5.13 http uri 3. 145 . The modifier keywords change how the previously specified content works.7 http client body 3.5.9 http raw cookie 3.3 rawbytes The rawbytes keyword allows rules to look at the raw packet data. nocase modifies the previous content keyword in the rule.5.5. This acts as a modifier to the previous content 3. Example alert tcp any any -> any 21 (msg:"FTP ROOT".4 offset 3.2 rawbytes 3.5.5.12 http method 3.5: Content Modifiers Modifier Section nocase 3. content:"USER root".2 nocase The nocase keyword allows the rule writer to specify that the Snort should look for the specific pattern.5.5. format rawbytes.17 fast pattern 3.8 http cookie 3.1 option.3 depth 3. ignoring any decoding that was done by preprocessors. ignoring case.16 http stat msg 3.5.) 3.10 http header 3.Changing content behavior The content keyword has a number of modifier keywords.14 http raw uri 3.5.15 http stat code 3.5.5.5.5.11 http raw header 3.5. These modifier keywords are: Table 3.5 distance 3.6 within 3. Format nocase.5.19 3.5. or within (to modify the same content). 3. depth:20. This keyword allows values greater than or equal to the pattern length being searched. alert tcp any any -> any 80 (content:"cgi-bin/phf". Format depth:[<number>|<var_name>]. distance. content:"|FF F1|". offset modifies the previous ’content’ keyword in the rule.) 146 . offset. You can not use offset with itself. distance. rawbytes.5 offset The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet. Format offset:[<number>|<var_name>]. instead of the decoded traffic provided by the Telnet decoder. A depth of 5 would tell Snort to only look for the specified pattern within the first 5 bytes of the payload. there must be a content in the rule before offset is specified. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. As the depth keyword is a modifier to the previous content keyword. depth modifies the previous ‘content’ keyword in the rule. or within (to modify the same content). The offset and depth keywords may be used together.5. there must be a content in the rule before depth is specified. The maximum allowed value for this keyword is 65535. alert tcp any any -> any 21 (msg:"Telnet NOP". An offset of 5 would tell Snort to start looking for the specified pattern after the first 5 bytes of the payload. and depth search rule.4 depth The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. The offset and depth keywords may be used together.5. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. Example The following example shows use of a combined content.Example This example tells the content pattern matcher to look at the raw traffic. As this keyword is a modifier to the previous content keyword.) 3. This keyword allows values from -65535 to 65535. offset:4. You can not use depth with itself. The minimum allowed value is 1. within:10. or depth (to modify the same content). alert tcp any any -> any any (content:"ABC".5. The maximum allowed value for this keyword is 65535.5.) 147 . The distance and within keywords may be used together. You can not use within with itself. This keyword allows values greater than or equal to pattern length being searched. alert tcp any any -> any any (content:"ABC".{1}DEF/.6) rule option.1 ). distance:1. offset.7 within The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword ( See Section 3.5. You can not use distance with itself.5.5). This can be thought of as exactly the same thing as offset (See Section 3. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. The value can also be set to a string value referencing a variable extracted by the byte extract keyword in the same rule. Format within:[<byte_count>|<var_name>]. offset. Examples This rule constrains the search of EFG to not go past 10 bytes past the ABC match. or depth (to modify the same content). It’s designed to be used in conjunction with the distance (Section 3.5. Format distance:[<byte_count>|<var_name>]. The distance and within keywords may be used together. content:"DEF". content:"EFG". Example The rule below maps to a regular expression of /ABC.3.) 3. This keyword allows values from -65535 to 65535. 5. http_client_body. there must be a content in the rule before http cookie is specified. The cookie buffer also includes the header name (Cookie for HTTP requests or Set-Cookie for HTTP responses). http_cookie. The Cookie Header field will be extracted only when this option is configured. Examples This rule constrains the search for the pattern ”EFG” to the raw body of an HTTP client request. content:"EFG". Format http_client_body.8 http client body The http client body keyword is a content modifier that restricts the search to the body of an HTTP client request. This keyword is dependent on the enable cookie config option. As this keyword is a modifier to the previous content keyword. Examples This rule constrains the search for the pattern ”EFG” to the extracted Cookie Header field of a HTTP client request. alert tcp any any -> any 80 (content:"ABC". If enable cookie is not specified. As this keyword is a modifier to the previous content keyword. The extracted Cookie Header field may be NORMALIZED.5. the cookie still ends up in HTTP header.2. Format http_cookie.) ! △NOTE The http cookie modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. content:"EFG".6). alert tcp any any -> any 80 (content:"ABC". per the configuration of HttpInspect (see 2.) ! △NOTE The http client body modifier is not allowed to be used with the rawbytes modifier for the same content. there must be a content in the rule before ’http client body’ is specified.3.2. 148 . When enable cookie is not specified. 3. The amount of data that is inspected with this option depends on the post depth config option of HttpInspect. Pattern matches with this keyword wont work when post depth is set to -1. using http cookie is the same as using http header. http cookie or fast pattern modifiers for the same content. Examples This rule constrains the search for the pattern ”EFG” to the extracted Unnormalized Cookie Header field of a HTTP client request.6).2.2.6). content:"EFG".2. http_header.6). Examples This rule constrains the search for the pattern ”EFG” to the extracted Header fields of a HTTP client request or a HTTP server response.5. This keyword is dependent on the enable cookie config option. there must be a content in the rule before http raw cookie. there must be a content in the rule before http header is specified. As this keyword is a modifier to the previous content keyword.) ! △NOTE The http raw cookie modifier is not allowed to be used with the rawbytes. content:"EFG". The extracted Header fields may be NORMALIZED.5. The Cookie Header field will be extracted only when this option is configured. alert tcp any any -> any 80 (content:"ABC". Format http_raw_cookie. http_raw_cookie. per the configuration of HttpInspect (see 2. Format http_header. alert tcp any any -> any 80 (content:"ABC".3. As this keyword is a modifier to the previous content keyword.) ! △NOTE The http header modifier is not allowed to be used with the rawbytes modifier for the same content. 149 . 3. ) ! △NOTE modifier is not allowed to be used with the rawbytes modifier for the same content.6)_method. alert tcp any any -> any 80 (content:"ABC". Examples This rule constrains the search for the pattern ”GET” to the extracted Method from a HTTP client request.5.) ! △NOTE. content:"EFG". Format http_method. Format http_raw_header. Using a content rule option followed by a http uri modifier is the same as using a uricontent by itself (see: 3. As this keyword is a modifier to the previous content keyword. As this keyword is a modifier to the previous content keyword. alert tcp any any -> any 80 (content:"ABC".13 http method The http method keyword is a content modifier that restricts the search to the extracted Method from a HTTP client request. As this keyword is a modifier to the previous content keyword.20). The http method 3. there must be a content in the rule before http method is specified. there must be a content in the rule before http raw header is specified. 150 .14 http uri The http uri keyword is a content modifier that restricts the search to the NORMALIZED request URI field . content:"GET". http_raw_header. http header or fast pattern The http raw modifiers for the same content.3.5. there must be a content in the rule before http uri is specified. 3.5.5.2. http uri or fast pattern modifiers for the same content.5.6). 3. content:"EFG". 3. Examples This rule constrains the search for the pattern ”EFG” to the UNNORMALIZED URI. Examples This rule constrains the search for the pattern ”EFG” to the NORMALIZED URI. http_uri. As this keyword is a modifier to the previous content keyword. http_raw_uri. there must be a content in the rule before http raw uri is specified. content:"EFG".15 http raw uri The http raw uri keyword is a content modifier that restricts the search to the UNNORMALIZED request URI field . Format http_stat_code. there must be a content in the rule before http stat code is specified.Format http_uri. alert tcp any any -> any 80 (content:"ABC". Format http_raw_uri.16 http stat code The http stat code keyword is a content modifier that restricts the search to the extracted Status code field from a HTTP server response.2.) ! △NOTE The http raw uri modifier is not allowed to be used with the rawbytes.) ! △NOTE The http uri modifier is not allowed to be used with the rawbytes modifier for the same content.5. The Status Code field will be extracted only if the extended reponse inspection is configured for the HttpInspect (see 2. 151 . alert tcp any any -> any 80 (content:"ABC". As this keyword is a modifier to the previous content keyword. 6). 3. ’double encode’. There are eleven keywords associated with http encode.) ! △NOTE The http stat code modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. ’iis encode’ and ’bare byte’ determine the encoding type which would trigger the alert. The config option ’normalize headers’ needs to be turned on for rules to work with the keyword ’header’. Format http_stat_msg. alert tcp any any -> any 80 (content:"ABC". Negation is allowed on these keywords.2.Examples This rule constrains the search for the pattern ”200” to the extracted Status Code field of a HTTP server response. http_stat_msg. ’uencode’. ’ascii’. ’non ascii’.) ! △NOTE The http stat msg modifier is not allowed to be used with the rawbytes or fast pattern modifiers for the same content. there must be a content in the rule before http stat msg is specified. The keywords ’utf8’. The keywords ’uri’. This rule option will not be able to detect encodings if the specified HTTP fields are not NORMALIZED.2. The keyword ’cookie’ is dependent on config options ’enable cookie’ and ’normalize cookies’ (see 2.6). Examples This rule constrains the search for the pattern ”Not Found” to the extracted Status Message field of a HTTP server response. The Status Message field will be extracted only if the extended reponse inspection is configured for the HttpInspect (see 2. These keywords can be combined using a OR operation. 152 .5.17 http stat msg The http stat msg keyword is a content modifier that restricts the search to the extracted Status Message field from a HTTP server response. content:"200". As this keyword is a modifier to the previous content keyword. 3. alert tcp any any -> any 80 (content:"ABC".5.2. ’header’ and ’cookie’ determine the HTTP fields used to search for a particular encoding type.18 http encode The http encode keyword will enable alerting based on encoding type present in a HTTP client request or a HTTP server response (per the configuration of HttpInspect 2. ’base36’.6). http_stat_code. content:"Not Found". Though this may seem to be overhead. http stat msg. As this keyword is a modifier to the previous content keyword. http raw cookie. ! △NOTE The fast pattern modifier cannot be used with the following http content modifiers: http cookie. it is useful if a shorter content is more ”unique” than the longer content. http_encode:uri.5. Note. however.) ! △NOTE Negation(!) and OR(|) operations cannot be used in conjunction with each other for the http encode keyword.) alert tcp any any -> any any (msg:"No UTF8". The better the content used for the fast pattern matcher. The OR and negation operations work only on the encoding type field and not on http buffer type field. The fast pattern matcher is used to select only those rules that have a chance of matching by using a content in the rule for selection and only evaluating that rule if the content is found in the payload.19 fast pattern The fast pattern keyword is a content modifier that sets the content within a rule to be used with the fast pattern matcher. meaning the shorter content is less likely to be found in a packet than the longer content. 3.!utf8. http raw uri. The fast pattern option may be specified only once per rule. it can significantly reduce the number of rules that need to be evaluated and thus increases performance. http raw header. http_encode:ur>. [!]<encoding type> http_encode:[uri|header|cookie]. Since the default behavior of fast pattern determination is to use the longest content in the rule. that it is okay to use the fast pattern modifier if another http content modifier not mentioned above is used in combination with one of the above to modify the same content. the less likely the rule will needlessly be evaluated. [!][<utf8|double_encode|non_ascii|base36|uencode|bare_byte|ascii|iis_e Examples alert tcp any any -> any any (msg:"UTF8/UEncode Encoding present".utf8|uencode. http stat code. there must be a content rule option in the rule before fast pattern is specified. 153 . such as %2f or directory traversals.! △NOTE modifier can be used with negated contents only if those contents are not modified with The fast pattern offset. fast_pattern.<length> are mutually exclusive. content:"IJKLMNO". the URI: 154 . The reason is that the things you are looking for are normalized out of the URI buffer. alert tcp any any -> any 80 (content:"ABCDEFGH". distance or within. fast_pattern. alert tcp any any -> any 80 (content:"ABCDEFGH".<length> can be used to specify that only a portion of the content should be used for the fast pattern matcher.5. the meaning is simply to use the specified content as the fast pattern content for the rule. but still evaluate the content rule option as ”IJKLMNO”.) This rule says to use ”JKLMN” as the fast pattern content. alert tcp any any -> any 80 (content:"ABCDEFGH". depth. distance or within. When used alone. For example.20 uricontent The uricontent keyword in the Snort rule language searches the NORMALIZED request URI field. The optional argument only can be used to specify that the content should only be used for the fast pattern matcher and should not be evaluated as a rule option. content:"IJKLMNO". depth. This is useful. even though it is shorter than the earlier pattern ”ABCDEFGH”. The optional argument <offset>. Note that (1) the modified content must be case insensitive since patterns are inserted into the pattern matcher in a case insensitive manner. nocase. Format The fast pattern option can be used alone or optionally take arguments. as it saves the time necessary to evaluate the rule option. for example. fast_pattern:<offset>.) 3. fast_pattern:only. fast_pattern:1. these rules will not alert.5. This is useful if the pattern is very long and only a portion of the pattern is necessary to satisfy ”uniqueness” thus reducing the memory required to store the entire pattern in the fast pattern matcher. fast_pattern:only.<length>. This is equivalent to using the http uri modifier to a content keyword. (2) negated contents cannot be used and (3) contents cannot have any positional modifiers such as offset. if a known content must be located in the payload independent of location in the payload. As such if you are writing rules that include things that are normalized. content:"IJKLMNO". The optional Examples This rule causes the pattern ”IJKLMNO” to be used with the fast pattern matcher. ! △NOTE arguments only and <offset>.) This rule says to use the content ”IJKLMNO” for the fast pattern matcher and that the content should only be used for the fast pattern matcher and not evaluated as a content rule option. 6 within 3. the minimum length.5.2. You can write rules that look for the non-normalized content by using the content option. urilen:[<|>]<number>. The following example will match URIs that are 5 bytes long: 155 .6: Uricontent Modifiers Modifier Section nocase 3.5.4 offset 3. the URI: /cgi-bin/aaaaaaaaaaaaaaaaaaaaaaaaaa/.2 depth 3.5 distance 3. the maximum length. write the content that you want to find in the context that the URI will be normalized./scripts/./winnt/system32/cmd.5. For example.5.5.21 urilen The urilen keyword in the Snort rule language specifies the exact length. If you wish to uricontent search the UNNORMALIZED request URI field. Format uricontent:[!]"<content string>".1) uricontent can be used with several of the modifiers available to the content keyword..7 fast pattern 3.19 This option works in conjunction with the HTTP Inspect preprocessor specified in Section 2. or range of URI lengths to match..exe?/c+ver will get normalized into: /winnt/system32/cmd.exe?/c+ver Another example.5. ! △NOTE cannot be modified by a rawbytes modifier or any of the other HTTP modifiers. (See Section 3. use the http raw uri modifier with a content option. if Snort normalizes directory traversals.5.6. These include: Table 3. do not include directory traversals.%252fp%68f? will get normalized into: /cgi-bin/phf? When writing a uricontent rule.%c0%af. 3.5. Format urilen:min<>max.. D. P.urilen:5. then verifies that there is not a newline character within 50 bytes of the end of the PASS string. For example. isdataat:50. ignoring any decoding that was done by the preprocessors. 3. The following example will match URIs that are greater than 5 bytes and less than 10 bytes: urilen:5<>10. The post-re modifiers set compile time flags for the regular expression.2. This modifier will work with the relative modifier as long as the previous content match was in the raw packet data. \ content:!"|0a|". The following example will match URIs that are shorter than 5 bytes: urilen:<5. When the rawbytes modifier is specified with isdataat.8. C. would alert if there were not 10 bytes after ”foo” before the payload ended. S and Y.relative. relative|rawbytes].5. ! △NOTE R (relative) and B (rawbytes) are not allowed with any of the HTTP modifiers such as U. I. It will alert if a certain amount of data is not present within the payload.org Format pcre:[!]"(/<regex>/|m<delim><regex><delim>)[ismxAEGRUBPHMCOIDKYS]". within:50. 3. isdataat:!10. For more detail on what can be done via a pcre regular expression. K. Format isdataat:[!]<int>[. optionally looking for data relative to the end of the previous content match.pcre. and 3. it looks at the raw packet data.6. This option works in conjunction with the HTTP Inspect preprocessor specified in Section 2. A ! modifier negates the results of the isdataat test. M. 3. then verifies there is at least 50 bytes after the end of the string PASS.22 isdataat Verify that the payload has data at a specified location.9 for descriptions of each modifier.23 pcre The pcre keyword allows rules to be written using perl compatible regular expressions. 156 .) This rule looks for the string PASS exists in the packet. the rule with modifiers content:"foo". The modifiers H.5. check out the PCRE web site. See tables 3.7. Example alert tcp any any -> any 111 (content:"PASS". See 2. in that it simply sets a reference for other relative rule options ( byte test. This file data can point to either a file or a block of data. byte jump. PCRE when used without a uricontent only evaluates the first URI.5.7: Perl compatible modifiers for pcre case insensitive include newlines in the dot metacharacter By default. the string is treated as one big line of characters. Format file_data. alert ip any any -> any any (pcre:"/BLAH/i". Inverts the ”greediness” of the quantifiers so that they are not greedy by default. For this option to work with HTTP response. When used with argument mime it places the cursor at the beginning of the base64 decoded MIME attachment or base64 decoded MIME body.i s m x Table 3. you must use either a content or a uricontent. but become greedy if followed by ”?”. certain HTTP Inspect options such as extended response inspection and inspect gzip (for decompressed gzip data) needs to be turned on. pcre) to use. In order to use pcre to inspect all URIs. $ also matches immediately before the final character if it is a newline (but not before any other newlines).) ! △NOTE Snort’s handling of multiple URIs with PCRE does not work as expected. This option will operate similarly to the dce stub data option added with DCE/RPC2. This option matches if there is HTTP response body or SMTP body or SMTP MIME base64 decoded data.24 file data This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of either the entity body of a HTTP response or the SMTP body data. ˆ and $ match at the beginning and ending of the string. When m is set. ˆ and $ match immediately following or immediately before any newline in the buffer.6 for more details.7 for more details. Example This example performs a case-insensitive search for the string BLAH in the payload.8: PCRE compatible modifiers for pcre the pattern must match only at the start of the buffer (same as ˆ ) Set $ to match only at the end of the subject string. See 2. as well as the very start and very end of the buffer.2. Without E.2. This is dependent on the SMTP config option enable mime decoding. file_data:mime. whitespace data characters in the pattern are ignored except when escaped or inside a character class A E G Table 3. 3. 157 . This modifier is not allowed with the HTTP request uri buffer modifier(U) for the same content. ! △NOTE Multiple base64 encoded attachments in one packet are pipelined. Match unnormalized HTTP request or HTTP response cookie (Similar to http raw cookie). Match the unnormalized HTTP request uri buffer (Similar to http raw uri).25 base64 decode This option is used to decode the base64 encoded data. Match normalized HTTP request method (Similar to http method) Match normalized HTTP request or HTTP response cookie (Similar to http cookie).1. 158 . This modifier is not allowed with the unnormalized HTTP request or HTTP response cookie modifier(K) for the same content.9: Snort specific modifiers for pcre Match relative to the end of the last pattern match. Format base64_decode[:[bytes <bytes_to_decode>][. within:3. This option unfolds the data before decoding it. ][offset <offset>[. (Similar to distance:0. Example alert tcp any 80 -> any any(msg:"foo at the start of http response body". It completely ignores the limits while evaluating the pcre pattern specified.\ file_data:mime.R U I P H D M C K S Y B O Table 3. This option is particularly useful in case of HTTP headers such as HTTP authorization headers. content:"foo". content:"foo". nocase. relative]]]. This modifier is not allowed with the normalized HTTP request or HTTP response cookie modifier(C) for the same content.) 3.). within:10. Match unnormalized HTTP request or HTTP response header (Similar to http raw header). the decoded URI buffers (Similar to uricontent and http uri). \ file_data. This modifier is not allowed with the unnormalized HTTP request uri buffer modifier(I) for the same content.5. This modifier is not allowed with the unnormalized HTTP request or HTTP response header modifier(D) for the same content.) alert tcp any any -> any any(msg:"MIME BASE64 Encoded Data".3). content:"Authorization: NTLM". This option does not take any arguments. base64_data. byte jump. offset 6. content:"NTLMSSP".5. base64_decode.) alert tcp any any -> any any (msg:"Authorization NTLM". base64_data. Examples alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"Base64 Encoded Data". Format base64_data.Option bytes offset relative Description Number of base64 encoded bytes to decode. Determines the offset relative to the doe ptr when the option relative is specified or relative to the start of the packet payload to begin inspection of base64 encoded data.) 3. When this option is not specified we look for base64 encoded data till either the end of header line is reached or end of packet payload is reached. \ content:"foo bar". in that it simply sets a reference for other relative rule options ( byte test. http_header. 159 . \ content:"NTLMSSP". relative. If folding is not present the search for base64 encoded data will end when we see a carriage return or line feed or both without a following space or tab. within:8. ! △NOTE This option can be extended to protocols with folding similar to HTTP. base64_data. Fast pattern content matches are not allowed with this buffer. within:20. \ within:20. \ base64_decode:bytes 12. base64_decode:relative.) alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"Authorization NTLM". Specifies the inspection for base64 encoded data is relative to the doe ptr. This argument takes positive and non-zero values only. The rule option base64 decode needs to be specified before the base64 data option. This argument takes positive and non-zero values only. \ content:"Authorization:". This option matches if there is base64 decoded buffer. The above arguments to base64 decode are optional. pcre) to use. This option needs to be used in conjunction with base64 data for any other relative rule options to work on base64 decoded buffer. This option will operate similarly to the file data option. ! △NOTE Any non-relative rule options in the rule will reset the cursor(doe ptr) from base64 decode buffer.26 base64 data This option is used to place the cursor (used to walk the packet payload in rules processing) at the beginning of the base64 decode buffer if present. 5.Process data as big endian (default) • little .Converted string data is represented in hexadecimal • dec . within:8. relative. http_header.5. [!]<operator>. then the operator is set to =. string. \ base64_decode:bytes 12. Option bytes to convert operator Any of the operators can also include ! to check if the operator is not true. <offset> \ [. <endian>][.2.10 ’<’ | ’=’ | ’>’ | ’&’ | ’ˆ’ 0 . ! △NOTE Snort uses the C operators for each of these operators. If ! is specified without an operator. For a more detailed explanation. <value>.bitwise AND • ˆ . offset 6.equal • & .Process data as little endian string number type Data is stored in string format in packet Type of number being read: • hex .greater than • = . dce].4294967295 -65535 to 65535 Description Number of bytes to pick up from the packet. please read Section 3. See section 2. 2 and 4.less than • > . Format byte_test:<bytes to convert>.Converted string data is represented in decimal • oct . bytes operator value offset = = = = 1 .9.13 for quick reference).Converted string data is represented in octal dce Let the DCE/RPC 2 preprocessor determine the byte order of the value to be converted.27 byte test Test a byte field against a specific value (with operator). If the & operator is used. \ content:"NTLMSSP". then it would be the same as using if (data & value) { do something(). relative][. The allowed values are 1 to 10 when used without dce.2. \ content:"Authorization:".) 3. base64_data. <number type>][.} Examples alert udp $EXTERNAL_NET any -> $HOME_NET any \ 160 . Capable of testing binary values or converting representative byte strings to their binary equivalent and testing them. Operation to perform to test the value: • < .Example alert tcp any any -> any any (msg:"Authorization NTLM".13 for a description and examples (2. bytes offset mult_value post_offset = = = = 1 .5. 0xdeadbeef. \ content:"|00 04 93 F3|". string. 0.) alert udp any any -> any 1234 \ (byte_test:4. relative. >.28 byte jump The byte jump keyword allows rules to be written for length encoded protocols trivially. \ msg:"got 1234!". 1000. \ content:"|00 00 00 07|". distance:4. <endian>][. \ content:"|00 00 00 07|".10 -65535 to 65535 0 . hex.) alert udp any any -> any 1236 \ (byte_test:2. within:4. \ content:"|00 04 93 F3|". string. relative][. 1234567890. string. =. 1000. dec. rules can be written that skip over specific portions of length-encoded protocols and perform detection in very specific locations. 20. 0.) alert tcp $EXTERNAL_NET any -> $HOME_NET any \ (msg:"AMD procedure 7 plog overflow". please read Section 3. distance:4. \ byte_test:4. <offset> \ [. align][. dec. 123.65535 -65535 to 65535 161 . 0.9.(msg:"AMD procedure 7 plog overflow". dec. string. For a more detailed explanation. relative. 20. from_beginning][.) 3. This pointer is known as the detect offset end pointer. Format byte_jump:<bytes_to_convert>. =. dce]. 0. <number_type>]\ [. or doe ptr. dec. post_offset <adjustment value>][. \ msg:"got DEADBEEF!". \ msg:"got 1234567890!". string. multiplier <mult_value>][. =. \ byte_test:4.) alert udp any any -> any 1235 \ (byte_test:3.5. convert them to their numeric representation. >. =. =.) alert udp any any -> any 1238 \ (byte_test:8.) alert udp any any -> any 1237 \ (byte_test:10. then skips that far forward in the packet. string. within:4. move that many bytes forward and set a pointer for later detection. By having an option that reads the length of a portion of data. The byte jump option does this by reading some number of bytes. 12. 1234. 0. \ msg:"got 12!". \ msg:"got 123!". 29 byte extract The byte extract keyword is another useful option for writing rules against length-encoded protocols. Process data as big endian (default) Process data as little endian Use the DCE/RPC 2 preprocessor to determine the byte-ordering. distance:4. align. \ msg:"statd format string buffer overflow". \ byte_jump:4. string. It reads in some number of bytes from the packet payload and saves it to a variable. See section 2. The DCE/RPC 2 preprocessor must be enabled for this option to work.13 for a description and examples (2. Let the DCE/RPC 2 preprocessor determine the byte order of the value to be converted. <name> \ [. Number of bytes into the payload to start processing Use an offset relative to last pattern match Multiply the number of calculated bytes by <value> and skip forward that number of bytes. ! △NOTE Only two byte extract variables may be created per rule. Skip forward or backwards (positive of negative value) by <value> number of bytes after the other jump options have been applied. align <align value>][. <offset>. 900. \ byte_test:4.Option bytes to convert offset relative multiplier <value> big little string hex dec oct align from beginning post offset <value> dce Description Number of bytes to pick up from the packet.. multiplier <multiplier value>][. <end. relative. Example alert udp any any -> any 32770:34000 (content:"|00 01 86 B8|". Data is stored in string format in packet Converted string data is represented in hexadecimal Converted string data is represented in decimal Converted string data is represented in octal Round the number of converted bytes up to the next <value>-byte boundary. Other options which use byte extract variables A byte extract rule option detects nothing by itself.2. They can be re-used in the same rule any number of times. Use an offset relative to last pattern match Multiply the bytes read from the packet by <value> and save that number into the variable. 12.5.2. instead of using hard-coded values. \ content:"|00 00 00 01|". Format byte_extract:<bytes_to_extract>. The allowed values are 1 to 10 when used without dce. 2 and 4. relative. <value> may be 2 or 4. >. <number_type>][.) 3. Its use is in extracting packet data for use in other rule options. 20. These variables can be referenced later in the rule. Here is a list of places where byte extract variables can be used: 162 .13 for quick reference). within:4. relative][. \ byte_extract:1. depth:str_depth.) 3. absolute_offset <value>|relative_offset <value>]. but it is unknown at this time which services may be exploitable.31 asn1 The ASN.1 type is greater than 500. and looks for various malicious encodings. Compares ASN. offset:str_offset. 1. relative_offset 0’.) 3. relative offset has one argument.5. This is the relative offset from the last content match or byte test/jump. The syntax looks like. Format ftpbounce.established. This means that if an ASN. “oversize length 500”. Offset may be positive or negative.\ classtype:misc-attack. This is the absolute offset from the beginning of the packet. depth.1 type lengths with the supplied argument. \ content:"bad stuff". the whole option evaluates as true. Format asn1:[bitstring_overflow][. Option bitstring overflow double overflow oversize length <value> Description Detects invalid bitstring encodings that are known to be remotely exploitable.Rule Option content/uricontent byte test byte jump isdataat Arguments that Take Variables offset. asn1:bitstring_overflow. absolute offset <value> relative offset <value> 163 . This is known to be an exploitable function in Microsoft. So if you wanted to start decoding and ASN. pcre:"/ˆPORT/smi". the offset value.30 ftpbounce The ftpbounce keyword detects FTP bounce attacks. content:"PORT". Example alert tcp $EXTERNAL_NET any -> $HOME_NET 21 (msg:"FTP PORT bounce attempt".1 detection plugin decodes a packet or a portion of a packet. value offset offset Examples This example uses two variables to: • Read the offset of a string from a byte at offset 0. double_overflow][. within offset. The ASN. the option and the argument are separated by a space or a comma. the offset number. oversize_length <value>][. • Use these values to constrain a pattern match to a smaller area. Detects a double ASCII encoding that is larger than a standard buffer. Offset values may be positive or negative. sid:3441. ftpbounce. then this keyword is evaluated as true. • Read the depth of a string from a byte at offset 1. you would specify ’content:"foo". str_depth. if you wanted to decode snmp packets. Multiple options can be used in an ’asn1’ option and the implied logic is boolean OR. This keyword must have one argument which specifies the length to compare against. \ msg:"Bad Stuff detected within field". alert tcp any any -> any any (byte_extract:1.1 sequence right after the content “foo”. If an option has an argument. For example. 0. absolute offset has one argument. So if any of the arguments evaluate as true. you would say “absolute offset 0”. The preferred usage is to use a space between option and argument. distance. str_offset.1 options provide programmatic detection capabilities as well as some more dynamic type detection. rev:1. nocase. \ flow:to_server.5. absolute_offset 0. 3. 164 . relative_offset 0. 3. \ asn1:oversize_length 10000. CVE-2004-0396: ”Malformed Entry Modified and Unchanged flag insertion”.5. \ flow:to_server.38 Payload Detection Quick Reference Table 3. ! △NOTE This plugin cannot do detection over encrypted sessions.) alert tcp any any -> any 80 (msg:"ASN1 Relative Foo".35 dce stub data See the DCE/RPC 2 Preprocessor section 2.5. cvs:invalid-entry. Examples alert tcp any any -> any 2401 (msg:"CVS Invalid-entry".34 dce opnum See the DCE/RPC 2 Preprocessor section 2.5. which is a way of causing a heap overflow (see CVE-2004-0396) and bad pointer derefenece in versions of CVS 1. \ asn1:bitstring_overflow. 3.) 3. Default CVS server ports are 2401 and 514 and are included in the default ports for stream reassembly.2. SSH (usually port 22).) 3.10: Payload detection rule option keywords Keyword content Description The content keyword allows the user to set rules that search for specific content in the packet payload and trigger response based on that data.5.2. content:"foo".2.37 ssl state See the SSL/TLS Preprocessor section 2.established. 3.13 for a description and examples of using this rule option.2.g.15 and before.5. 3.32 cvs The CVS detection plugin aids in the detection of: Bugtraq-10384. e.Examples alert udp any any -> any 161 (msg:"Oversize SNMP Length".33 dce iface See the DCE/RPC 2 Preprocessor section 2. Format cvs:<option>.13 for a description and examples of using this rule option. Option invalid-entry Description Looks for an invalid Entry string.36 ssl version See the SSL/TLS Preprocessor section 2.5.2.5.13 for a description and examples of using this rule option.11 for a description and examples of using this rule option.11.11 for a description and examples of using this rule option. 13. ignoring any decoding that was done by preprocessors. =. The distance keyword allows the rule writer to specify how far into a packet Snort should ignore before starting to search for the specified pattern relative to the end of the previous pattern match. and looks for various malicious encodings. The asn1 detection plugin decodes a packet or a portion of a packet. Example alert ip any any -> any any \ (msg:"First Fragment".13. Format ttl:[<. >. See the DCE/RPC 2 Preprocessor section 2. 165 . <=. This option keyword was intended for use in the detection of traceroute attempts.2.6. The byte test keyword tests a byte field against a specific value (with operator).6.13. fragbits:M. The uricontent keyword in the Snort rule language searches the normalized request URI field. fragoffset:0. See the DCE/RPC 2 Preprocessor section 2. ttl:[<number>]-[<number>]. This keyword takes numbers from 0 to 255. The within keyword is a content modifier that makes sure that at most N bytes are between pattern matches using the content keyword. The offset keyword allows the rule writer to specify where to start searching for a pattern within a packet.2. then skip that far forward in the packet.2. Format fragoffset:[!|<|>]<number>. 3. The ftpbounce keyword detects FTP bounce attacks. you could use the fragbits keyword and look for the More fragments option in conjunction with a fragoffset of 0. The cvs keyword detects invalid entry strings. The depth keyword allows the rule writer to specify how far into a packet Snort should search for the specified pattern. See the DCE/RPC 2 Preprocessor section 2.) 3. ttl:<3. To catch all the first fragments of an IP session. The isdataat keyword verifies that the payload has data at a specified location. This example checks for a time-to-live value that between 3 and 5. The byte jump keyword allows rules to read the length of a portion of data. >=]<number>.2 ttl The ttl keyword is used to check the IP time-to-live value.6 Non-Payload Detection Rule Options 3. Example This example checks for a time-to-live value that is less than 3.1 fragoffset The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value. The pcre keyword allows rules to be written using perl compatible regular expressions. ttl:=<5. Few other examples are as follows: ttl:<=5. for example. ttl:>=5. Format tos:[!]<number>. Format id:<number>. ttl:=5. 166 . the value 31337 is very popular with some hackers.4 id The id keyword is used to check the IP ID field for a specific value.3 tos The tos keyword is used to check the IP TOS field for a specific value. 3. Example This example looks for a tos value that is not 4 tos:!4. This example checks for a time-to-live value that between 0 and 5. Example This example looks for the IP ID of 31337. ttl:5-3. Some tools (exploits.ttl:3-5. This example checks for a time-to-live value that between 5 and 255. id:31337. The following examples are NOT allowed by ttl keyword: ttl:=>5. 3. ttl:-5. ttl:5-. scanners and other odd programs) set this field specifically for various purposes.6 <STATE_NAME>][. Example This example looks for a TCP acknowledge number of 0.logged_in.logged_in. Format window:[!]<number>. Example This example looks for a TCP window size of 55808. content:"LIST".13 window The window keyword is used to check for a specific TCP window size. flowbits:isset. ack:0. seq:0. 170 .6.) 3. 3. Examples alert tcp any 143 -> any any (msg:"IMAP login". flowbits:set. 3. Format ack:<number>.12 ack The ack keyword is used to check for a specific TCP acknowledge number.Format flowbits:[set|unset|toggle|isset|isnotset|noalert|reset][.) alert tcp any any -> any 143 (msg:"IMAP LIST". window:55808. Example This example looks for a TCP sequence number of 0. flowbits:noalert.11 seq The seq keyword is used to check for a specific TCP sequence number. content:"OK LOGIN". <GROUP_NAME>].6. Format seq:<number>.6. Format icode:min<>max.6. This is useful because some covert channel programs use static ICMP fields when they communicate.6. itype:[<|>]<number>. Format icmp_seq:<number>. icode:>30.3.15 icode The icode keyword is used to check for a specific ICMP code value.17 icmp seq The icmp seq keyword is used to check for a specific ICMP sequence value. 171 .6. 3. This particular plugin was developed to detect the stacheldraht DDoS agent. Example This example looks for an ICMP ID of 0. icode:[<|>]<number>. This particular plugin was developed to detect the stacheldraht DDoS agent. Format icmp_id:<number>. Example This example looks for an ICMP code greater than 30.14 itype The itype keyword is used to check for a specific ICMP type value. Format itype:min<>max. itype:>30.6. Example This example looks for an ICMP type greater than 30. 3.16 icmp id The icmp id keyword is used to check for a specific ICMP ID value. 3. This is useful because some covert channel programs use static ICMP fields when they communicate. icmp_id:0. the RPC keyword is slower than looking for the RPC values by using normal content matching.6. alert ip any any -> any any (ip_proto:igmp. Example This example looks for IGMP traffic. 3. alert tcp any any -> any 111 (rpc:100000.) 172 .) 3.19 ip proto The ip proto keyword allows checks against the IP protocol header. Warning Because of the fast pattern matching engine. For a list of protocols that may be specified by name. and procedure numbers in SUNRPC CALL requests. Format rpc:<application number>. 3. Example The following example looks for an RPC portmap GETPORT request. version. [<version number>|*].Example This example looks for an ICMP Sequence of 0. 3.20 sameip The sameip keyword allows rules to check if the source ip is the same as the destination IP.6. icmp_seq:0. Format sameip.18 rpc The rpc keyword is used to check for a RPC application. Format ip_proto:[!|>|<] <name or number>. Example This example looks for any traffic where the Source IP and the Destination IP is the same. Wildcards are valid for both version and procedure numbers by using ’*’.6. see /etc/protocols. *. alert ip any any -> any any (sameip.). [<procedure number>|*]>. The ttl keyword is used to check the IP time-to-live value. • The optional noalert parameter causes the rule to not generate an alert when it matches.less than • > .greater than or equal Example For example. fastpath]. to disable TCP reassembly for client traffic when we see a HTTP 200 Ok Response message. to look for a session that is less that 6 bytes from the client side. 173 .not equal • <= . • The optional fastpath parameter causes Snort to ignore the rest of the connection.3. Where the operator is one of the following: • < . Example For example. content:"200 OK". <server|client|both>[.6.noalert.11: Non-payload detection rule option keywords Keyword fragoffset ttl tos Description The fragoffset keyword allows one to compare the IP fragment offset field against a decimal value. use: alert tcp any 80 -> any any (flow:to_client.) 3. established. Format stream_size:<server|client|both|either>. The tos keyword is used to check the IP TOS field for a specific value.client.6. stream_reassemble:disable. <operator>.<.23 Non-Payload Detection Quick Reference Table 3.6.equal • != .6. Format stream_reassemble:<enable|disable>. use: alert tcp any any -> any any (stream_size:client. ! △NOTE The stream size option is only available when the Stream5 preprocessor is enabled. noalert][.greater than • = . as determined by the TCP sequence numbers.) 3.less than or equal • >= . <number>.22 stream size The stream size keyword allows a rule to match traffic according to the number of bytes observed. The fragbits keyword is used to check if fragmentation and reserved bits are set in the IP header.1 logto The logto keyword tells Snort to log all packets that trigger this rule to a special output log file. or even web sessions is very useful.7. The flags keyword is used to check if specific TCP flag bits are present. Format session:[printable|binary|all]. Format logto:"filename".7. This is especially handy for combining data from things like NMAP activity. There are many cases where seeing what users are typing in telnet. The itype keyword is used to check for a specific ICMP type value. or all.7 Post-Detection Rule Options 3. The icmp seq keyword is used to check for a specific ICMP sequence value. The icode keyword is used to check for a specific ICMP code value. It should be noted that this option does not work when Snort is in binary logging mode. The window keyword is used to check for a specific TCP window size. There are three available argument keywords for the session rule option: printable. The all keyword substitutes non-printable characters with their hexadecimal equivalents.) 174 . The flow keyword allows rules to only apply to certain directions of the traffic flow. The dsize keyword is used to test the packet payload size. rlogin. Example The following example logs all printable strings in a telnet packet. The binary keyword prints out data in a binary format.2 session The session keyword is built to extract user data from TCP Sessions. The ip proto keyword allows checks against the IP protocol header. etc. The rpc keyword is used to check for a RPC application. The icmp id keyword is used to check for a specific ICMP ID value. The seq keyword is used to check for a specific TCP sequence number. HTTP CGI scans. log tcp any any <> any 12345 (metadata:service ftp-data. binary. The ipopts keyword is used to check if a specific IP option is present. ftp. 3. and procedure numbers in SUNRPC CALL requests. The printable keyword only prints out data that the user would normally see or be able to type.) Given an FTP data session on port 12345. The ack keyword is used to check for a specific TCP acknowledge number. The flowbits keyword allows rules to track states during a transport protocol session. session:binary. 3. The sameip keyword allows rules to check if the source ip is the same as the destination IP. this example logs the payload bytes in binary form. version. log tcp any any <> any 23 (session:printable. 7. so it should not be used in heavy load situations. The session keyword is best suited for post-processing binary (pcap) log files. described in Section 2.3 on how to use the tagged packet limit config option). See 2. 3.1 any (flowbits:isnotset. Tagged traffic is logged to allow analysis of response codes and post-attack traffic. tag:host. metric • packets .3 resp The resp keyword enables an active response that kills the offending session.tagged. Note that neither subsequent alerts nor event filters will prevent a tagged packet from being logged.11.seconds.) Also note that if you have a tag option in a rule that uses a metric other than packets. <metric>[.Tag packets containing the destination IP address of the packet that generated the initial event. the database output plugin.600. does not properly handle tagged alerts.1. Format tag:<type>. Units are specified in the <metric> field.6.src.Log packets in the session that set off the rule • host .1 any \ (content:"TAGMYPACKETS".conf file (see Section 2. <count>. additional traffic involving the source and/or destination host is tagged. (Note that the tagged packet limit was introduced to avoid DoS situations on high bandwidth sensors for tag rules with a high seconds or bytes counts.7.3 for details.4 any -> 10.packets.1.Log packets from the host that caused the tag to activate (uses [direction] modifier) count • <integer> . tagged alerts will be sent to the same output plugins as the original alert.only relevant if host type is used.Tag packets containing the source IP address of the packet that generated the initial event.4 for details.tagged.5 tag The tag keyword allow rules to log more than just the single packet that triggered the rule. 3. Once a rule is triggered.6.11.4 react The react keyword enables an active response that includes sending a web page or other content to the client and then closing the connection. React can be used in both passive and inline modes.1.Tag the host/session for <count> bytes direction .600.1. direction]. See 2.src. type • session . and Stream reassembly will cause duplicate data when the reassembled packets are logged.Tag the host/session for <count> packets • seconds . The binary keyword does not log any protocol headers below the application layer.Count is specified as a number of units. The default tagged packet limit value is 256 and can be modified by using a config option in your snort. alert tcp any any <> 10. Resp can be used in both passive or inline modes.0. • src .1.Warnings Using the session keyword can slow Snort down considerably.Tag the host/session for <count> seconds • bytes . tag:host.7. Subsequent tagged alerts will cause the limit to reset. Currently. 3. • dst .1. You can disable this packet limit for a particular rule by adding a packets metric to your tag option and setting its count to 0 (This can be done on a global scale by setting the tagged packet limit option in snort. but it is the responsibility of the output plugin to properly handle these special alerts.seconds. Doing this will ensure that packets are tagged for the full amount of seconds or bytes and will not be cut off by the tagged packet limit.1.conf to 0). flowbits:set.) 175 .) alert tcp 10. a tagged packet limit will be used to limit the number of tagged packets regardless of whether the seconds or bytes count has been reached. count:50.7. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated.1.100 any > 10. You can have multiple replacements within a rule. \ detection_filter:track by_src.2. rev:1.) 176 . \ count <c>.6 for more information. See Section 3. after the first 30 failed login attempts: drop tcp 10.7. Snort evaluates a detection filter as the last step of the detection phase.10 detection filter detection filter defines a rate which must be exceeded by a source or destination host before a rule can generate an event.6 for more information. See Section 3. \ sid:1000001. offset:0. 3. Format activated_by:1.2. replace:"<string>". count 30. alert tcp any any -> any 23 (flags:s. seconds 60.9 replace The replace keyword is a feature available in inline mode which will cause Snort to replace the prior matching content with the given string.2. one per content.) 3.1. 3. Both the new string and the content it is to replace must have the same length.7.8 count The count keyword must be used in combination with the activated by keyword. depth:4. \ content:"SSH".2. Format activated_by:1.this rule will fire on every failed login attempt from 10. detection filter has the following format: detection_filter: \ track <by_src|by_dst>. At most one detection filter is permitted per rule.2. 3. flow:established.seconds.100 22 ( \ msg:"SSH Brute Force Attempt".12.100 during one sampling period of 60 seconds. See Section 3. 3.7.to_server. Format activates:1.7 activated by The activated by keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered.1.Example This example logs the first 10 seconds or the tagged packet limit (whichever comes first) of any telnet session.1.7. Example . after evaluating all other rule options (regardless of the position of the filter within the rule source).6 activates The activates keyword allows the rule writer to specify a rule to add when a specific network event occurs. tag:session. seconds <s>.6 for more information. nocase.10. established. The session keyword is built to extract user data from TCP Sessions. Examples This rule logs the first event of this SID every 60 seconds. Replace the prior matching content with the given string of the same length. Available in inline mode only. This can be done using the ‘limit’ type of threshold. C must be nonzero. \ track <by_src|by_dst>. These should incorporate the threshold into the rule. nocase. or using a standalone threshold applied to the same rule. threshold:type limit. Format threshold: \ type <limit|threshold|both>. a rule for detecting a too many login password attempts may require more than 5 attempts.11 Post-Detection Quick Reference Table 3.10302. The resp keyword is used attempt to close sessions when an alert is triggered. For instance. threshold can be included as part of a rule. a detection filter would normally be used in conjunction with an event filter to reduce the number of logged events.txt".7. This keyword implements an ability for users to react to traffic that matches a Snort rule by closing connection and sending a notice. 3. This keyword allows the rule writer to dynamically enable a rule when a specific activate rule is triggered.7. The value must be nonzero.2) as standalone configurations instead. There is no functional difference between adding a threshold to a rule. It makes sense that the threshold feature is an integral part of this rule. \ uricontent:"/robots. It allows the rule writer to specify how many packets to leave the rule enabled for after it is activated. The maximum number of rule matches in s seconds allowed before the detection filter limit to be exceeded.10) within rules. or event filters (2.12: Post-detection rule option keywords Keyword logto session resp react tag activates activated by count replace detection filter Description The logto keyword tells Snort to log all packets that trigger this rule to a special output log file. seconds <s>. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots. Use detection filters (3.4. Since potentially many events will be generated. Some rules may only make sense with a threshold. or you can use standalone thresholds that reference the generator and SID they are applied to. Track by source or destination IP address and if the rule otherwise matches more than the configured rate it will fire. The tag keyword allow rules to log more than just the single packet that triggered the rule. track \ 177 . \ classtype:web-application-activity. reference:nessus.txt access". This means count is maintained for each unique source IP address or each unique destination IP address. flow:to_server.8 Rule Thresholds ! △NOTE Rule thresholds are deprecated and will not be supported in a future release. \ count <c>. Time period over which count is accrued. There is a logical difference. This keyword must be used in combination with the activated by keyword. This keyword allows the rule writer to specify a rule to add when a specific network event occurs.Option track by src|by dst count c seconds s Description Rate is tracked either by source IP address or destination IP address. 3. By writing rules for the vulnerability. the rule is less vulnerable to evasion when an attacker changes the exploit slightly. Rules without content are always evaluated (relative to the protocol and port group in which they reside). FTP is a good example. In FTP. flow:to_server. sid:1000852. Type both alerts once per time interval after seeing m occurrences of the event. threshold:type threshold.txt access". try and have at least one content (or uricontent) rule option in your rule. or for each unique destination IP addresses. tcp. This means count is maintained for each unique source IP addresses.9 Writing Good Rules There are some general concepts to keep in mind when developing Snort rules to maximize efficiency and speed. count 10 . udp. number of rule matching in s seconds that will cause event filter limit to be exceeded. the less likely that rule and all of it’s rule options will be evaluated unnecessarily . rev:1. track \ by_dst. 3. a multi-pattern matcher is used to select rules that have a chance at matching based on a single content.9. look for a the vulnerable command with an argument that is too large. flow:to_server. the client sends: user username_here A simple rule to look for FTP root login attempts could be: 178 .9. seconds 60 . then by ports (ip and icmp use slightly differnet logic). count 10.10302. threshold:type both. potentially putting a drag on performance. sid:1000852. seconds 60.9. \ uricontent:"/robots.10302. If at all possible. While some detection options. time period over which count is accrued. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots.txt". established. \ track by_dst. alert tcp $external_net any -> $http_servers $http_ports \ (msg:"web-misc robots.txt access". Ports or anything else are not tracked. then ignores events for the rest of the time interval. rev:1.2 Catch the Vulnerability. 3. Not the Exploit Try to write rules that target the vulnerability. or destination IP address. \ classtype:web-application-activity. especially when applied to large rule groups like HTTP. such as pcre and byte test. nocase. perform detection in the payload section of the packet. The longer and more unique a content is. then by those with content and those without. established.) 3. nocase. Selecting rules for evaluation via this ”fast” pattern matcher was found to increase performance. instead of a specific exploit.) This rule logs at most one event every 60 seconds if at least 10 events on this SID are fired. to send the username. then ignores any additional events during the time interval. reference:nessus.Option type limit|threshold|both track by src|by dst count c seconds s Description type limit alerts on the 1st m events during the time interval. rate is tracked either by source IP address. s must be nonzero value.it’s safe to say there is generally more ”good” traffic than ”bad”. instead of shellcode that binds a shell. For example. icmp). c must be nonzero value. they are not used by the fast pattern matching engine.1 Content Matching Snort groups rules by protocol (ip. \ classtype:web-application-activity. Type threshold alerts every m times we see this event during the time interval. \ uricontent:"/robots. 3.txt". reference:nessus. For rules with content.3 Catch the Oddities of the Protocol in the Rule Many services typically send the commands in upper case letters. dsize:1. This option is added to allow the fast pattern matcher to select this rule for evaluation only if the content root is found in the payload. pcre:"/user\s+root/i". content:"|13|". On first read. then check the dsize again.alert tcp any any -> any any 21 (content:"user root". as the dsize check is the first option checked and dsize is a discrete check without recursion. each of the following are accepted by most FTP servers: user root user root user root user root user<tab>root To handle all of the cases that the FTP server might handle. verifying this is traffic going to the server on an established session.9. • The rule has a pcre option. followed by root. which is the longest. the payload “aab” would fail. then the dsize option would fail.4 Optimizing Rules The content matching portion of the detection engine has recursion to handle a few evasion cases. the content 0x13 would be found again starting after where the previous 0x13 was found. ignoring case. a packet with 1024 bytes of 0x13 could cause 1023 too many pattern match attempts and 1023 too many dsize checks. once it is found. content:"b". within:1. By looking at this rule snippit. looking for user. For example. even though it is obvious that the payload “aab” has “a” immediately followed by “b”. The way the recursion works now is if a pattern matches. Why? The content 0x13 would be found in the first byte. most unique string in the attack. then look for the pattern again after where it was found the previous time. Reordering the rule options so that discrete checks (such as dsize) are moved to the beginning of the rule speed up Snort. it is obvious the rule looks for a packet with a single byte of 0x13. However. take the following rule: alert ip any any -> any any (content:"a". because of recursion. looking for root. Rules that are not properly written can cause Snort to waste time duplicating checks. the following rule options are not optimized: content:"|13|". immediately followed by “b”. followed at least one space character (which includes tab). 3. For example.) There are a few important things to note in this rule: • The rule has a flow option. \ content:"root". because the first ”a” is not immediately followed by “b”. The following rule options are discrete and should generally be placed at the beginning of any rule: • dsize • flags • flow 179 .) This rule would look for “a”. A packet of 1024 bytes of 0x13 would fail immediately. and if any of the detection options after that pattern fail.) While it may seem trivial to write a rule that looks for the username root. but it is needed. the recursion implementation is not very smart. a good rule will handle all of the odd things that the protocol might handle when accepting the user command. the rule needs more smarts than a simple string match. Without recursion. Repeat until the pattern is not found again or the opt functions all succeed. The optimized rule snipping would be: dsize:1. that may not sound like a smart idea.established. repeating until 0x13 is not found in the payload again. For example. • The rule has a content option. While recursion is important for detection. and because of recursion. A good rule that looks for root login on ftp would be: alert tcp any any -> any 21 (flow:to_server. ....../........ ...... ... ..... as RPC uses simple length based encoding for passing data...... describe each of the fields. and then null bytes to pad the length of the string to end on a 4 byte boundary./bin/sh.............. and figure out how to write a rule to catch this exploit............. taking four bytes..... Let’s break this up.. ......... a random uint32.... the string... ...... unique to each request rpc type (call = 0.5 Testing Numerical Values The rule options byte test and byte jump were written to support writing rules for protocols that have length encoded data.. 89 00 00 00 00 00 00 00 09 00 00 01 00 00 00 00 9c 00 00 87 00 00 00 00 e2 00 02 88 0a 01 01 20 the request id.....• fragbits • icmp id • icmp seq • icode • id • ipopts • ip proto • itype • seq • session • tos • ttl • ack • window • resp • sameip 3....e....... .. ............/............@(:...metasplo it... ....... .. The string “bob” would show up as 0x00000003626f6200... ............... . • Strings are written as a uint32 specifying the length of the string.... ...9.. The number 26 would show up as 0x0000001a... .... @(:... In order to understand why byte test and byte jump are useful.......... There are a few things to note with RPC: • Numbers are written as uint32s.............metasplo it...../ ............ ...system... . RPC was the protocol that spawned the requirement for these two rule options... Starting at the length of the hostname. depth:4. content:"|00 00 00 00|". depth:4. depth:4.gid of requesting user (0) 00 00 00 00 . offset:12. we use: byte_jump:4. aka none) The rest of the packet is the request that gets passed to procedure 1 of sadmind. we want to read 4 bytes. However. Now that we have all the detection capabilities for our rule.length of the client machine name (0x0a = 10) 4d 45 54 41 53 50 4c 4f 49 54 00 00 . 36 bytes from the beginning of the packet. content:"|00 00 00 01|". In english. we need to make sure that our packet is a call to the procedure 1.unix timestamp (0x40283a10 = 1076378128 = feb 10 01:55:28 2004 gmt) 00 00 00 0a . we have decoded enough of the request to write our rule. the vulnerable procedure. This is where byte test is useful.40 28 3a 10 . turn it into a number. aka none) . we are now at: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 which happens to be the exact location of the uid.metasploit 00 00 00 00 . making sure to account for the padding that RPC requires on strings.extra group ids (0) 00 00 00 00 00 00 00 00 . First. depth:4. the value we want to check. offset:16. within:4. content:"|00 00 00 00|". To do that in a Snort rule. content:"|00 01 87 88|". sadmind runs any request where the client’s uid is 0 as root. content:"|00 00 00 01|". and turn those 4 bytes into an integer and jump that many bytes forward. depth:4. As such. content:"|00 00 00 00|". content:"|00 01 87 88|".36.verifier flavor (0 = auth\_null. depth:4. If we do that. but we want to skip over it and check a number value after the hostname. then we want to look for the uid of 0. We don’t care about the hostname. offset:20. within:4. Then. offset:16. byte_jump sadmind.uid of requesting user (0) 00 00 00 00 . we know the vulnerability is that sadmind trusts the uid coming from the client. content:"|00 00 00 01|". depth:4.36. we need to make sure that our packet has auth unix credentials.length of verifier (0. offset:20. offset:4. and jump that many bytes forward. Then. Then. we need to make sure that our packet is an RPC call. content:"|00 00 00 00|". 181 . aligning on the 4 byte boundary. offset:4. offset:12. content:"|00 00 00 01|". let’s put them all together.align. depth:4.align. offset:16. 182 . we do: byte_test:4. depth:8. If the sadmind service was vulnerable to a buffer overflow when reading the client’s hostname.200. depth:4. so we should combine those patterns.align. To do that. We end up with: content:"|00 00 00 00|". we would check the length of the hostname to make sure it is not too large. Our full rule would be: content:"|00 00 00 00|". depth:8. starting 36 bytes into the packet. depth:4.36. byte_test:4. content:"|00 00 00 01 00 byte_jump:4. offset:12. instead of reading the length of the hostname and jumping that many bytes forward.36. content:"|00 00 00 00|".200. offset:12. content:"|00 01 87 88|". depth:4. within:4. offset:16. we would read 4 bytes. 00 00 01|". In Snort. and then make sure it is not too large (let’s say bigger than 200 bytes). offset:4.36. depth:4.The 3rd and fourth string match are right next to each other. content:"|00 00 00 01 00 00 00 01|". content:"|00 01 87 88|".>. turn it into a number. offset:4.>. 1. int minor. the dynamic API presents a means for loading dynamic libraries and allowing the module to utilize certain functions within the main snort code. The remainder of this chapter will highlight the data structures and API functions used in developing preprocessors. and it provides access to the normalized http and alternate data buffers. fatal errors. It is defined in sf dynamic meta.2 DynamicPreprocessorData The DynamicPreprocessorData structure defines the interface the preprocessor uses to interact with snort itself. It also includes information for setting alerts. 4. handling Inline drops. int major.1 DynamicPluginMeta The DynamicPluginMeta structure defines the type of dynamic module (preprocessor. or detection engine). This data structure should be initialized when the preprocessor shared library is loaded. When enabled via the –enabledynamicplugin configure option. errors. The definition of each is defined in the following sections. 183 . the version information. and rules can now be developed as dynamically loadable module to snort. restart. and rules as a dynamic plugin to snort.1 Data Structures A number of data structures are central to the API.Chapter 4 Dynamic Modules Preprocessors. 4.h. char uniqueName[MAX_NAME_LEN]. but typically is limited to a single functionality such as a preprocessor. and path to the shared library. Beware: the definitions herein may be out of date. and debugging info. char *libraryPath. A shared library can implement all three types. rules. It is defined in sf dynamic preprocessor. It includes function to log messages. detection engines. detection capabilities. access to the StreamAPI. Check the header file for the current definition. This includes functions to register the preprocessor’s configuration parsing. int build. exit. 4. and processing functions. } DynamicPluginMeta.1. This includes functions for logging messages.h as: typedef struct _DynamicEngineData { int version. GetRuleData getRuleData.h. LogMsgFunc logMsg. classification. #ifdef HAVE_WCHAR_H DebugWideMsgFunc debugWideMsg. UriInfo *uriBuffers[MAX_URIINFOS]. 4. priority.. #define RULE_MATCH 1 #define RULE_NOMATCH 0 typedef struct _Rule { IPInfo ip. CheckFlowbit flowbitCheck.1.h. PCRECompileFunc pcreCompile. 4. GetPreprocRuleOptFuncs getPreprocOptFuncs. generator and signature IDs. Rule The Rule structure defines the basic outline of a rule and contains the same set of information that is seen in a text rule. 184 . RegisterRule ruleRegister. fatal errors. errors. Check the header file for the current definitions. DetectAsn1 asn1Detect. and debugging info as well as a means to register and check flowbits. char *dataDumpDirectory. u_int8_t *altBuffer. /* NULL terminated array of RuleOption union */ ruleEvalFunc evalFunc.1. That includes protocol. The following structures are defined in sf snort plugin api.4. It and the data structures it incorporates are defined in sf snort packet.1. It also includes a location to store rule-stubs for dynamic rules that are loaded. SetRuleData setRuleData. LogMsgFunc errMsg. RuleInformation info. int *debugMsgLine. It is defined in sf dynamic engine. It also includes a list of rule options and an optional evaluation function. revision. #endif char **debugMsgFile. address and port information and rule information (classification. and a list of references). LogMsgFunc fatalMsg. PCREStudyFunc pcreStudy. PCREExecFunc pcreExec. } DynamicEngineData.3 DynamicEngineData The DynamicEngineData structure defines the interface a detection engine uses to interact with snort itself. RuleOption **options. Additional data structures may be defined to reference other protocol fields.5 Dynamic Rules A dynamic rule should use any of the following data structures. RegisterBit flowbitRegister. signature ID. destination address and port. HTTP PORTS. /* String format of classification name */ u_int32_t priority. Some of the standard strings and variables are predefined . /* } Rule. where the parameter is a pointer to the SFSnortPacket structure. void *ruleData. char *classification. } RuleReference. char *refIdentifier. used internally */ Hash table for dynamic data pointers */ The rule evaluation function is defined as typedef int (*ruleEvalFunc)(void *). /* NULL terminated array of references */ RuleMetaData **meta. /* Rule Initialized. u_int32_t numOptions. char noAlert. char * dst_port. etc. and direction.char initialized. u_int32_t sigID. src address and port. /* 0 for non TCP/UDP */ char direction. message text. /* 0 for non TCP/UDP */ } IPInfo. typedef struct _IPInfo { u_int8_t protocol. RuleReference **references. used internally */ /* Rule option count. typedef struct _RuleInformation { u_int32_t genID. priority. RuleReference The RuleReference structure defines a single rule reference. used internally */ /* Flag with no alert. char *message. RuleInformation The RuleInformation structure defines the meta data for a rule and includes generator ID. char * src_port. classification. #define #define #define #define #define #define #define ANY_NET HOME_NET EXTERNAL_NET ANY_PORT HTTP_SERVERS HTTP_PORTS SMTP_SERVERS "any" "$HOME_NET" "$EXTERNAL_NET" "any" "$HTTP_SERVERS" "$HTTP_PORTS" "$SMTP_SERVERS" 185 . u_int32_t revision. /* NULL terminated array of references */ } RuleInformation. and a list of references. revision. HOME NET. typedef struct _RuleReference { char *systemName. HTTP SERVERS. IPInfo The IPInfo structure defines the initial matching criteria for a rule and includes the protocol. /* non-zero is bi-directional */ char * dst_addr. including the system name and rereference identifier. char * src_addr.any. FlowFlags *flowFlags. Each option has a flags field that contains specific flags for that option as well as a ”Not” flag. Boyer-Moore content information. The ”Not” flag is used to negate the results of evaluating that option. u_int32_t incrementLength. u_int32_t patternByteFormLength. and flags (one of which must specify the buffer – raw. u_int32_t flags. OPTION_TYPE_FLOWBIT. u_int8_t *patternByteForm. ByteData *byte. typedef enum DynamicOptionType { OPTION_TYPE_PREPROCESSOR. int32_t offset. OPTION_TYPE_SET_CURSOR. OPTION_TYPE_MAX }. } ContentInfo. #define CONTENT_NOCASE #define CONTENT_RELATIVE #define CONTENT_UNICODE2BYTE 0x01 0x02 0x04 186 . FlowBitsInfo *flowBit. PreprocessorOption *preprocOpt. OPTION_TYPE_LOOP. OPTION_TYPE_BYTE_EXTRACT. relative. and a designation that this content is to be used for snorts fast pattern evaluation. etc. } option_u. typedef struct _ContentInfo { u_int8_t *pattern. if no ContentInfo structure in a given rules uses that flag. ByteExtract *byteExtract. depth and offset. URI or normalized – to search). such as the compiled PCRE information. should be marked for fast pattern evaluation. OPTION_TYPE_BYTE_JUMP. } RuleOption. It includes the pattern. union { void *ptr. Additional flags include nocase. u_int32_t depth. OPTION_TYPE_BYTE_TEST. OPTION_TYPE_PCRE. typedef struct _RuleOption { int optionType. Asn1Context *asn1. OPTION_TYPE_ASN1. The most unique content.RuleOption The RuleOption structure defines a single rule option as an option type and a reference to the data specific to that option. that which distinguishes this rule as a possible match to a packet. HdrOptCheck *hdrData. CursorInfo *cursor. /* must include a CONTENT_BUF_X */ void *boyer_ptr. OPTION_TYPE_CURSOR. unicode. • OptionType: Content & Structure: ContentInfo The ContentInfo structure defines an option for a content search. PCREInfo *pcre. In the dynamic detection engine provided with Snort. OPTION_TYPE_HDR_CHECK. the one with the longest content length will be used. the integer ID for a flowbit. The option types and related structures are listed below. OPTION_TYPE_FLOWFLAGS. ContentInfo *content. #define NOT_FLAG 0x10000000 Some options also contain information that is initialized at run time. OPTION_TYPE_CONTENT. LoopInfo *loop. } FlowBitsInfo. } FlowFlags. u_int32_t flags. It includes the flags. and flags to specify the buffer. u_int32_t flags. . void *compiled_extra. established session. #define ASN1_ABS_OFFSET 1 187 .h provides flags: PCRE_CASELESS PCRE_MULTILINE PCRE_DOTALL PCRE_EXTENDED PCRE_ANCHORED PCRE_DOLLAR_ENDONLY PCRE_UNGREEDY */ typedef struct _PCREInfo { char *expr. • OptionType: Flow Flags & Structure: FlowFlags The FlowFlags structure defines a flow option. /* pcre. void *compiled_expr. isset. u_int32_t compile_flags. etc. It mirrors the ASN1 rule option and also includes a flags field. • OptionType: Flowbit & Structure: FlowBitsInfo The FlowBitsInfo structure defines a flowbits option. • OptionType: ASN. which specify the direction (from server. as defined in PCRE.h. u_int32_t id. pcre flags such as caseless. /* must include a CONTENT_BUF_X */ } PCREInfo. It includes the PCRE expression. .1 & Structure: Asn1Context The Asn1Context structure defines the information for an ASN1 option. isnotset). toggle. It includes the name of the flowbit and the operation (set. unset. to server). u_int8_t operation.. etc) -. • OptionType: Cursor Check & Structure: CursorInfo The CursorInfo structure defines an option for a cursor evaluation. Field to check */ Type of comparison */ Value to compare value against */ bits of value to ignore */ • OptionType: Byte Test & Structure: ByteData The ByteData structure defines the information for both ByteTest and ByteJump operations. /* specify one of CONTENT_BUF_X */ } CursorInfo. int double_overflow. and flags. u_int32_t flags. #define #define #define #define #define #define #define #define #define CHECK_EQ CHECK_NEQ CHECK_LT CHECK_GT CHECK_LTE CHECK_GTE CHECK_AND CHECK_XOR CHECK_ALL 0 1 2 3 4 5 6 7 8 188 .etc). a value. as related to content and PCRE searches. The flags must specify the buffer. a mask to ignore that part of the header field. It includes the header field.¿. The cursor is the current position within the evaluation buffer. unsigned int max_length. int offset_type. the operation (¡. int print. /* u_int32_t flags. and flags. ¡. It includes the number of bytes. similar to the isdataat rule option. u_int32_t flags. typedef struct _CursorInfo { int32_t offset. /* u_int32_t op. an operation (for ByteTest. } HdrOptCheck. /* u_int32_t value.=.=. • OptionType: Protocol Header & Structure: HdrOptCheck The HdrOptCheck structure defines an option to check a protocol header for a specific value. It includes an offset and flags that specify the buffer. int length. multiplier. This can be used to verify there is sufficient data to continue evaluation.#define ASN1_REL_OFFSET 2 typedef struct _Asn1Context { int bs_overflow. } Asn1Context. a value. offset. an offset. /* u_int32_t mask_value. as well as byte tests and byte jumps.¿. /* Value of static */ int32_t *dynamicInt. DynamicElement *end. int32_t offset. u_int32_t op. or extracted value */ Offset from cursor */ Used for byte jump -. the value is filled by a related ByteExtract option that is part.ByteExtract.DynamicElement The LoopInfo structure defines the information for a set of options that are to be evaluated repeatedly. One of those options may be a ByteExtract. end. } LoopInfo. #define DYNAMIC_TYPE_INT_STATIC 1 #define DYNAMIC_TYPE_INT_REF 2 typedef struct _DynamicElement { char dynamicType. and increment values as well as the comparison operation for termination. u_int32_t flags. typedef struct _LoopInfo { DynamicElement *start. /* int32_t offset. an offset. DynamicElement *increment. flags specifying the buffer. It includes a cursor adjust that happens through each iteration of the loop. /* } ByteExtract. /* Pointer to value of dynamic */ } data. • OptionType: Loop & Structures: LoopInfo. /* char *refId. u_int32_t multiplier. /* u_int32_t multiplier. } ByteData. CursorInfo *cursorAdjust. 9 10 /* /* /* /* /* /* Number of bytes to extract */ Type of byte comparison. reference to a RuleInfo structure that defines the RuleOptions are to be evaluated through each iteration. It includes the number of bytes. struct _Rule *subRule. /* reference ID (NULL if static) */ union { void *voidPtr. • OptionType: Set Cursor & Structure: CursorInfo See Cursor Check above. */ The ByteExtract structure defines the information to use when extracting bytes for a DynamicElement used a in Loop evaltion. 4. for checkValue. The loop option acts like a FOR loop and includes start. } DynamicElement. specifies * relative. For a dynamic element. It includes whether the element is static (an integer) or dynamic (extracted from a buffer in the packet) and the value.#define CHECK_ATLEASTONE #define CHECK_NONE typedef struct _ByteData { u_int32_t bytes. u_int32_t value. u_int32_t flags.static or reference */ char *refId. /* void *memoryLocation. /* Holder */ int32_t staticInt. typedef struct _ByteExtract { u_int32_t bytes.2 Required Functions Each dynamic module must define a set of functions and data objects to work within this framework. u_int32_t op. 189 . for checkValue */ Value to compare value against. /* type of this field . and a reference to the DynamicElement. u_int8_t initialized. /* u_int32_t flags.32bits is MORE than enough */ must include a CONTENT_BUF_X */ • OptionType: Byte Jump & Structure: ByteData See Byte Test above. u int32 t value. It handles bounds checking for the specified buffer and returns RULE NOMATCH if the cursor is moved out of bounds. • int DumpRules(char *. etc). • int InitializePreprocessor(DynamicPreprocessorData *) This function initializes the data structure for use by the preprocessor into a library global variable. u int8 t *cursor) This function validates that the cursor is within bounds of the specified buffer. It will interact with flowbits used by text-based rules. initialize it to setup content searches. – int checkFlow(void *p. The metadata and setup function for the preprocessor should be defined sf preproc info. 190 . byteJump. ByteData *byteData. • int InitializeEngineLib(DynamicEngineData *) This function initializes the data structure for use by the engine. u int8 t **cursor) This function evaluates a single content for a given packet. – int setCursor(void *p. PCREInfo *pcre. Each of the functions below returns RULE MATCH if the option matches based on the current criteria (cursor position. – int byteTest(void *p. Cursor position is updated and returned in *cursor. etc).h. The sample code provided with Snort predefines those functions and defines the following APIs to be used by a dynamic rules library. CursorInfo *cursorInfo.2. u int8 t *cursor) This function compares the value to the value stored in ByteData. ByteData *byteData. • int RegisterRules(Rule **) This is the function to iterate through each rule in the list. – int checkCursor(void *p. log. as delimited by Asn1Context and cursor. and register flowbits. With a text rule. FlowBitsInfo *flowbits) This function evaluates the flowbits for a given packet. and pcreMatch to adjust the cursor position after a successful match. 4. as specified by ByteExtract and delimited by cursor. u int8 t **cursor) This is a wrapper for extractValue() followed by setCursor(). Asn1Context *asn1. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. – int detectAsn1(void *p. Rule *rule) This is the function to evaluate a rule if the rule does not have its own Rule Evaluation Function. u int8 t **cursor) This function evaluates a single pcre for a given packet. checking for the existence of the expression as delimited by PCREInfo and cursor. as specified by FlowBitsInfo. ByteData *byteData.c. u int8 t *cursor) This function evaluates an ASN. u int8 t **cursor) This function adjusts the cursor as delimited by CursorInfo. It is also used by contentMatch.2. drop. the with option corresponds to depth. – int processFlowbits(void *p. u int8 t *cursor) This is a wrapper for extractValue() followed by checkValue().1 check for a given packet. ByteExtract *byteExtract. – int pcreMatch(void *p. FlowFlags *flowflags) This function evaluates the flow for a given packet.4. These are defined in the file sf dynamic preproc lib.1 Preprocessors Each dynamic preprocessor library must define the following functions.Rule **) This is the function to iterate through each rule in the list and write a rule-stop to be used by snort to control the action of the rule (alert. CursorInfo *cursorInfo.2 Detection Engine Each dynamic detection engine library must define the following functions. – int byteJump(void *p. – int extractValue(void *p. Value extracted is stored in ByteExtract memoryLocation parameter. Cursor position is updated and returned in *cursor. dpd and invokes the setup function. – int checkValue(void *p. – int contentMatch(void *p. u int8 t *cursor) This function extracts the bytes from a given packet. This uses the individual functions outlined below for each of the rule options and handles repetitive content issues. • int ruleMatch(void *p. PCRE evalution data. and the distance option corresponds to offset. New cursor position is returned in *cursor. ContentInfo* content. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. checking for the existence of that content as delimited by ContentInfo and cursor. • int InitializeDetection() This function registers each rule in the rules library. Take extra care to handle this situation and search for the matched pattern again if subsequent rule options fail to match.c and is compiled together with sf dynamic preproc lib.c into lib sfdynamic preprocessor example. 191 . u int8 t **cursor) This function evaluates the preprocessor defined option. HdrOptCheck *optData) This function evaluates the given packet’s protocol headers. This preprocessor always alerts on a Packet if the TCP port matches the one configured. patterns that occur more than once may result in false negatives.h.3. • int EngineVersion(DynamicPluginMeta *) This function defines the version requirements for the corresponding detection engine library. u int8 t **cursor) This function is used to handled repetitive contents to save off a cursor position temporarily to be reset at later point. • int DumpSkeletonRules() This functions writes out the rule-stubs for rules that are loaded.h. 4.– int checkHdrOpt(void *p. • Rule *rules[] A NULL terminated list of Rule structures that this library defines. ! △NOTE 4. – int loopEval(void *p. u int8 t **cursor) This function iterates through the SubRule of LoopInfo.3 Rules Each dynamic rules library must define the following functions. u int8 t **cursor) This function is used to revert to a previously saved temporary cursor position. – void setTempCursor(u int8 t **temp cursor. Define the Setup function to register the initialization function.2. Cursor position is updated and returned in *cursor.3 Examples This section provides a simple example of a dynamic preprocessor and a dynamic rule. #define #define #define #define MAJOR_VERSION 1 MINOR_VERSION 0 BUILD_VERSION 0 PREPROC_NAME "SF_Dynamic_Example_Preprocessor" ExampleSetup #define DYNAMIC_PREPROC_SETUP extern void ExampleSetup().1 Preprocessor Example The following is an example of a simple preprocessor.c. • int LibVersion(DynamicPluginMeta *) This function returns the metadata for the shared library. Cursor position is updated and returned in *cursor.so. If you decide to write you own rule evaluation function. This should be done for both content and PCRE options. defined in sf preproc info. – int preprocOptionEval(void *p. The remainder of the code is defined in spp example. register flowbits. – void revertTempCursor(u int8 t **temp cursor. as specified by HdrOptCheck. This is the metadata for this preprocessor. This assumes the the files sf dynamic preproc lib. Examples are defined in the file sfnort dynamic detection lib. It should set up fast pattern-matcher content. LoopInfo *loop.h are used. The metadata and setup function for the preprocessor should be defined in sfsnort dynamic detection lib.c and sf dynamic preproc lib. The sample code provided with Snort predefines those functions and uses the following data within the dynamic rules library. etc. 4. as delimited by LoopInfo and cursor. as spepcifed by PreprocessorOption. PreprocessorOption *preprocOpt. fatalMsg("ExamplePreproc: Invalid port %d\n". ID 10000 */ _dpd. if (!arg) { _dpd. void ExampleInit(unsigned char *args) { char *arg.registerPreproc("dynamic_example". " \t\n\r"). "\t\n\r").conf. void ExampleInit(unsigned char *). DEBUG_WRAP(_dpd.fatalMsg("ExamplePreproc: Missing port\n"). void ExampleSetup() { _dpd. arg). void *context) { SFSnortPacket *p = (SFSnortPacket *)pkt.fatalMsg("ExamplePreproc: Invalid option %s\n". void ExampleProcess(void *. _dpd. arg = strtok(args.). port). 192 .logMsg(" } else { _dpd. u_int16_t portToCheck. &argEnd. #define SRC_PORT_MATCH 1 #define SRC_PORT_MATCH_STR "example_preprocessor: src port match" #define DST_PORT_MATCH 2 #define DST_PORT_MATCH_STR "example_preprocessor: dest port match" void ExampleProcess(void *pkt.addPreproc(ExampleProcess. void *). PRIORITY_TRANSPORT. ExampleInit).debugMsg(DEBUG_PLUGIN. 10000). 10). "Preprocessor: Example is setup\n"). } Port: %d\n".).debugMsg(DEBUG_PLUGIN. Transport layer. } port = strtoul(arg. "Preprocessor: Example is initialized\n"). _dpd. DEBUG_WRAP(_dpd. char *argEnd. if (!p->ip4_header || p->ip4_header->proto != IPPROTO_TCP || !p->tcp_header) { /* Not for me. arg)) { arg = strtok(NULL.logMsg("Example dynamic preprocessor configuration\n"). } portToCheck = port. portToCheck). } The function to process the packet and log an alert if the either port matches. return */ return.#define GENERATOR_EXAMPLE 256 extern DynamicPreprocessorData _dpd. } /* Register the preprocessor function. if(!strcasecmp("port". } The initialization function to parse the keywords from snort. unsigned long port. if (port < 0 || port > 65535) { _dpd. log alert */ _dpd.2 Rules The following is an example of a simple rule. rev:5. 1. } if (p->dst_port == portToCheck) { /* Destination port matched. return. \ content:"NetBus". The snort rule in normal format: alert tcp $HOME_NET 12345:12346 -> $EXTERNAL_NET any \ (msg:"BACKDOOR netbus active". content is ”NetBus”. 3. return. flow:from_server. take from the current rule set. classtype:misc-activity. 3. NOTE: This content will be used for the fast pattern matcher since it is the longest content option for this rule and no contents have a flag of CONTENT FAST PATTERN. { &sid109flow } }. DST_PORT_MATCH_STR. • Flow option Define the FlowFlags structure and its corresponding RuleOption. static RuleOption sid109option1 = { OPTION_TYPE_FLOWFLAGS.h.) This is the metadata for this rule library. 193 .if (p->src_port == portToCheck) { /* Source port matched. flow is from server. reference:arachnids. Declaration of the data structures. 0. log alert */ _dpd. \ sid:109.alertAdd(GENERATOR_EXAMPLE. It is implemented to work with the detection engine provided with snort.established. Per the text version. case sensitive. /*.3. defined in detection lib meta.established. SID 109. DST_PORT_MATCH. } } 4. SRC_PORT_MATCH. Search on the normalized buffer by default. no depth or offset. 0). 1. SRC_PORT_MATCH_STR.c.401. • Content Option Define the ContentInfo structure and its corresponding RuleOption. 0). and non-relative.alertAdd(GENERATOR_EXAMPLE. 0. static FlowFlags sid109flow = { FLOW_ESTABLISHED|FLOW_TO_CLIENT }. The rule itself. static RuleOption sid109option2 = { OPTION_TYPE_CONTENT. with the protocol header. not yet initialized. RuleOption *sid109options[] = { &sid109option1. /* priority */ "BACKDOOR netbus active". /* metadata */ { 3. { &sid109content } }. akin to => tcp any any -> any any */ { IPPROTO_TCP. /* proto */ HOME_NET. &sid109option2. /* Use internal eval func */ 0. option count. /* depth */ 0. /* message */ sid109refs /* ptr to references */ }. /* Direction */ EXTERNAL_NET. rule data. /* source IP */ "12345:12346". /* holder for NULL. /* ptr to rule options */ NULL. /* sigid */ 5. • Rule and Meta Data Define the references. no alert. /* revision */ "misc-activity". message. used internally */ 0. NULL }. static RuleReference *sid109refs[] = { &sid109ref_arachnids. /* destination port */ }. /* Type */ "401" /* value */ }. /* source port(s) */ 0.static ContentInfo sid109content = { "NetBus". /* holder for 0. /* Holder. /* Holder. /* Holder. NULL }. /* offset */ CONTENT_BUF_NORMALIZED. /* destination IP */ ANY_PORT.use 3 to distinguish a C rule */ 109. classification. used internally for flowbits */ NULL /* Holder. /* pattern to 0. /* genid -. /* classification */ 0. Rule options are evaluated in the order specified. meta data (sid. used internally */ 194 . used internally */ 0. static RuleReference sid109ref_arachnids = { "arachnids". etc). Rule sid109 = { /* protocol header. sid109options. search for */ boyer/moore info */ byte representation of "NetBus" */ length of byte representation */ increment length */ The list of rule options. /* holder for 0 /* holder for }. /* flags */ NULL. NULL }. extern Rule sid109. &sid637. 195 .• The List of rules defined by this rules library The NULL terminated list of rules. Rule *rules[] = { &sid109. etc. pcre. extern Rule sid637. flowbits. The InitializeDetection iterates through each Rule in the list and initializes the content. please use the HEAD branch of cvs. Features go into HEAD. Each of the keyword options is a plugin. Each preprocessor checks to see if this packet is something it should look at. we’ll document what these few things are. 5.net mailing list. We are currently cleaning house on the available output options. We’ve had problems in the past of people submitting patches only to the stable branch (since they are likely writing this stuff for their own IDS purposes). 5. 5.Chapter 5 Snort Development Currently.h for the list of pkt * constants.2. Patches should done with the command diff -nu snort-orig snort-new. traffic is acquired from the network link via libpcap. 5.2 Detection Plugins Basically. new output plugins should go into the barnyard project rather than the Snort project. It can do this by checking: if (p->tcph==null) return. 196 . This is intended to help developers get a basic understanding of whats going on quickly.3 Output Plugins Generally. this chapter is here as a place holder.2 Snort Data Flow First.sourceforge. look at an existing output plugin and copy it to a new item and change a few things. The detection engine checks each packet against the various options listed in the Snort config files.1 Preprocessors For example. End users don’t really need to be reading this section. This allows this to be easily extensible. Packets are then sent through the registered set of preprocessors. Later. If you are going to be helping out with Snort development. there are a lot of packet flags available that can be used to mark a packet as “reassembled” or logged.2. Similarly. 5.. Bug fixes are what goes into STABLE. Check out src/decode. a TCP analysis preprocessor could simply return if the packet does not have a TCP header.1 Submitting Patches Patches to Snort should be sent to the snort-devel@lists. Packets are then sent through the detection engine.2.] 198 .whitehats.html [4] [1] [6] [5] [3]. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue listening from where you left off, or restart the preview.
https://www.scribd.com/doc/52871359/snort-manual
CC-MAIN-2016-30
refinedweb
58,495
60.72
Version 1.23.0 For an overview of this library, along with tutorials and examples, see CodeQL for C# . An if statement, for example if if (x==0) { ... } else { ... } The else part is optional. else import csharp Gets the condition of this selection statement. Gets the else (false) branch of this if statement, if any. Gets the then (true) branch of this if statement. then Gets a textual representation of this element. Holds if basic block controlled is controlled by this control flow element with conditional value s. That is, controlled can only be reached from the callable entry point by going via the s edge out of some basic block ending with this element. controlled s Holds if control flow element controlled is controlled by this control flow element with conditional value s. That is, controlled can only be reached from the callable entry point by going via the s edge out of this element. Holds if this element is from an assembly. Holds if this element is from source code. Gets a child of this element, if any. Gets a child expression of this element, if any. Gets a child statement of this element, if any. Gets a first control flow node executed within this element. Gets a potential last control flow node executed within this element. Gets a control flow node for this element. That is, a node in the control flow graph that corresponds to this element. Gets a location of this element, including sources and assemblies. Gets an element that is reachable from this element. Gets the assembly that this element was compiled into. Gets number of children of this element. Gets the.
https://help.semmle.com/qldoc/csharp/semmle/code/csharp/Stmt.qll/type.Stmt$IfStmt.html
CC-MAIN-2020-05
refinedweb
278
68.26
nsILDAPBERElement is a wrapper interface for a C-SDK BerElement object. More... import "nsILDAPBERElement.idl"; nsILDAPBERElement is a wrapper interface for a C-SDK BerElement object. Typically, this is used as an intermediate object to aid in the manual construction of a BER value. Once the construction is completed by calling methods on this object, an nsILDAPBERValue can be retrieved from the asValue attribute on this interface. <> contains some documentation that mostly (but not exactly) matches the code that this wraps in section 17. Initialize this object. Must be called before calling any other method on this interface. Cause the entire set started by the last startSet() call to be written. Write a string to this element. Start a set. Sets may be nested. an nsILDAPBERValue version of this element. Calls ber_flatten() under the hood. general BER types we know about The following two tags are carried over from the LDAP C SDK; their exact purpose there is not well documented. They both have the same value there as well. Most TAG_* constants can be used in the construction or passing in of values to the aTag arguments to most of the methods in this interface. When returned from a parsing method, 0xffffffff is referred to has the parse-error semantic (ie TAG_LBER_ERROR); when passing it to a construction method, it is used to mean "pick the default tag for this type" (ie TAG_LBER_DEFAULT). BER encoding types and masks.
http://doxygen.db48x.net/comm-central/html/interfacensILDAPBERElement.html
CC-MAIN-2019-09
refinedweb
239
66.64
Many Indian states like Goa are trying to empower children in schools by teaching them computing. The author has become involved with a school’s hobby club, and this article lists a few programming environments he feels school students should be familiar with. I was apprehensive when I volunteered for a hobby programming session, and when it was suggested that Scratch would be the ideal environment to work in. I had never used MIT Scratch and am more comfortable writing code rather than dragging and dropping graphical tiles to create code. I was hoping to drive the students towards Python’s Turtle module. However, Scratch was the environment that appealed to the students. After all, animation is far more exciting! In the process, what we discovered was that different environments make it easier to explore different programming concepts. The following is a short list of programming environments students could explore. In fact, they should explore each one of them, as that will ensure that they have familiarised themselves with the concepts rather than the syntax. Moving to ‘real’ programming using languages like C++, Java, etc, should then be far easier. TurtleArt TurtleArt is a part of the Sugar desktop environment, better known as the software used on OLPC (one laptop per child). This can be installed and used with any Linux desktop environment as well. For example, on Fedora 24, the installation (and correction for any inconsistency in directory names) is as follows: $ sudo dnf install sugar-turtleart $ cd /usr/share/sugar/activtities $ sudo ln -s TurtleBlocks.activity TurtleArt.activity $ cp TurtleArt.activity/turtleblocks.desktop ~/Desktop/ Now, clicking on the TurtleBlocks icon should start the TurtleArt development environment. TurtleArt is an implementation of the LOGO programming language, with extensions, but uses the concept of pluggable blocks from Scratch. You may think of a turtle as an object that can move, turn and carry a pen which may be moved up or down. And, of course, the colour of the pen can be changed. As soon as the turtle has to create anything more than simple lines, it is obvious that the concept of a function becomes very important. The example in Figure 1 illustrates the code for drawing squares, with the starting position (0,0) of the turtle as the centre of the squares. There are no parameters to functions in TurtleArt. A student stores values in a box and retrieves them from the box when needed. Python Turtle The turtle module in Python is a part of the Tkinter package. The LOGO commands are closely modelled following the Python syntax. A turtle is just a Python class. You may create and manage turtle objects using the full scope of Python. In particular, it is easy to create multiple turtle objects and control the movement of each. The simple example below illustrates a race between 10 turtles, where their movement is controlled by a random value: from turtle import * import random def create_turtle(y): turtle = Turtle() turtle.penup() turtle.goto(0,y) turtle.pendown() # customize the pen of each turtle turtle.pensize(width=5) turtle.pencolor((random.random(),random.random(),random.random())) return turtle # create a new turtle and position it at different heights. turtles = [] for number in range(10): turtles.append(create_turtle(20*number – 100)) # The race while True: n = random.randint(0,9) turtle = turtles[n] turtle.forward(1) if turtle.xcor() == 100: print(“Turtle %d wins”%(n+1)) break Scratch Scratch is the common option for Linux. Scratch 2 is not easy to run on Linux as it uses AdobeAIR, which is no longer available for Linux. The major difference between the two is that Scratch 2 allows the creation of custom blocks, which will be discussed later. Scratch uses blocks grouped into various categories for constructing programs. A block may have parameters. It is easy to drag and connect various blocks to construct a program visually. The key components of Scratch are sprites, costumes, sounds and scripts. A sprite is a graphic object (perhaps a virtual elf or fairy?), which is manipulated as an entity. Each sprite can be represented by different shapes, called costumes. Sounds are exactly what you may expect. One or more scripts can be applied to each sprite. The triggering of each script is via an event. Hence, Scratch is a great way to get exposed to event-driven programming. The example shown in Figures 2, 3 and 4 uses the default sprite with two costumes and a sound. Add three backgrounds to the stage. In the example, the sprite alternates between costumes and moves forward, giving the illusion of walking. Once it hits the end of the stage, it goes to the other end and sends a message to change the background. The sound is triggered by pressing the space bar. Figure 2 is the script for the sprite and Figure 3 is the script for the stage. Figure 4 shows a few scenes of the stage. Snap! This was a project at the University of California Berkeley —to add custom blocks to MIT’s Scratch. It was initially an enhancement of Scratch and called BYOB (build your own block). Meanwhile, Scratch 2 was released, offering the same functionality, though written in AdobeAIR. Snap! is a new implementation modelled on Scratch in JavaScript. It is expected to be used online; however, the source can be downloaded and installed on a local machine as follows: $ wget $ cd ~/public_html $ unzip ~/snap.zip Apache should be running and user directories should be enabled. For example, on Fedora, it is in the file /etc/httpd/conf.d/userdir.conf. The source does not have a library of sample sounds, costumes and backgrounds. However, that is not a limitation. You can drag and drop these from the Scratch installation in /usr/share/scratch/Media. In a revised implementation of the example above, create a block Walk with the number of steps as a parameter. The block issues the message ‘Next room’ once it reaches the end. Figure 5 shows the block definition and the sprite scripts. There is no change in the stage script from the Scratch example. You can clone sprites and write recursive code in custom blocks. Each of the environments allows a student to learn and create complex projects, which is fun. The wonderful aspect of programming is that you learn by doing. And open source software is truly amazing!
https://opensourceforu.com/2017/03/from-logo-to-scratch/
CC-MAIN-2018-26
refinedweb
1,067
66.03
I (very foolishly) spent a few minutes today trying to figure out why applying WCF tracing configuration to a ADO.NET Data Services client (i.e. a proxy generated with webdatagen.exe) wasn't producing me any tracing results. It didn't take too long to realise that the client proxy isn't actually a WCF proxy. It just uses HttpWebRequest directly. If you want WCF tracing, put your configuration onto your service side code. I didn't actually find WCF that useful in tracing messages here and so fell back to inserting a proxy and tracing at the HTTP level. That didn't work perfectly for me either - I found most value by taking my Entity Framework class (i.e. the class that derives from ObjectContext) and then adding another class which derives from that, handling the SavingChanges event in the constructor and then sticking a breakpoint in my event handler server-side in order that I could have a look at the ObjectStateManager as calls come in from Data Services layer. What I mean here is... public class MyContext : demoEntities { public MyContext() { this.SavingChanges += OnSavingChanges; } void OnSavingChanges(object sender, EventArgs e) { Debugger.Break(); // Now we can use the debugger to look at the object // state manager. ObjectStateManager m = this.ObjectStateManager; } } where demoEntities is the ObjectContext-derived class that the EF tooling spits out for me from my database. That class is a partial class but the generation tool already throws out a default constructor so I thought it would be "better" to derive here.
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/01/02/10058.aspx
crawl-002
refinedweb
256
57.06
Solving Unbounded Knapsack Problem using Dynamic Programming Sign up for FREE 1 month of Kindle and read all our books for free. Get FREE domain for 1st year and build your brand new site Reading time: 30 minutes | Coding time: 10 minutes Knapsack problem refers to the problem of optimally filling a bag of a given capacity with objects which have individual size and benefit. The objective is the increase the benefit while respecting the bag's capacity. In the original problem, the number of items are limited and once it is used, it cannot be reused. This restriction is removed in the new version: Unbounded Knapsack Problem. In this case, an item can be used infinite times. This problem can be solved efficiently using Dynamic Programming.Read about the general Knapsack problem here Problem Statement Given N items each with an associated weight and value (benefit or profit). The objective is to fill the knapsack with items such that we have a maximum profit without crossing the weight limit of the knapsack. In the Unbounded version of the problem, we are allowed to select one item multiple times, unlike the classical one, where one item is allowed to be selected only once. Example: Suppose we have three items which is defined by a tuple (weight, benefit). The items are: (7, 12) ; (3, 2), (20, 41) We have a bag with capacity 58. In this case, the optimal filling will be: Item 3 + 3 + 1 + 1 + 2 Note the total benefit is (41+41+12+12+2) = 108 with total weight being 57 (< 59). We could have covered all the weight like: Item 3 + 3 + 2 + 2 + 2 + 2 + 2 + 2 The total weight will become 59 but the benefit will be (41 * 2 + 2 * 6) = 94 (< 108) Hence, in the previous combination, we have taken the optimal distribution. Brute Force Approach If we are given a set of items with their weights and profits and we are asked to compute the maximum possible profit of them, the first approach we'd think of would be the brute-force one. Here, we'll try all possible combinations of items and would take note of profits we achieve in each of them and finally, compute the maximum of those profits as our answer. Pseudocode : maxProfit = 0 for i = 0 to 2^N: bin = binary(i) // Convert the number to binary profit = 0 weight = 0 for j = 0 to bin.length(): if bin[j] is set: // If the bit j is set, we have to include that item. if weight + wt[j] > W: // When weight of the combination exceeds Capacity, Break. profit = 0 break profit = profit + val[j] weight = weight + wt[j] maxProfit = max(maxProfit, profit) // Update max profit. This way, choosing from all combination would mean a time complexity of order Θ(2^N) as there are total nC0 + nC1 + .. nCn = 2^n possible combinations of n items. This would be highly inefficient, given the computation time. Thus, we use dynamic programming method. Dynamic Programming Approach We use dynamic programming approach to solve this problem, similar to what we did in classical knapsack problem. The only difference is we would use a single dimensional array instead of 2-D one used in the classical one. This is because we have infinite supply of every element available to us and hence, we don't need to keep a track of which elements have been used. Thus, our array would be dp[W+1] , where dp[i] indicates the maximum profit we can achieve with a knapsack capacity of i. Here, W is the total knapsack capacity, hence our answer would be dp[W]. dp[i] = maximum profit we can achieve with a knapsack capacity of i Programmatically, we iterate over all the elements available for each knapsack capacity between 1 to W and determine if it can be used to achieve a greater profit. Thus, our dp equation would look something like- dp[i] = max(dp[i], dp[i-wt[j]] + val[j]) if wt[j] < i (item j is taken) Pseudocode: for i = 0 to W: for j = 0 to N: if wt[j] < i : dp[i] = max(dp[i], dp[i-wt[j]] + val[j]) Suppose we are given 4 items, with weight 1,2,5 and 3 respectively and the profits associated with them are 40,30,50 and 25 in the same order. The capacity of the knapsack is given as 2. Proceeding with our approach, initially, our dp array is set to 0. We begin iterating from 1 to 6 (capacity of knapsack). Our wt array = [1,2,5,3] Our val array = [40,30,50,20] Initial dp array = [0,0,0] First Iteration (i=0) j=0 : wt[j] <= i not satisfied. (wt[0] = 1, i = 0) j=1 : wt[j] <= i not satisfied. (wt[1] = 2, i = 0) j=2 : wt[j] <= i not satisfied. (wt[2] = 5, i = 0) j=3 : wt[j] <= i not satisfied. (wt[3] = 3, i = 0) </br> dp array after First iteration = [0,0,0] Second Iteration (i=1) j=0 : wt[j] <= i is satisfied. (wt[0] = 1, i = 1) Thus, dp[1] = max(dp[1],dp[1-1]+val[0]) = max(0,0+40)=40. j=1 : wt[j] <= i not satisfied. (wt[1] = 2, i = 1) j=2 : wt[j] <= i not satisfied. (wt[2] = 5, i = 1) j=3 : wt[j] <= i not satisfied. (wt[3] = 3, i = 1) dp array after Second iteration = [0,40,0] Third Iteration (i=2) j=0 : wt[j] <= i is satisfied. (wt[0] = 1, i = 2) Thus, dp[2] = max(dp[2],dp[2-1]+val[0]) = max(0,40+40)=80. j=1 : wt[j] <= i is satisfied. (wt[1] = 2, i = 2) Thus, dp[2] = max(dp[2],dp[2-2]+val[1]) = max(80,0+30)=80. j=2 : wt[j] <= i not satisfied. (wt[2] = 5, i = 2) j=3 : wt[j] <= i not satisfied. (wt[3] = 3, i = 2) Final dp array = [0,40,80] Now, since i = W (knapsack capacity), our iteration would stop. So, the maximum profit that we can achieve is dp[2] = 80. By using item 1 two times, as it has weight = 1 and profit = 40. Complexities - Time complexity: Θ((W+1)*N). As we can take all items multiple number of times, we check all of them(1 to N) for all weights from 0 to W. Hence, time complexity = (W+1) * N. - Space complexity: Θ(W+1). We maintain a dp array of size W+1, where dp[i] denotes the maximum profit for capacity i. Hence, space complexity = W+1 Here, W = Knapsack Capacity, N = No. of items. Implementations We provide the Dynamic Programming implementation in three languages C++, Python and Java. Implementation in C++: #include <bits/stdc++.h> using namespace std; long int UnboundedKnapsack(long int Capacity,long int n, long int weight[],long int val[]){ long int dp[Capacity+1]; for(int i=0;i < W+1;i++){ dp[i]=0; } for(int i=0;i < W+1;i++){ for(int j=0;j < n;j++){ if(weight[j] < i){ dp[i] = max(dp[i], dp[i-weight[j]] + val[j]); } } } return dp[Capacity]; } int main(){ // The no. of items : long int n = 4; // Weights of all the items : long int weight[4] = {5 , 10, 8, 15}; // Enter values of all the items : long int val[4] = {40, 30, 50, 25}; // Enter the knapsack capacity : long int Capacity = 120; cout << "The maximum value you can achieve in Unbounded Knapsack is: " << UnboundedKnapsack(W,n,wt,val); return 0; } Implementation in Python: # Unbounded Knapsack Problem def UnboundedKnapsack(Capacity,n,weight,val): dp=[] for i in range(Capacity+1): dp.append(0) for i in range(0,Capacity+1): for j in range(0,n): if weight[j] < i: dp[i] = max(dp[i] , dp[i-weight[j]]+val[j]) return dp[Capacity] ''' No. of items ''' n = 4 ''' Weights of all items ''' weight = [5,10,8,15] ''' Values of all items ''' val = [40,30,50,25] ''' Capacity of Knapsack ''' Capacity = 120 print("The maximum value possible is ",UnboundedKnapsack(Capacity,n,weight,val)) Implementation in Java: import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static int unboundedKnapsack(int Capacity,int n, int weight[],int val[]){ int[] dp = new int[Capacity+1]; for(int i=0;i < Capacity;i++){ dp[i]=0; } for(int i=0;i < Capacity;i++){ for(int j=0;j < n;j++){ if(weight[j] < i){ dp[i]=Math.max(dp[i],dp[i-weight[j]]+val[j]); } } } return dp[Capacity]; } public static void main(String[] args) { // No. of items int n = 4; // Values(Profits) of items int val[] = {40,30,50,25}; // Weight of items int weight[] = {5,10,8,15}; // Knapsack capacity int Capacity = 120; System.out.println("Maximum value that can be achieved is: "+unboundedKnapsack(Capacity,n,weight,val)); } }
https://iq.opengenus.org/unbounded-knapsack-problem/
CC-MAIN-2021-17
refinedweb
1,501
61.67
The. At this point, it would be useful to have a nice abstraction to handle all this that you could code against while keeping your application's code elegant and simple to use. As usual, when looking for a design to start with, it turns out this problem was already nicely solved for C# developers with the XNA Game Studio GamePad class. GamePad class The September 2014 release of DirectX Tool Kit includes a C++ version of the GamePad class. To make it broadly applicable, it makes use of XInput 9.1.0 on Windows Vista or Windows 7, XInput 1.4 on Windows 8.x, and IGamePad on Xbox One. It's a simple class to use, and it takes care of the nuanced issues above. It implements the same thumb stick deadzone handling system as XNA, which is covered in detail by Shawn Hargreaves in his blog entry "Gamepads suck". The usage issue that continues to be the responsibility of the application is ensuring that you poll it fast enough to not miss user input, which mostly means ensuring your game has a good frame rate. See the documentation wiki page on the new class for details, and the tutorial. The headset audio features of XInput are not supported by the GamePad class. Headset audio is not supported by XInput 9.1.0, has some known issues in XInput 1.3 on Windows 7 and below, works a bit differently in XInput 1.4 on Windows 8, and is completely different again on the Xbox One platform. The GamePad class is supported on all the DirectX Tool Kit platforms: Win32 desktop applications for Windows Vista or later, Windows Store apps for Windows 8.x, and Xbox One. You can create and poll the GamePad class on Windows Phone 8.x as well, but since there's no support for gamepads on that platform it always returns 'no gamepad connected'. Xbox One Controller Support for the Xbox One Controller on Windows was announced by Major Nelson in June and drivers are now hosted on Windows Update, so using it is a simple as plugging it into a Windows PC via a USB cable (see the Xbox One support website). The controller is supported through the XInput API as if it were an Xbox 360 Common Controller with the View button being reported as XINPUT_GAMEPAD_BACK, and the Menu button being reported as XINPUT_GAMEPAD_START. All the other controls map directly, as do the left and right vibration motors. The left and right trigger impulse motors cannot be set via XInput, so they are not currently accessible on Windows. The Xbox One Wireless controller is not compatible with the Xbox 360 Wireless Receiver for Windows, so you have to use a USB cable to use it with Windows. Note also that it will unbind your controller from any Xbox One it is currently setup for, so you'll need to rebind it when you want to use it again with your console. Update: DirectX Tool Kit is also hosted on GitHub. Windows 10: There is a new WinRT API in the Windows.Gaming.Input namespace for universal Windows apps. This API supports both the Xbox 360 Common Controller and the Xbox One controller, including access to the left/right trigger motors. The latest version of GamePad is implemented using this new API when built for Windows 10. Note that existing XInput-based Windows Store applications can link against xinputuap.lib which is an adapter for the new API for universal Windows apps--this adapter does not exist headset audio either. Related: XInput and Windows 8, XInput and XAudio2 I came by a delay, but thanks noting about Windows 10 update and DirectX Tool Kit+ Windows.Gaming.Input namespace! Hi, It seems that this api does not work when executed on a Win10 IOT Core. Could you confirm? Rgds
https://blogs.msdn.microsoft.com/chuckw/2014/09/05/directx-tool-kit-now-with-gamepads/
CC-MAIN-2018-26
refinedweb
646
70.84
Hi, almost every time I create a somewhat more complex figure I have to fight with the not too smart positioning of the plots and the size of margins around the axes. From many postings here I have learned that this is the absolute intention, i.e. it is broken by design unless the programmer takes care about this. I have to admin that I do not really get this idea. I am aware that the defaults will not change anytime soon and so I'd like to ask for an "idiot-proof" mode: this could be enabled by an rcParam and take care of proper dimensions, scale axis labels, titles, margins etc so that they don't cover. Here's an example for a matplotlib script which is a simple as it can get and demonstrates the broken layout which a user gets by default. import scipy import pylab x = scipy.linspace(-50,50, 100) y1 = scipy.rand(100) y2 = scipy.sin(x) y3 = y1 + y2 fig = pylab.figure() ax1 = fig.add_subplot(311) ax2 = fig.add_subplot(312) ax3 = fig.add_subplot(313) ax1.plot(x, y1) ax2.plot(x, y2) ax3.plot(x, y3) ax1.set_title('some title') ax2.set_title('some title') ax3.set_title('some title') pylab.show() Of course, one can adjust the figsize but the results are still far from being adorable. The spacing around the sublplots increases for no apparent reason while the spacing between the subplot remains the same so that everything looks cramped... Thank you many times in advance, best regards, Daniel
https://discourse.matplotlib.org/t/feature-request-automatic-scaling-of-subplots-margins-etc/15428
CC-MAIN-2019-51
refinedweb
257
78.04
GlassFish ESB You may previously have used OpenESB, and if so you will be familiar with GlassFish ESB. GlassFish ESB is simply the OpenESB core runtime, bundled with the GlassFish application server, the NetBeans IDE and some of the more common JBI components already deployed to the ESB. With release 2.2 of GlassFish ESB, the PoJo Service Engine has been included with the software download. Both GlassFish ESB and a number of additional JBI Components can be downloaded from the Open ESB website at:. Creating a PoJo and Deploying to GlassFish ESB Using GlassFish ESB, its easy to create PoJos and deploy them to the JBI runtime adding different bindings such as SOAP or REST to it or to deploy them to the BPEL engine and use them as part of a business process. By deploying to GlassFish ESB, we are allowing many different types of client to consume our PoJos. For this article, I'm going to write a simple PoJo that reverses strings and then deploy this to GlasFish ESB with a SOAP binding. This will allow any web service client that supports SOAP to access the PoJo. If you are familiar at all with NetBeans, then creating a PoJo for deployment to the ESB will be a simple matter. The first stage is creating a Java project in which the PoJo can be created. Note that PoJos are created in standard Java projects and not in a SOA project template. Using the "New Project" option in NetBeans, create a new "Java Class Library" project and call it ReversePoJo. Upon creating a standard Java Project, PoJos can be created in the project by using the New PoJo Service menu option under the ESB category. Select this option and create a new PoJo Service as shown below. Class Name: ReversePoJo Package: com.davidsalter.soa.reversepojo Method Name: reverseString Input Argument Type: String Return Type: String When we press the Finish button, NetBeans will create an empty PoJo using the details supplied. The code is shown below. @Provider public class ReversePoJo { public ReversePoJo() { } @Operation (outMessageTypeQN="{. davidsalter.com/ReversePoJo/}ReversePoJoOperationResponse") public String reverseString(String input) { return input; } } Looking at this code, we can see that this is indeed a simple PoJo. The class does not extend any special framework classes and does not need to implement any framework interfaces. We can see however that there are two annotations used on the class which are required to allow us to deploy the class to GlassFish ESB. @Provider defines that the PoJo is a JBI provider, i.e. it provides business logic to other components. In this case, we are stating that the ReversePoJo class can provide business logic to other components, either by adding bindings to the service and exposing directly from the ESB, or being invoked within the ESB from the BPEL runtime. @Operation defines methods in the class that can be consumed - that is methods that other components can invoke. In the simplest case, @Operation methods take a String as a parameter and return a String. More complex types such as org.w3c.dom.Node and javax.jbi.messaging.NormalizedMessage can however be used for both input and output parameters. To implement the reverseString method in this class, add the following implementation. @Operation (outMessageTypeQN="{. davidsalter.com/ReversePoJo/}ReversePoJoOperationResponse") public String reverseString(String input) { StringBuffer reverse = new StringBuffer(); for (int i = input.length(); i > 0; i--) { reverse.append(input.charAt(i-1)); } return reverse.toString(); } This code simply takes the input argument, reverses it and returns it to the caller. One of the benefits of writing PoJos for deployment to the ESB runtime is that the PoJos can be easily tested. Unit tests can be added to a NetBeans project by selecting the Tools | Create Unit Tests menu option in the project explorer. A simple unit test for this class is shown below. public class ReversePoJoTest { public ReversePoJoTest() { } @Test public void testReverseString() { String input = "abcdefg"; ReversePoJo instance = new ReversePoJo(); String expResult = "gfedcba"; String result = instance.reverseString(input); assertEquals(expResult, result); } } Creating A SOAP Binding for PoJos So far, we've looked at PoJos and seen how we can create a PoJo project that can be deployed to the ESB, and how we can create PoJos within the project. To deploy our ReversePojo to the ESB, we need to add a SOAP binding to it. Within GlassFish ESB, we do this as a Composite Application. To create a Composite Application, select File | New Project and create a Composite Application from the SOA category on the New Project dialog screen. Create a new Composite Application called ReverseCompositeApplication as shown in the following screen shot. Project Name: ReverseCompositeApp Upon creating a blank composite application, the design surface for the application is shown. This surface allows the developer to add different JBI consumers and providers (such as the PoJo we developed earlier) to the composite application, wire them together and add different bindings to the application. For this sample application, we first need to drag and drop the ReversePoJo java application into the JBI modules section of the design surface. This then registers our PoJo as a JBI module to use in the application. Next, we need to add a SOAP binding onto the application. This is done by dragging and dropping a SOAP WSDL binding from the WSDL Bindings palette onto the WSDL ports of the design surface. At this point we need to build the Composite application so that NetBeans can correctly display the components and we can wire them together. After building the application, simply drag a connection from the SOAP binding consumer port to the PoJo provider port to complete the application. What we've done here is created a Composite Application that uses the ReversePojo JBI module as a service provider. We've then added a SOAP binding onto the Composite Application so that external clients can access the PoJo via SOAP calls. Testing the Composite Application Now that we've written our simple composite application, we need to test it to ensure that it is all working as expected. To create a new test case, right click on the Test node for the ReverseCompositeApp and select New Test Case. When creating the test case, NetBeans will display a list of all of the WSDL files found within the Composite Application so that you can specify which one to test. We want to select the WSDL for the composite application so that we can perform a full integration test on the application. On the Select WSDL Document dialog, select ReverseCompositeApp.wsdl After selecting the WSDL to test, we need to select the operation we wish to test. Select ReversePoJoOperation on the following screen. Finally, select Finish to create the test case. When the test case is created, NetBeans will automatically show the XML that is to be passed to the web service for testing. This is stored in a file called Input.xml. <soapenv:Envelope xsi:schemaLocation= "" xmlns:xsi="" xmlns:xsd="" xmlns:soapenv="" xmlns: <soapenv:Body> <rev:ReversePoJoOperation> <part1>abcdefg</part1> </rev:ReversePoJoOperation> </soapenv:Body> </soapenv:Envelope> Edit this file and change the value of the <part1/> element as shown above. This data will be passed into the reverseString method of the ReversePoJo class, so we would expect to get a return result the same as we did for the JUnit test earlier - gfedcba. Right click on the unit test and select the Run option to start the unit test. NetBeans will package up the Composite Application and deploy it to the JBI runtime, starting GlassFish if necessary. Since we have not specified any expected test results, NetBeans will ask if the results from the test are to be stored for future test runs. If we select Yes, the results are stored in the Output.xml file so when future tests are run, NetBeans will compare the test results with this file to indicate whether the tests are successful or not. Opening up the Output.xml will show the test results. <?xml version="1.0" encoding="UTF-8" standalone="no"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV= "" xmlns:xsi="" xsi: <SOAP-ENV:Body> <m:ReversePoJoOperationResponse xmlns:m="ReverseCompositeApp" xmlns: <part1>gfedcba</part1> </m:ReversePoJoOperationResponse> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Hopefully, if everything has gone to plan, you will see the the <part1> element successfully contains the value gfedcba - the reverse of the input string. Summary In this article I've shown how GlassFish ESB can be used to create and deploy PoJo based JBI components and how composite applications can be created to consume the PoJo services with different WSDL bindings. Although a simple example, hopefully this shows the power of the GlassFish ESB runtime which when combined with the design time tools of NetBeans can greatly aid development of integration and SOA based applications. The source code for this article can be downloaded from:.
https://www.packtpub.com/books/content/developing-soa-applications-using-pojos
CC-MAIN-2017-13
refinedweb
1,476
53.21
. their final names, and finally we have done some namespace factoring work. Now, the primitives, host APIs, and parts’ APIs are all in separate namespaces.. such thing as an “exceptional condition”. Whether a condition is exceptional or not depends on the context of usage, --- but reusable libraries rarely know how they will be used. For example, OutOfMemoryException might be exceptional for a simple data entry application; it’s not so exceptional for applications doing their own memory management (e.g. SQL server). In other words, one man’s exceptional condition is another man’s chronic condition. 4 years ago, I blogged about Framework Design Guidelines Digest. At that time, my blog engine did not support attaching files and I did not have a convenient online storage to put the document on, and so I asked people to email me if they want an offline copy. Believe it or not, I still receive 1-2 emails a week with requests for the offline copy. Now that I have a convenient way to put the document online, and the fact that I wanted to make some small updates, I would like to repost the digest. The abstract is below and the full document can be downloaded here. This document is a distillation and a simplification of the most basic guidelines described in detail in a book titled Framework Design Guidelines by Krzysztof Cwalina and Brad Abrams. Framework Design Guidelines were created in the early days of .NET Framework development. They started as a small set of naming and design conventions but have been enhanced, scrutinized, and refined to a point where they are generally considered the canonical way to design frameworks at Microsoft. They carry the experience and cumulative wisdom of thousands of developer hours over several versions of the .NET Framework..
http://blogs.msdn.com/KCwalina/
crawl-002
refinedweb
300
52.8
Note: This post won’t make sense here. Refer to the original post. This is a test of a guide for embedding code on Blogger found on Geed Talkin Siebel. I’ve some code that I’d like to share. When I first learnt Java, I saw these few lines of code. When I was still in secondary school, one of my classmates complained about the syntactic and conceptual complexity of the Deeply impressed by what I’ve done using Java, I didn’t took his words. After several years, I looked at the code for handling zipped files in Apache Tomcat 2.5, and I understand him a little bit. A year ago, when I looked at the official web page of Apache Commons FileUpload impatiently, I could get nothing from the sample code there. Fortunately, with the debugger in Eclipse, I managed to apply the knowledge on the user guide on that site. I’m sure that without any debugging tools, I can never get the job done! Recently, when I backed up my files, I browsed a tutorial about extracting a zipped file on CodeJava and looked at the code there, and I’ve found out that even though I managed to use the ZipInputStream class to handle zipped archives, I still have no idea on how the machine works because the language is too high level. The story ends here. In the past few months, without any knowledge and effort to get a good display of the source code, I just typed the following codes directly into the HTML view of the WYSIWYG editor of Blogger. By doing so, the output is like this: #include <iostream> using namespace std; int main(void) { cout << "Hello world!" << endl; return 0; } Apart from unattractive appearance, the above list doesn’t have line numbers. Though one can easily select and copy and code into a text editor, this is inefficient, when compared to SyntaxHighlighter. After motivation, what’s needed is action. Following the guide mentioned above, I clicked the “copy to clipboard” icon at the top right-hand corner of relevant blocks of source code, and pasted them into the HTML of the template. Don’t worry about the single quotes in line 219. It works fine. Without a successful experience of getting it work, I thought that the above guide didn’t work and had treated it as another guide that I can’t make use of. (After getting things work, I think I was unfair to its author by simply saying that “it doesn’t work!”) I suspected that Blogger’s dynamic view templates inhibits the use of SyntaxHighlighter, just like the case in MathJax, and would like to change the template of this blog. However, the space of displaying figures would be reduced. After that, I gave up this idea and tried to find some way to get SyntaxHighlighter work with the dynamic view. Then I found a detailed but a little bit complicated guide for impatient users on Crux Framework. Luckily, I managed to find another post on doing the same thing. It really saves the day! Pasting the three lines of code at the bottom, it finally works! Yes, there’s just three lines. After getting things done, I’ve realised that for dynamic views, there’s only one missing step in the first guide, which is the last part of the last guide. I can now start embedding source code into my blog posts. For an angled block <>, they need to be converted to <tag> so that the JavaScript will run without errors. It is better to leave it to an online HTML encoder to do this tedious task. One final note: for indentation of source code with tabs, it’s better to convert it to whitespaces first because toggling between the “Compose” and “HTML” modes of the online editor on Blogger will lead to disappearance of the tabs. The replacement is not difficult in Vim. Issuing the command :[range]s:^\t: [num_of_times]: will do. (It depends on the tabstop option on Vim. Adapt it according to your needs.)
https://vincenttam.github.io/blog/2014/01/06/testing-online-code-syntax-highlighters-for-blogs-1-syntaxhighlighter/
CC-MAIN-2019-22
refinedweb
685
71.55
25 February 2011 03:04 [Source: ICIS news] SINGAPORE (ICIS)--Methanex said on Friday it has nominated its methanol Asian posted contract price (APCP) for March at $420/tonne (€307/tonne), a rollover of February’s price. The level was high considering current spot values at around $350/tonne (€252/tonne) CFR (cost and freight) ?xml:namespace> Another buyer said there was a big gap in buying and selling ideas but declined to comment on the actual discounted price. On Thursday, Methanex also rolled its The company had also posted its first quarter methanol European Posted Contract Price at €325/tonne FOB (free on board) (
http://www.icis.com/Articles/2011/02/25/9438653/methanex-rolls-over-420tonne-apcp-for-march-methanol.html
CC-MAIN-2014-49
refinedweb
105
55.58
21 December 2011 17:37 [Source: ICIS news] (ICIS)--Sibur has agreed to sell its fertilizer assets to holding company Siberian Business Union, the Russian major petrochemical holding company said on Wednesday. Sibur will sell a 100% stake in its fertilizer business, SIBUR-Fertilizers, it said. The assets being transferred consist of OAO Azot in Kemerovo and the Angarsk Nitrogen Fertilizer Plant, both in Siberia, as well as Biysk Railcar Repair Enterprise. However, the company said its Mineral Fertilizer asset in ?xml:namespace> The deal is to be approved during the nearest board of directors meeting, it added. Sibur said fertilizers is a non-core businesses for the company, and it is selling the assets to focus on its main business, petrochemistry. Sibur declined to say how much it raised from the sale. SBU is involved in coal mining, engineering and transportation. For more on Sibur
http://www.icis.com/Articles/2011/12/21/9518593/sibur-agrees-to-sell-fertilizer-assets-to-siberian-business-union.html
CC-MAIN-2014-10
refinedweb
146
55.54
a reduced test case that happens in some of our async support for iOS, and throws an ArgumentException from the compiler generated code when some of the captured variables are accessed: using System.Threading.Tasks; using System.Threading; using System; class X { static void Main () { var x = new X (); x.Run (); Thread.Sleep (6000); } Task<int> AnimateAsync (Action callback) { callback (); return null; } void SecondLevel (Action callback) { callback (); } async void Run () { var ret = await AnimateAsync (() => { SecondLevel (() => { // This throws System.ArgumentException: Value does not fall within the expected range. Console.WriteLine (this); }); }); } } This sample, should throw a NullReferenceException when ran (that is what CSC compiled output does). The reason it throws is because I took a lot of code out to make the test case simpler (the return null from task). But Mono's C# compiler output raises an ArgumentException when running on Mono and raises a FatalExecutionEngine error on Windows (error code 0xc0000005) and is described as "This error may be a bug in the CLR or in the unsafe or non-code verifiable portiions of user code. Common sources of this ug include user marshaling errors for COM-interop or PImvoke which may corrupt the stack" Fixed in master
https://bugzilla.xamarin.com/14/14351/bug.html
CC-MAIN-2021-39
refinedweb
199
50.26
Moving from MongoDB to Couchbase server This is a developer-focused guide to moving your application’s data store from MongoDB to Couchbase Server, following on from Laurent’s guide to making the move from PostgreSQL. While it doesn’t cover every corner-case, it does offer pointers to what you should consider when planning your migration. Versions This guide is written for Couchbase Server 4.1 and MongoDB 3.2.. Differences between BSON and JSON It’s likely that your application stores JSON-style documents in MongoDB, so we’ll start there. MongoDB stores data in the BSON format, which is a binary JSON-like format. The key difference for us is that BSON records additional type information. When you export data from MongoDB using a tool such as mongoexport the tool will produce JSON that preserves that type information in a format called Extended JSON. Let’s take a look at an example. First in standard JSON: And now in Extended JSON: As you can see, the Extended JSON is still valid JSON. That means you can store, index, query and retrieve it using Couchbase Server. However, you’ll need to maintain that additional type information on the application layer. Alternatively, you could convert the Extended JSON to standard JSON before you import it into Couchbase Server. that might seem more like an issue for your ops team. However, it does mean that it’s easier to rely on Couchbase Server should usage of your software grow. Replication and consistency Couchbase Server maintains a single active copy of each document and then up to two favouring somewhat unsuitable as namespaces and instead they serve as a way to share configuration For ad-hoc query, Couchbase Server offers N1QL. N1QL is a SQL-like language and so is quite different from MongoDB’s query. Let’s look at an example where we return the name of employees from the London office who have worked there for two years or more, ordered by start date: As you can see, N1QL is very familiar. Read more about N1QL and about views. Concurrency In Couchbase Server, locking always happens at the document level and there are two types: - pessimistic: no other actor can write to that document until it is released or a timeout is hit - optimistic: use CAS values to check if the document has changed since you last touched it and the act accordingly. In a distributed database, optimistic locking is a much more neighbourly approach., Elasticsearch, .NET’s Linq and there’s a NodeJS ODM called Ottoman. Conclusion Moving from one document store to another is relatively straightforward, as the broad shape of your data doesn’t need to change all that much. our forums.
https://blog.couchbase.com/moving-from-mongodb-to-couchbase-server/
CC-MAIN-2018-47
refinedweb
456
60.75
In this lesson you’ll get an introduction into Pandas’ basic data structures: Series and DataFrame. However, this video focusses on the Pandas Series data structure. Basic Pandas Data Structures 00:01 We need to create a new Notebook. Since we’re going to be using stock data here, we’re going to call it Stocks. I’m going to simply do something like this. 00:11 So, we’re going to simply start off by calling our first portion here, we’ll call it # Pandas. We’ll just have it be a playground for the Pandas data points here. 00:21 Let’s start by importing a few of the data structures that Pandas includes. So, from pandas import DataFrame, Series. We’re going to first begin with a Series DataFrame, which is pretty awesome. 00:39 What it’s designed for is to explicitly handle the time-indexed data points. So let’s say, yesterday you had five apples, today you have four, tomorrow you have two. 00:48 That’s the type of thing that the Series object is really good at handling. I’m going to start off with the “Hello, World!” version of Series objects. 00:55 What you need to do is pass it some values, so these would be your apples. Okay, so let’s say you had [1, 2, 3, 4], and then you also could pass an optional index, which would be something like this, which you’d use to index the thing. 01:09 As I said before, these are most useful when they are dates or timestamps or something that’s happening over a period of time, which is the real power of Series. 01:19 Something like stock data, as you’ll see later in another example. But what you can see here is that… 01:28 Oh, I missed a comma there. What you’ll see here is that 01:34 Pandas takes that and makes it into a representational data format. Now it’s representing the int64. The Series data only has things across the top so you understand what is going on. 01:48 So let’s say we’ll only have one set of values, so these sets of values are int64. But if I were to go on ahead and make these all floating points, 02:02 I believe they’d be float64. And then that’s how you go about doing stuff like that. So, another thing you can do here is you can go s.index, and that’ll give you the index column. As you can see, they’re objects, they’re strings saying what the index is. And that’s how you go about it. 02:18 You can do all kinds of other things, which you can dive into the documentation to get. So you can do the mean of that, which ends up being the sum of all of them divided by the length. 02:28 And there’s a bunch of other options that you can do with Series data with Pandas. Next, I’m going to show you an example with some time stamps over time and other things you can do with Series. 02:38 The very first thing we’ll need is some random data, so we’re going to go import random. Then we’re going to go do data = [random.randint()], between 0 and 10000 for x in xrange() of 10000. 03:04 We’re then going to go provide an index. That index will be DateTimeIndex. It starts on January 1st, 2013. The periods will be—that’s the number of samplings we would take—is equal to the length of data. And how frequently they’re sampled is provided by the freq (frequency). 03:32 We can then go something like this. We’re going to go minutely. We’re then going to go s = Series(data, index=index). So, what we really did was first, 03:50 Then what we did was create a DateTimeIndex 04:03 and freq. So, this is a minutely frequency, so when we look at our object here, what we should see is the first minute in January 1st, 2013, second minute in January 1st, 2013, and so on and so forth. 04:19 So as you can see here, we have 10,000 things. Frequency is minutely, of type int64. So, that’s how the Series objects look. You can do a bunch of things like .tail() once you’re dealing with a lot of data. 04:34 You can look at the last, by default it says five, but you can provide a number here, like 10. .head() is vice versa, it’ll give you the first ten, like so. The really cool thing that you can do, though, is seeing that we have s now here… We’ll call that s, we’ll evaluate that out. 04:52 All right. I believe that’s the case. Next, we’ll go s_daily = s.resample(), resample that at a daily frequency. So what that ends up doing, it ends up resampling all the Series objects that you have in your data and it gives you all of the days that we span it to. 05:16 So according to this, it takes over 10,000 samplings minutely, it gives us about seven days of data. And as you can see here, the totals for each of those days, they’re added together. 05:27 So that gives you an easy way from going from a very low frequency to a very high frequency. You can fill forward to fill back. If you go from a low frequency to a much higher frequency, you can fill, you can carry forward. That’s generally how you’d use Series objects, and that’s where really their power lies. Next, let’s go into DataFrames. 05:47 In the previous example, I said this was calculating the sums. This is incorrect. It is actually calculating the means. In order to calculate the sums, you need to pass a how method to the .resample(), which will then resample and then sum the daily values. 06:02 So, it’ll sum all the values that are in the particular day here. So when we run this again, the numbers are much longer. It makes much more sense. ?
https://realpython.com/lessons/basic-pandas-data-structures/
CC-MAIN-2021-17
refinedweb
1,070
83.56
Dependency Injection with Spring.Net Dependency Injection with Spring.Net So, where does Spring.Net fit in? Well, the core of the Spring.Net framework is built on top of the idea of factories and Dependency Injection. Spring.Net is basically just one big configurable object factory. Instead of having to hard-code the often complex factory logic, you simply can provide Spring.Net an XML configuration file to "wire" your objects together. I could go on, but I have already talked enough about theory. Instead, look at some examples of Spring.Net in action. For this article, I will introduce you to the framework and show you a small sample web application that touches on several of the key Dependency Injection principles of Spring.Net. Note that there are many options for Dependency Injection in Spring.Net and this sample is far from a complete example of everything Spring.Net can do, but it is a good start. Configuring Spring.Net Start by adding Spring.Net to your project. First, download the spring.Net library at and add the assemblies to you project. You should only need the core assembly for this example but the others can be added too. Next, add the following code to your Web.config or App.config file: <configSections> <sectionGroup name="spring"> <section name="context" type="Spring.Context.Support.ContextHandler, Spring.Core" /> <section name="objects" type="Spring.Context.Support.DefaultSectionHandler, Spring.Core" /> </sectionGroup> </configSections> <spring> <context> <resource uri="config://spring/objects" /> </context> <objects xmlns=""> <!- ToDo: Put mappings here. --> </objects> </spring> Listing 1: Spring.Net Configuration This is just the basic setup configuration for Spring.Net. There isn't much to it really. At this point, you can start adding objects definitions inside the objects tag. A Simple Message Object Start with a simple message object. This example will show you how to create an object definition where the application retrieves an object with a simple string injected into it by Spring.Net. First, create an object definition and interface as such: public interface ISimpleMessage { String Message { get; } } public class SimpleMessage : ISimpleMessage { private String _message; public SimpleMessage() { } public SimpleMessage(String message) { _message = message; } public String Message { get { return _message; } set { _message = value; } } } Listing 2: Simple Message Object Notice that the interface only provides a method for reading the message and not changing it. This is because the message is expected to be set already when any object reads the message though the interface. Next, make a mapping for the object in your Spring.Net configuration: <objects xmlns=""> <!-- Messages --> <object name="SimpleMessage" type="SimpleMessage, __Code" singleton="false"> <constructor-arg </object> </objects> Listing 3: Simple Message Object Mapping What you did here is make a definition for constructing this object. The name of the object is the handle that will be used to retrieve this object. The class simply defines the class of the object instance. (Note that "__Code" is just the generic namespace for web apps. In non-web apps and when referencing assemblies of objects, use the full namespace instead.) Also, note the "singleton" property is set to false in this example. You will learn what this means and more about the singleton pattern later in this article. Finally, to set the message property of the object, you set a message value as a constructor argument for the object. So now, when the object is initialized, the message will be initialized in the constructor. Now, run some code to retrieve the object from the Spring.Net context: // Get Spring.Net context using (IApplicationContext ctx = ContextRegistry.GetContext()) { // Get a new instance of the message object ISimpleMessage simpleMessage1 = (ISimpleMessage)ctx.GetObject("SimpleMessage"); // Read the message objects message as initialized by Spring.Net ltlSimpleMessage1.Text = simpleMessage1.Message; } Listing 4: Retrieve Simple Message from Spring.Net Context It is pretty easy, right? All this code does is retrieve the Spring.Net application context and then ask the context for an instance of the "SimpleMessage" object. This piece of code only knows the name of the object and what interface to use. It does not know what concrete class is implemented by the interface and it does not have to worry about how to inject the correct message into the object because Spring.Net handles it. Page 2 of 6
http://www.developer.com/net/csharp/article.php/10918_3722931_2/Dependency-Injection-with-SpringNet.htm
CC-MAIN-2016-22
refinedweb
710
60.21
Writing templates¶ Wagtail uses Django’s templating language. For developers new to Django, start with Django’s own template documentation: Templates Python programmers new to Django/Wagtail may prefer more technical documentation: The Django template language: for Python programmers You should be familiar with Django templating basics before continuing with this documentation. Templates¶ Every type of page or “content type” in Wagtail is defined as a “model” in a file called models.py. If your site has a blog, you might have a BlogPage model and another called BlogPageListing. The names of the models are up to the Django developer. For each page model in models.py, Wagtail assumes an HTML template file exists of (almost) the same name. The Front End developer may need to create these templates themselves by referring to models.py to infer template names from the models defined therein. To find a suitable template, Wagtail converts CamelCase names to snake_case. So for a BlogPage, a template blog_page.html will be expected. The name of the template file can be overridden per model if necessary. Template files are assumed to exist here: name_of_project/ name_of_app/ templates/ name_of_app/ blog_page.html models.py For more information, see the Django documentation for the application directories template loader. The data/content entered into each page is accessed/output through Django’s {{ double-brace }} notation. Each field from the model must be accessed by prefixing page.. e.g the page title {{ page.title }} or another field {{ page.author }}. A custom variable name can be configured on the page model. If a custom name is defined, page is still available for use in shared templates. Additionally request. is available and contains Django’s request object. Static assets¶ Static files e.g CSS, JS and images are typically stored here: name_of_project/ name_of_app/ static/ name_of_app/ css/ js/ images/ models.py (The names “css”, “js” etc aren’t important, only their position within the tree.) Any file within the static folder should be inserted into your HTML using the {% static %} tag. More about it: Static files (tag). User images¶ Images uploaded to a Wagtail site by its users (as opposed to a developer’s static files, mentioned above) go into the image library and from there are added to pages via the page editor interface. Unlike other CMSs, adding images to a page does not involve choosing a “version” of the image to use. Wagtail has no predefined image “formats” or “sizes”. Instead the template developer defines image manipulation to occur on the fly when the image is requested, via a special syntax within the template. Images from the library must be requested using this syntax, but a developer’s static images can be added via conventional means e.g img tags. Only images from the library can be manipulated on the fly. Read more about the image manipulation syntax here How to use images in templates. Wagtail User Bar¶ This tag provides a contextual flyout menu. This tag may be used on standard Django views, without page object. The user bar will contain one item pointing to the admin. We recommend putting the tag near the top of the <body> element so keyboard users can reach it. You should consider putting the tag after any skip links but before the navigation and main content of your page. {% load wagtailuserbar %} ... <body> <a id="#content">Skip to content</a> {% wagtailuserbar %} {# This is a good place for the userbar #} <nav> ... </nav> <main id="content"> ... </main> </body>; } Varying output between preview and live¶ Sometimes you may wish to vary the template output depending on whether the page is being previewed or viewed live. For example, if you have visitor tracking code such as Google Analytics in place on your site, it’s a good idea to leave this out when previewing, so that editor activity doesn’t appear in your analytics reports. Wagtail provides a request.is_preview variable to distinguish between preview and live: {% if not request.is_preview %} <script> (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ ... </script> {% endif %}
https://docs.wagtail.org/en/v2.14.1/topics/writing_templates.html
CC-MAIN-2022-21
refinedweb
678
57.87
I'm creating a programs for a project with the propuse to create a simple address book... I have a class PersonInfo from where I get the information from the user...such as name, last name and so on. After I have a class contact...here I would like to save the contact inside and array...be able to print the all list of contact or just one...and ask to the user if he want to add a new one.....and in this class I have a bunch of probelms. I'm just a beginner. Someone can give me some idea about what i do wrong ? Code java: import java.awt.List; import java.util.ArrayList; public class Contact { String name, lastname, address, email, phone, notes; Contact() { ArrayList list; list = new ArrayList(10000); } Contact(int size) { ArrayList list = new ArrayList(size); } void insert() { List.add(); } void view() { for (int i = 0; i < list.size(); i++) { ((PersonInfo)list.get(i)).printEntry(); } } public Contact(String name, String lastname, String address, String email, String phone, String notes) { this.name = name; this.lastname = lastname; this.address = address; this.email = email; this.phone = phone; this.notes = notes; } public void printContact() { System.out.println("Name = " + name); System.out.println("Last Name = " + lastname); System.out.println("Address = " + address); System.out.println("E-mail = " + email); System.out.println("Phone = " + phone); System.out.println("Note(s) = " + notes); } }
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/6240-phone-book-printingthethread.html
CC-MAIN-2015-14
refinedweb
229
64.67
Tools for streaming data to and from S3. Part of the Gilt Foundation Classes. UsageUsage The library provides tools to integrate akka-streams with Amazon S3 storage service. To use it add to your dependencies: "com.gilt" %% "gfc-aws-s3" % "0.1.0" The library contains akka-stream Sources and Sinks to Stream data from and to S3. SinksSinks Allows uploading data to S3 in a streaming manner. The underlying implementation uses S3 Multipart upload API. Due to the API requirements the size of the part could not be less than 5Mb. You are to provide the size of the chunk on Source creation, the internals will automatically slice the incoming data into the chunks of the given size and upload those chunks to S3. To create the source: import com.gilt.gfc.aws.s3.akka.S3MultipartUploaderSink._ val bucketName = "test-bucket" val fileKey = "test-file" val s3Client = AmazonS3ClientBuilder.standard() .withRegion("us-east-1") .build val chunkSize = 6 * 1024 * 1024 // 6 Megabytes val sink = Sink.s3MultipartUpload(s3Client, bucketName, fileKey, chunkSize) The sink could also be created in different style manner: import com.gilt.gfc.aws.s3.akka.S3MultipartUploaderSink val sink = S3MultipartUploaderSink(s3Client, bucketName, fileKey, chunkSize) The materialized value of the sink is the total length of the uploaded file in case of successful uploads. Please, bear in mind, that incomplete uploads eat S3 space (meaning cost you some money) but are not shown in AWS S3 UI. Probably the best idea is to configure S3 so that it will delete parts of the incomplete uploads automatically after given amount of time (docs) SourcesSources Allows accessing S3 objects as a stream source in two different manners - by parts and by chunks. The difference is subtle but important: - accessing by parts means that you know or assume that the file was uploaded using S3 multipart API. If the was not uploaded using multipart API it would be downloaded in a single chunk. This will not eat memory, as the source does real streaming, and allows to control the buffer size for download, but could lead to some problems with very large files, as S3 tends to drop long-lasting connections sometimes. To do that, use: import com.gilt.gfc.aws.s3.akka.S3DownloaderSource._ val bucketName = "test-bucket" val fileKey = "test-file" val s3Client = AmazonS3ClientBuilder.standard() .withRegion("us-east-1") .build val memoryBufferSize = 128 * 1024 // 128 Kb buffer val source = Source.s3MultipartDownload(s3Client, bucketName, fileKey, memoryBufferSize) - accessing by chunks means that you provide a size of the part to download, and the source will ultimately use Rangeheader to access file in "seek-and-read" manner. This approach could be used with any S3 object, regardless of whether it was uploaded using multipart API or not. The size of the chunk will affect the number of the requests sent to S3. To do that use: import com.gilt.gfc.aws.s3.akka.S3DownloaderSource._ val bucketName = "test-bucket" val fileKey = "test-file" val s3Client = AmazonS3ClientBuilder.standard() .withRegion("us-east-1") .build val chunkSize = 1024 * 1024 // 1 Mb chunks to request from S3 val memoryBufferSize = 128 * 1024 // 128 Kb buffer val source = Source.s3ChunkedDownload(s3Client, bucketName, fileKey, chunkSize, memoryBufferSize) The pieces of code above will crease a Source[ByteString], where each ByteString represents a part of the.
https://index.scala-lang.org/gilt/gfc-aws-s3/gfc-aws-s3/0.1.1-RC1?target=_2.11
CC-MAIN-2020-50
refinedweb
543
55.24
Distributed machine learning is complicated, and when combined with deep learning models that are also complex, it can make just getting anything to work into a research project. Add in setting up your GPU hardware and software, and it may become too much to take on. Here we show that Hugging Face's Accelerate library removes some of the burden of using a distributed setup, while still allowing you to retain all of your original PyTorch code. When combined with Paperspace's multi-GPU hardware and their ready-to-go ML runtimes, the Accelerate library makes it much easier to run advanced deep learning models a user may have found difficult before. New possibilities are opened up that might otherwise would go unexplored. What is Accelerate? Accelerate is a library from Hugging Face that simplifies turning PyTorch code for a single GPU into code for multiple GPUs, on single or multiple machines. You can read more about Accelerate on their GitHub repository here. Motivation With state-of-the-art deep learning at the cutting edge, we may not always be able to avoid complexity in the real data or models, but we can reduce the difficulty in running them on GPUs, and on more than one GPU at once. Several libraries exist to do this, but often they either provide higher-level abstractions that remove fine-grained control from the user, or provide another API interface that needs to be learned first before it can be used. This is what inspired the motivation of Accelerate: to allow users who need to write fully general PyTorch code to be able to do so, while reducing the burden of running such code in a distributed manner. Another key capability provided by the library is that a fixed form of code can be run either distributed or not. This is different from the traditional PyTorch distributed launch that has to be changed to go from one to the other, and back again. Code changes to use Accelerate If you need to use fully general PyTorch code, it is likely that you are writing your own training loop for the model. Training Loop A typical PyTorch training loop goes something like this: - Import libraries - Set device (e.g., GPU) - Point model to device - Choose optimizer (e.g., Adam) - Load dataset using DataLoader (so we can pass batches to the model) - Train model in loop (once round per epoch): - Point source data and targets to device - Zero the network gradients - Calculate output from model - Calculate loss (e.g., cross-entropy) - Backpropagate the gradient There may be other steps too, like data preparation, or running the model on test data, depending on the problem being solved. Code Changes In the readme for the Accelerate GitHub repository, the code changes compared to regular PyTorch for a training loop like the above are illustrated, via highlighting of the lines to be changed: Green means new lines that are added, and red means lines that are removed. We can see how the code corresponds to the training loop steps outlined above, and the changes needed. At first glance, the changes don't appear to simplify the code much, but if you imagine the red lines are gone, you can see that we are no longer talking about what device we are on (CPU, GPU, etc.). It has been abstracted away, while leaving the rest of the loop intact. In more detail, the code changes are: - Import the Accelerator library - Use the accelerator as the device, which can be CPU or GPU - Instantiate the model, without having to specify a device - Setup the model, optimizer, and data to be used by Accelerate - We don't need to point source data and targets to device - Accelerator does the backpropagation step Multi-GPU The code above is for a single GPU. On their Hugging Face blog entry, the Accelerate authors then show how PyTorch code needs to be changed to enable multi-GPU using the traditional method. It includes many more lines of code: import os ... from torch.utils.data import DistributedSampler from torch.nn.parallel import DistributedDataParallel local_rank = int(os.environ.get("LOCAL_RANK", -1)) ... device = device = torch.device("cuda", local_rank) ... model = DistributedDataParallel(model) ... sampler = DistributedSampler(dataset) ... data = torch.utils.data.DataLoader(dataset, sampler=sampler) ... sampler.set_epoch(epoch) ... The resulting code then no longer works for a single GPU. In contrast, code using Accelerate already works for multi-GPU, and continues to work for a single GPU as well. So this sounds great, but how does it work in a full program, and how is it invoked? We will now work through an example on Paperspace to show Accelerate in action. Add speed and simplicity to your Machine Learning workflow today Running Accelerate on Paperspace The Accelerate GitHub repository shows how to run the library, via a well documented set of examples. Here, we show how to run it on Paperspace, and walk through some of the examples. We assume that you are signed in, and familiar with basic Notebook usage. You will also need to be on a paid subscription so that you can access the terminal. Paperspace allows you to connect directly to a GitHub repository and use it as a starting point for a project. You can therefore point to the existing Accelerate repository and use it immediately. No need to install PyTorch, or set up a virtual environment first. To run, start a Notebook in the usual way. Use the PyTorch runtime under the Recommended tab, then scroll to the bottom of the page and toggle Advanced Options. Then, under Advanced Options, change the Workspace URL to the location of the Accelerate repository: . To begin with we are using a single GPU, so the default choice for the machine (P4000) is sufficient. We will proceed to multi-GPU later in this article. This will start the Notebook with the repo files present in the Files tab on the left-hand navigation bar. Because the examples supplied with the repo are .py Python scripts, and these work fine on Paperspace in this interface, we don't attempt to show them as a .ipynb notebook here. Although, if you want, the library can be launched from a notebook too. Let's see the example. Simple NLP example Hugging Face was founded on making Natural Language Processing (NLP) easier to access for people, so NLP is an appropriate place to start. Open a terminal from the left-hand navigation bar: Then there are a some short setup steps pip install accelerate pip install datasets transformers pip install scipy sklearn and we can proceed to the example cd examples python ./nlp_example.py This performs fine-tuning training on the well-known BERT transformer model in its base configuration, using the GLUE MRPC dataset concerning whether or not a sentence is a paraphrase of another. It outputs an accuracy of about 85% and F1 score (combination of precision and recall) of just below 90%. So the performance is decent. If you navigate to the metrics tab, you can see that the GPU was indeed used: The script can also be invoked with various arguments to alter behavior. We will mention some of these at the end. Multi-GPU For multi-GPU, the simplifying power of the library Accelerate really starts to show, because the same code as above can be run. Similarly, on Paperspace, to gain a multi-GPU setup, simply switch machine from the single GPU we have been using to a multi-GPU instance. Paperspace offers multi-GPU instances for A4000s, A5000s, A6000s, and A100s, though this varies from region to region. If you are running your Notebook already, you stop your current machine: Then use the dropdown in the left-hand navigation bar: to select a multi-GPU machine, and restart: Changing from P4000 to A4000x2 will work well here. Note: If you were not already running a single GPU machine, create a Notebook in the same way as for the single GPU case above, but choose an A4000x2 machine instead of the P4000 machine. Then, to invoke the script for multi-GPU, do: pip install accelerate datasets transformers scipy sklearn and run through its brief configuration steps to tell it how to be run here: accelerate config In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 Which type of machine are you using? ([0] No distributed training, [1] multi-CPU, [2] multi-GPU, [3] TPU): 2 How many different machines will you use (use more than 1 for multi-node training)? [1]: 1 Do you want to use DeepSpeed? [yes/NO]: no Do you want to use FullyShardedDataParallel? [yes/NO]: no How many GPU(s) should be used for distributed training? [1]: 2 Do you wish to use FP16 or BF16 (mixed precision)? [NO/fp16/bf16]: no Note that we say 1 machine, because our 2 GPUs are on the same machine, but we confirm that 2 GPUs are to be used. Then we can run as before, now using the launch command instead of python to tell Accelerate to use the config that we just set: accelerate launch ./nlp_example.py You can see that both GPUs are being used by running nvidia-smi in the terminal. More features As hinted at by the configuration file setup above, we have only scratched the surface of the library's features. Some other features that is has include: - A range of arguments to the launched script: see - Multi-CPU in addition to multi-GPU - Multi-GPU on several machines - Launcher from .ipynbJupyter notebook - Mixed-precision floating point - DeepSpeed integration - Multi-CPU with MPI Computer vision example There is also another machine learning example that you can run; it's similar to the NLP task that we have been running here, but for computer vision. It trains a ResNet50 network on the Oxford-IIT Pet Dataset. On our Notebook, you can add the following to a code cell to quickly run the example: pip install accelerate datasets transformers scipy sklearn pip install timm torchvision cd examples wget tar -xzf images.tar.gz python ./cv_example.py --data_dir images Conclusions and next steps We have shown how the Accelerate library from Hugging Face simplifies running PyTorch deep learning models in a distributed manner compared to traditional PyTorch, without removing the fully general nature of the user's code. Similarly, Paperspace simplifies accessing multi-GPU + PyTorch + Accelerate by providing an environment in which they are ready-to-go. For some next steps: - Check out the Accelerate GitHub repository for more examples and functionality - Run your own accelerated and distributed deep learning models on Paperspace Licensing Code in the Hugging Face GitHub repository is made available by them under the Apache License 2.0. Add speed and simplicity to your Machine Learning workflow today
https://blog.paperspace.com/multi-gpu-on-raw-pytorch-with-hugging-faces-accelerate-library/
CC-MAIN-2022-27
refinedweb
1,804
58.32
Random forest is one of the most well-known ensemble methods for good reason – it’s a substantial improvement on simple decision trees. In this post, I’m going to explain how to build a random forest from simple decision trees, and to test how they actually improve the original algorithm. Maybe you first need to know more about a simple tree; if that’s the case, take a look at my previous post. Furthermore, if you would rather read in Spanish, you can find the translation of the post here. Like in any other unsupervised learning method, the starting point is a set of features or attributes, and on the other side, what we would like to explain a set of labels or classes: What is a random forest? Random forest is a method that combines a large number of independent trees trained over random and equally distributed subsets of the data. How to build a random forest The learning stage consists of creating many independent decision trees from slightly different input data: - The initial input data is randomly subsampled with replacement. This step is what Bagging ensemble consists of. However, random forests usually include a second level of randomness; this time subsampling the features: - When optimising each node partition, we will only take into account a random subsample of the attributes. Once a large number of trees have been built, around 1000 for example, the classification stage works like this: - All trees are evaluated independently and averaged to compute the forest estimate. The probability that a given input belongs to a given class is interpreted as the proportion of trees that classify that input as a member of that class. What are the advantages of a random forest over a tree? Stability. Random forests suffer less overfitting to a particular data set than simple trees. Random forest versus simple tree. Test 1: We have designed two trading systems. The first system uses a classification tree and the second one uses a random forest, but both are based on the same strategy: - Attributes: A set of transformations of the input series. - Classes: For each day, it will be the sign of the next price return (i.e. binary responses): 1 if price moves up and 0 otherwise. - Learning stage: We will use the beginning of the time series to build the trees–3000 days in the example. - Classification stage: We will use the remaining years to test classifier performance. For each day in this period, the tree and the forest will return an estimate, 1 or 0, and its probability. Our strategy will buy when the probability of the class 1 is larger than the probability of the class 0, indicating up movement in the series, and sell otherwise. We will also use the classification probability to compute the trade’s magnitude. Let’s see what the results of these strategies are by applying them to several different financial series as “test*”: The result, positive or negative, is less extreme for random forest. It does not happen that the average result of a random forest is always better than a tree result, but the risk taking is always lower. That means better draw down control. The trees that make up the forest were trained with different yet similar datasets, different random subsamples of the original dataset. This provides the random forest with a better capacity to generalise and to perform better in new unknown situations. Random forest versus simple tree. Test 2: Let’s do a second test. Imagine that we would like to build again the previous trees. This time, instead of using 3000 historical data points as the train set, we are going to use 3100 data points. We would expect both strategies to be similar. Although random forest behaves as expected, this is not true for the classification trees, which are very prone to overfitting. We trained individual trees and random forests using slightly larger or smaller data sets, 2500 data to 3500 data points. Then we measured the variability of the results. In the following graphs, we show the range of the results and their standard deviation: It’s clear that the random forest technique is less sensitive variations in the training set. Therefore, it is not true that the random forest method is going to perform better than any classification tree. Nevertheless, we can assure that random forest guarantee better drawdown control and higher stability. These advantages are important enough to make the extra complexity worth it.
https://quantdare.com/random-forest-many-are-better-than-one/
CC-MAIN-2019-18
refinedweb
753
52.6
Providing software solutions since 1976 support.sas.com Knowledge Base Support Training & Bookstore Happenings Support Communities When using PROC CIMPORT with a transport file that contains a catalog containing single-byte or single-byte and double-byte characters such as HANKAKU-KATAKANA, you may see extraneous characters in the description fields of the recreated catalogs. This occurs under MVS in the Japanese LOCALE. Running SAS® in KATAKANA mode (DBCSLANG=KATAKANA) should allow both single-byte Katakana and DBCS characters to appear. Set the DBCSLANG= option to KATAKANA (DBCSLANG=KATAKANA) and try to import the file again with PROC CIMPORT. If the problem persists, and you want to try an additional workaround, recreate the transport file with the OUTTYPE option of PCIBM. If the OUTTYPE=PCIBM option is used when creating the CPORT file, you may see proper description fields after using PROC CIMPORT to convert the catalog. The code below illustrates how to do this. Apply the hot fix if you continue to experience a problem. :
http://support.sas.com/kb/17/841.html
CC-MAIN-2013-20
refinedweb
166
53.92
As mentioned, the single “controversial” part of my talk at Flash on the Beach this year was in questioning polling for input in Flash games. In truth, it was hardly controversial. No death threats. No twitter-based lynch mobs. Just that a couple of guys came up to me and politely expressed disagreement later, and we had a conversation about it. But, as said conversations were done later in the evening at the Old Ship, I thought it might be worth discussing in a clearer state of mind. So the idea is that I said I thought it was better, i.e. more efficient, to use events for keyboard and mouse input, rather than polling. A few people have made keyboard manager classes which allow you to check which keys are down. You can then poll this class to see if the navigation / action keys you are interested in are currently down, and act accordingly. If you are doing this in the game loop, this is going to happen on every frame or interval, and to me, this does not make sense. To demonstrate this kind of setup, here is a bare bones game class. It’s using Richard Lord‘s KeyPoll class. package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; import flash.ui.Keyboard; import uk.co.bigroom.input.KeyPoll; public class KeyboardGame extends Sprite { private var keyPoll:KeyPoll; // view private var character:Sprite; // model private var xpos:Number = 200; private var speed:Number = 0; private var direction:Number = 0; public function KeyboardGame() { stage.scaleMode = StageScaleMode.NO_SCALE; stage.align = StageAlign.TOP_LEFT; keyPoll = new KeyPoll(keyPoll.isDown(Keyboard.LEFT)) { direction = 180; speed = -5; } else if(keyPoll.isDown(Keyboard.RIGHT)) { direction = 0; speed = 5; } else { speed = 0; } } protected function update():void { xpos += speed; } protected function render():void { character.x = xpos; character.rotation = direction; } } } For the sake of simplicity, this is all in one class, with the view being a sprite with some graphics, and the “model” being a few class variables. The game loop runs on every frame and polls for input, updates the model, and renders the view. The input method polls the keyPoll class, checking to see if the left or right cursor keys are pressed. If so, it adjusts the direction and speed in the “model”. If neither is pressed, direction is unchanged and speed is 0. The update method simply updates the xpos based on the speed and the render method moves and rotates the character based on the model. Run it, press the left and right keys and the character turns and moves in the right direction. Yay. So what was I proposing instead? To cut out the polling part. The idea being that the only time you need to handle input is when a key goes down or up. Not on every single frame. So you do something like this: package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; import flash.events.KeyboardEvent; import flash.ui.Keyboard; public class KeyboardGameNoPoll extends Sprite { // view private var character:Sprite; // model private var xpos:Number = 200; private var speed:Number = 0; private var direction:Number = 0; public function KeyboardGameNoPoll() { stage.scaleMode = StageScaleMode.NO_SCALE; stage.align = StageAlign.TOP_LEFT; character = new Sprite(); character.graphics.lineStyle(0); character.graphics.drawCircle(0, 0, 10); character.graphics.lineTo(20, 0); character.x = xpos; character.y = 200; addChild(character); addEventListener(Event.ENTER_FRAME, gameLoop); stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown); stage.addEventListener(KeyboardEvent.KEY_UP, onKeyUp); } protected function onKeyDown(event:KeyboardEvent):void { if(event.keyCode == Keyboard.LEFT) { direction = 180; speed = -5; } else if(event.keyCode == Keyboard.RIGHT) { direction = 0; speed = 5; } } protected function onKeyUp(event:KeyboardEvent):void { if(event.keyCode == Keyboard.LEFT || event.keyCode == Keyboard.RIGHT) { speed = 0; } } protected function gameLoop(event:Event):void { // input by events update(); render(); } protected function update():void { xpos += speed; } protected function render():void { character.x = xpos; character.rotation = direction; } } } Here, we add event listeners for key up and key down. On key down, we check which keys is being pressed, and update the model accordingly. When either of the special keys is released, set speed to 0. So, no input method, but update and render are exactly the same. Now, when a key is being pressed, it’s going to get multiple, repeated key down events, so you might say this is basically just polling anyway, even if only when the key is down. Yes, but it’s not like that is just a substitution for the polling in the first version. Actually, if you look in the KeyPoll class, or any other similar keyboard manager class, you are going to find an event listener for key up and key down. And just like this version, those are going to be run repeatedly when a key is held down. So the polling in the first version is ON TOP of that repeated key event stuff, which is going to happen regardless. Problems, and a Comprimise Even in this simple example, however, a problem can soon arise: in the non-polling version, you could run into this situation: 1. User presses right key and holds it. Character starts moving right. 2. User presses left key and holds it, while still holding right key. Character starts moving left. 3. User releases either key, while still holding the other. Character stops, even though one of the action keys is still down. This would not happen in the polling version. If this kind of problem happens in such a simple example, you are bound to run into it in many other forms in more complex input schemes. So it’s likely that you will have to start adding some additional logic to address it. My initial response was to create a leftKeyDown and rightKeyDown property, set these to true in the key down handler, and false in the key up handler and check the value of both to see if speed should be 0: protected function onKeyUp(event:KeyboardEvent):void { if(event.keyCode == Keyboard.LEFT) { leftKeyDown = false; } else if(event.keyCode == Keyboard.RIGHT) { rightKeyDown = false; } if(!leftKeyDown && !rightKeyDown) { speed = 0; } } Unfortunately, this still breaks. If the user presses and holds right, and taps and releases left, the character will continue to move left. So you could do something ridiculous like this:; } } But this is just duplicating what’s going on in the key down handler. So I could extract the duplicated code into another method, but I’m starting to feel like I’m unnecessarily complicating the code for the sole reason of avoiding key polling. I don’t want to be that guy. There’s a beautiful simplicity in the polling version. My main concern is over performance, since the keyboard managers I’ve seen usually involve array lookups. But looking at Richard’s, it’s using byte arrays and liberal bitwise operators (almost to the point of obfuscation). So my initial guess is that’s not too inefficient. Even so, something about it doesn’t sit well with me. What seems to make sense to me is to create a sort of custom keyboard handler for each game. I’ve done this in other games and it worked out pretty well. The thing about keyboard managers is they are generic and reusable, and thus have to be able to take note of, store, and retrieve the state of any possible key. So some type of an array or collection is always needed. But for your specific game, there probably at the most a half dozen keys you are really interested in. These can be stored as class properties with getters (or public properties if that’s not too taboo for you). These properties can also be named something logical to the game, such as moveLeft, moveRight, jump, shoot, etc. rather than Keyboard.LEFT, Keyboard.SPACE, etc. which makes for more readable code. This also abstracts away lower level stuff into whatever you are interested in. If you wanted to change your keyboard mappings you could do it right there, without changing your external code. Something like this: package { import flash.display.Stage; import flash.events.KeyboardEvent; import flash.ui.Keyboard; public class InputLayer { private var stage:Stage; public var movingLeft:Boolean = false; public var movingRight:Boolean = false; public function InputLayer(stage:Stage) { this.stage = stage; stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown); stage.addEventListener(KeyboardEvent.KEY_UP, onKeyUp); } protected function onKeyDown(event:KeyboardEvent):void { if(event.keyCode == Keyboard.LEFT) { movingLeft = true; } else if(event.keyCode == Keyboard.RIGHT) { movingRight = true; } } protected function onKeyUp(event:KeyboardEvent):void { if(event.keyCode == Keyboard.LEFT) { movingLeft = false; } else if(event.keyCode == Keyboard.RIGHT) { movingRight = false; } } } } And the final class implementing it: package { import flash.display.Sprite; import flash.display.StageAlign; import flash.display.StageScaleMode; import flash.events.Event; import flash.ui.Keyboard; public class KeyboardGameCustom extends Sprite { private var inputLayer:InputLayer; // view private var character:Sprite; // model private var xpos:Number = 200; private var speed:Number = 0; private var direction:Number = 0; public function KeyboardGameCustom() { stage.scaleMode = StageScaleMode.NO_SCALE; stage.align = StageAlign.TOP_LEFT; inputLayer = new InputLayer(inputLayer.movingLeft) { direction = 180; speed = -5; } else if(inputLayer.movingRight) { direction = 0; speed = 5; } else { speed = 0; } } protected function update():void { xpos += speed; } protected function render():void { character.x = xpos; character.rotation = direction; } } } Furthermore, you could perform other basic logic on the input before setting the input layer properties. For example, shift plus left key might set a runLeft property to true. Not the greatest example, but the point being you don’t need a one to one mapping of keys to Boolean values like you do in a generic manager. So, am I eating crow? Yes, I guess I am. I still think the event method I originally described would be best, but the complexity you’d need to introduce to make it robust enough to handle all the intricacies would soon outweigh the potential performance benefits. And I’m also saving a bit of face by maintaining my opinion that generic keyboard managers are not a good idea, and offering a bit of a better solution. Let the discussion begin. Yummy crow I like your new approach much better, but swap movingRight for an x value between 0 and 1, and you can kill the if statement entirely: character.x += input.x * 5; character.y += input.y * 5; character.rotation = input.angle * (180 / Math.PI); I’m going to open source my gamepad class very soon so hopefully nobody will have to think about this stuff again! sorry x value should be between -1 and 1 to handle both directions. Yeah, Iain, that’s essentially what I meant by saying the properties didn’t have to be key mapped Boolean properties. I used something very similar to this on an iPhone game but incorporated touch and accel. So rather than getting a raw touch location, I was able to read the touch as an angle offset from a specific point. Very useful. Disappointing lack of crow to be eaten actually You make a fair point, and i get it now, but i can’t help but feel that the polling overhead is something worth learning to live with. From my (admittedly simple) performance tests, you’ll need more than a thousand dictionary lookups per frame before you even come close to noticing any actual impact, and dictionary lookups can obviously be replaced for more optimized methods. I suppose i want to separate input management from directly altering the model for the sake of order if nothing else, but also for the sake of flexibility. One thing is directional input, where setting properties on a velocity vector on keydown/up is totally doable, but storing things like input buffers for fighting game type applications practically demands a manager of some sort. Esoteric example, but can you handle the Konami code with a purely event based system? I suppose it feels as though this event based approach would be something you’d implement as an optimization at the far end of development if you desperately needed those cycles, rather than something you base your entire workflow on. Overall, for me at least, a keyboard manager with a fair suite of methods, such as getting a unit vector for the arrow keys and an input buffer of a specified range, as well as aliases for common button functions (jumpButton, fireButton, actionButton, whatever), simply feels more natural to work with during game development. If at any point during the game loop i want to see if conditions for an action are met, i’m free to do so. With your event-driven approach, i get the feeling i’ll constantly be rewriting similar code to solve slightly dissimilar problems. So, er, a generic manager i feel is good for general development; a more specialized performance-focused implementation if that level of optimization is needed. [...] Peters put up a comprehensive blog post on why he disagrees with generic keyboard managers. It’s well worth reading. I have to admit, i couldn’t even conceive of input events [...] I’ve always used something like this, which, when both keys are pressed results in no input: function gameLoop(event:Event):void { var rotationInput:Number = Number(movingRight) – Number(movingLeft); trace(rotationInput); } Perhaps a bit dirty with the casting (not required in AS2). Nice post – I’ve always tended towards an approach like you developed with your InputLayer for any kind of action game where you are expecting simultaneous, and possibly conflicting, input. For simpler stuff, like puzzle or word games, I stick with events – it’s way cleaner and easier to disable keyboard input without tampering with any game loops. An independent input layer is definitely the way to go. It’s not just easy for the developer to change the key inputs, but also easier to allow the user to do so. However, I would still use the KeyPoll class in my input layer - public function get moveLeft():Boolean { return keyPoll.isDown(Keyboard.LEFT); } public function get moveRight():Boolean { return keyPoll.isDown(Keyboard.RIGHT); } It’s just a stylistic preference – the interface to the input layer class is the same so the principle doesn’t change at all. Funny – I thought the most controversial part of the presentation was suggesting that model and view should be one and the same for Flash games (at least when dealing with Sprites/MovieClips). Maybe another post… Personally speaking, I find polling to be more responsive and use it for more time critical games and save event listening for the less “gotta move now to get outta the way” moments. I wonder though, if you go for the InputLayer approach (as would probably be best), is it really more CPU efficient? Either way, you’re testing for a Boolean during the game loop (inputLayer.movingRight OR keyPoll.isDown(Keyboard.RIGHT)). My keyboard class uses a third approach, in which you register each key and give them callback methods to run when the key is first pressed, when it is held down (the class can be set to handle its own timing or use the game loop) and when it is released. Keys can be grouped in order to allow mutually exclusive inputs, eg left and right. Keys can be disabled individually, by groups or all at once and re-registered easily at runtime. Of course it still uses the keyboard events to detect when keys are pressed and released, but there’s really no way round that! I kind of made it as an experiment and it seems to work pretty nicely, but if you can think of any problems with this approach I’d love to hear them Devon, yeah, I thought the view/model combo would be more controversial as well, but everyone I talked to was glad I brought it up as a possibility. As far as the polling vs. events stuff, I knew I was on shaky ground there, but felt it was worth bringing up. If I had had more time to spend on it, I’m sure I would have presented what I did in this post. Not even sure why I went down that path, as I’m always one to fight back against premature optimization. We all have our fixed ideas I guess. But overall, I’m glad it worked out as it did. I have more certainty, a clean architecture, and lots of agreement now. Also, if you didn’t see the presentation, just the slides, they may be a bit misleading. I’m not out and out advocating collapsing the view and model in games. Just pointing out that it might not be such a sin in many cases. Flash’s rendering system is quite different from something like OpenGL or DirectX. If you are using Sprites or Movie Clips, you just put some content in them and position them and leave them there, move them as needed, etc. So in this case you can wind up with a hierarchy of model objects that has to remain in sync with a nearly identical hierarchy of display objects. Adding new objects or removing them from the model creates complexity in updating the display side of things, and if you are using any kind of display object or bitmap hit testing, then the model has to know about the view, anyway. Thus, having the Sprites and MCs BE the model objects can significantly simplify your architecture. It was mentioned as an idea to consider and not to shy away from it because it’s “wrong” to do that. Why not combine the best of both worlds? you could use the KEY_DOWN event and KEY_UP events to create a count — if that count is greater than 0, run the poll; otherwise, ignore it completely. This way, you don’t need to create a custom keyboard handler for each game like you mentioned above… something like this: public class KeyboardGameNoPoll extends Sprite { // view&model code hidden keyPoll = new KeyPoll(stage); addEventListener(Event.ENTER_FRAME, gameLoop); stage.addEventListener(KeyboardEvent.KEY_DOWN, onKeyDown); stage.addEventListener(KeyboardEvent.KEY_UP, onKeyUp); } protected function onKeyDown(event:KeyboardEvent):void { keyCount++; } protected function onKeyUp(event:KeyboardEvent):void { keyCount–; } protected function gameLoop(event:Event):void { if (keyCount>0) input(); update(); render(); } protected function input():void { //calculate speed for each direction, and combine to get //sum of directional speed } Wouldn’t it be an option to combine polling and events? As soon as the user hits the keyboard you catch the event and set the key as pressed. Your controller can catch the same event (or one dispatched by the keyboardmanager). There you can poll what keys are down and update the model/view. 1. User presses right key and holds it. ==> Character starts moving right. 2. User presses left key and holds it, while still holding right key. ==> Left and right are down so character stops. 3. User releases either key, while still holding the other. ==> Character will continue the opposite way I usually use something like this: if(int(movingLeft) ^ int(movingRight)){ if(movingLeft) { // move left } else { // move right } } Instead of your suggestion: if(movingLeft) { // move left } else if(movingRight) { // move right } else { // dont move } I don’t know, maybe your approach has better performance because of my int-casts… I just think it’s cleaner that way By the way, in your example the left-button has priority over the right-button. If you hold down both buttons the player will walk left and skip through the rest of your if-statement. This is not an issue in my approach, if you hold down both buttons the player wont move at all (which seems pretty logical to me). Oh and sorry for the double-post! have you ever programmed games before? it doesn’t look like it. in my experience, polling and events are generally used for different things, becase they each have strengths and weaknesses and neither implementation can solve all possible problems/uses for detecting input. for example in an rpg game i’m making with a friend (in java btw, so sorry, no flash/actionscript here, but the idea is the same) and we use both polling and events for detecting keystate and mousestate changes. without going into to much detail, in java the only way to get input it by events, so we use keylisteners for all keyboard input. then for example toggle buttons, like in our editor, pressing the tilde key toggles rendering the entire map or the current layer we are on. if we used polling here, the toggle button would (and it does, ive tried it) switch back and forth on every call to update(), resulting in flickering as the map changes between rendering modes, and never knowing what state you end up in, as being able to watch and release your finger on the exact 1/60th second interval is rather hard, whereas the key events wait a certain time before repeating the keystroke. on the contrary, for moving the map, and by extension, moving ANY player controlled object in ANY game, using events is a poor choice because as we all know (using my rpg as an example, we will assume we are controlled the movement of the characte), if i pushed the up button, the character would jump up the specified player move distance in one frame, which is rather unimpressive and unimmersive. if using an animation between squares, then we are limited to moving to discrete tile square, such as in pokemon and many other games. however, using polling, and checking on every frame which key is pressed down, means we can move the character a little each frame, thus resulting in smooth motion, and all the while we can still play our walking animation if we are so inclined. so as you can see, polling and events are used for 2 similar but different things. also, the code you posted here (shown below, sorry i dunno how to use code tags here) would never happen, even if you used polling and events simultaneously, like we do in our rpg and other programs. no single key (or any other input event for that matter) should simultaneously use events and be polled. it just makes no sense. even if you needed to use polling and events for the same key, it wouldn’t be in the same part of the program, and so you would write the polling and events parts seperately and not have to worry about clashes.; } } nonetheless i program mainly in java and so my comments may not apply to flash/actionscript but looking at your code it is very similar to both the syntax and OO paradigm of java. [...] I ran into this issue I couldn’t find any information about it, but is partly why Keith is eating crow ;) However in my simple implementation it was easy enough to removeEventListener on KEY_DOWN and [...] Your InputLayer class performs the same functionality as a polling class (“a rose by any other name…”). Your post is a good argument for using a polling class. The idea is to make an easily reusable class once, and then use it all over the place. You cited performance as an issue, but a good input-polling class wont be the issue. It’s better to simplify your UPDATE and RENDER methods, as they’re going to have a heavier processing load. No offense man, but this really smacks of premature optimization. I do like your approach best, because I agree that a keyboard manager class is an unnecessary layer of abstraction on top of the keyboard events Flash already provides. However, if the keyboard manager class is easiest for you then just use that and don’t stress out about it’s performance impact. No, you’re right. I’ve been reformed. To summarize, for performance you’d prefer your initial idea, but for general development you’re okay with polling but prefer a more domain specific interface to the keyboard input management. Right?
http://www.bit-101.com/blog/?p=2406
CC-MAIN-2016-50
refinedweb
4,003
63.39
Cards into learning the language that I can get past most of those negatives. Here I present a classic tale of “life with Scala”. The problem. res1: Iterator[List[Int]] = non-empty iterator scala> List(1,2,3).inits.toList res2: List[List[Int]] = List(List(1, 2, 3), List(1, 2), List(1), List()) scala> List(1,2,3).tails.toList res3: List[List[Int]] = List(List(1, 2, 3), List(2, 3), List(3), List()) The results include the empty collection, but as it happens that doesn’t suit my needs. I want a version that doesn’t return the empty list. What to do? Options include: - Call the existing inits method then strip off the last item. However since we get an iterator this isn’t quite so trivial – we can’t just do inits.init - Write a new custom method to do just what I want and return a collection, not an iterator.. res7: List[Traversable[Int]] = List(List(1, 2, 3, 4), List(1, 2, 3), List(1, 2), List(1)) But Arrays mess it all up This is all looking very promising and straightforward, but it took a fair bit of banging my head against the desk to figure out why the compiler wouldn’t let me call it on an Array. :9: error: value nonEmptyTails is not a member of Array[Int]. res10: Iterator[Traversable[Int]] = non-empty iterator Wrap-up. Just to be clear…: res16: List[Array[Int]] = List(Array(1, 2, 3), Array(1, 2), Array(1)) It’s not quite as pithy as having a nonEmptyTails method, but in all other senses it’s probably superior. 2 Comments Nice! I note that the return type of your nonEmptyTails function is Iterator[Traversable[T]] on all traversable types. If you wanted it to preserve type iteration and be an Iterator[List[T]] on a List[T], Iterator[Seq[T]] on a Seq[T], etcetera, you could try the following implementation: import scala.collection.TraversableLike implicit class TraversableExtras[A, Repr <: TraversableOnce[A]](t: TraversableLike[A, Repr]) { def nonEmptyTails: Iterator[Repr] = { val bufferedInits = t.inits.buffered new Iterator[Repr] { def hasNext() = bufferedInits.head.nonEmpty def next() = bufferedInits.next() } } } TraversableLike has a second type parameter (called Repr here) which is the type of the collection itself, therefore we can return a value of that type rather than a plain Traversable, in order to preserve the type information in the result. Note that this also means that we have to introduce a constraint that Repr is a subtype of TraversableOnce here – this is so the compiler can prove that the nonEmpty operation is available on it. Thanks! Tim Thanks Tim – that’s really helpful. It was that second type parameter of TraversableLike that originally put me off and had me go with Traversable 🙂
https://tech.labs.oliverwyman.com/blog/2014/02/10/a-study-in-scala/
CC-MAIN-2020-05
refinedweb
468
63.19
DSA_NEW(3) OpenSSL DSA_NEW(3) DSA_new, DSA_free - allocate and free DSA objects #include <openssl/dsa.h> DSA* DSA_new(void); void DSA_free(DSA *dsa); DSA_new() allocates and initializes a DSA structure. It is equivalent to calling DSA_new_method(NULL). DSA_free() frees the DSA structure and its components. The values are erased before the memory is returned to the sys- tem. If the allocation fails, DSA_new() returns NULL and sets an error code that can be obtained by ERR_get_error(3). Other- wise it returns a pointer to the newly allocated structure. DSA_free() returns no value. dsa(3), ERR_get_error(3), DSA_generate_parameters(3), DSA_generate_key(3) DSA_new() and DSA_free().
http://mirbsd.mirsolutions.de/htman/i386/man3/DSA_new.htm
crawl-003
refinedweb
103
61.22
Groovy merge two lists? I have two lists: listA: [[Name: mr good, note: good,rating:9], [Name: mr bad, note: bad, rating:5]] listB: [[Name: mr good, note: good,score:77], [Name: mr bad, note: bad, score:12]] I want to get this one listC: [[Name: mr good, note: good,, rating:9, score:77], [Name: mr bad, note: bad, rating:5,score:12]] how could I do it ? thanks. Answers collect all elements in listA, and find the elementA equivilient in listB. Remove it from listB, and return the combined element. If we say your structure is the above, I would probably do: def listC = listA.collect( { elementA -> elementB = listB.find { it.Name == elementA.Name } // Remove matched element from listB listB.remove(elementB) // if elementB == null, I use safe reference and elvis-operator // This map is the next element in the collect [ Name: it.Name, note: "${it.note} ${elementB?.note :? ''}", // Perhaps combine the two notes? rating: it.rating?:0 + elementB?.rating ?: 0, // Perhaps add the ratings? score: it.score?:0 + elementB?.score ?: 0 // Perhaps add the scores? ] // Combine elementA + elementB anyway you like } // Take unmatched elements in listB and add them to listC listC += listB The subject of the question is somewhat general, so I'll post answer to a simplier question, if anyone got here looking for "How to merge two lists into a map in groovy?" def keys = "key1\nkey2\nkey3" def values = "value1,value2,value3" keys = keys.split("\n") values = values.split(",") def map = [:] keys.eachWithIndex() {param,i -> map[keys[i]] = values[i] } print map import groovy.util.logging.Slf4j import org.testng.annotations.Test @Test @Slf4j class ExploreMergeListsOfMaps{ final def listA = [[Name: 'mr good', note: 'good',rating:9], [Name: 'mr bad', note: 'bad',rating:5]] final def listB = [[Name: 'mr good', note: 'good',score:77], [Name: 'mr bad', note: 'bad', score:12]] void tryGroupBy() { def listIn = listA + listB def grouped = listIn.groupBy { item -> item.Name } def answer = grouped.inject([], { candidate, item -> candidate += mergeMapkeys( item.value )}) log.debug(answer.dump()) } private def mergeMapkeys( List maps ) { def ret = maps.inject([:],{ mergedMap , map -> mergedMap << map }) ret } } Need Your Help Facebook Unity SDK how to add like button for (for liking Facebook page)? facebook unity3d facebook-like facebook-unity-sdkWe want to add "like our Facebook page and get reward" to our Unity game. input type=image name and value not being sent by ie and opera image forms internet-explorer inputWhen a form has multiple image inputs and the server side uses their names and/or values to distinguish which one was clicked, it works perfectly in FireFox. However, people often write the whole t...
http://www.brokencontrollers.com/faq/2598152.shtml
CC-MAIN-2019-26
refinedweb
436
66.03
A (If you aren't familiar with language oriented programming and language workbenches - or at least my usage of the terms, you should read my outline article on Language Workbenches. This article discusses an example of using a language workbench and assumes you'll be familiar with the concepts I discussed in that article.) One of these language workbenches is the Meta-Programming System (MPS) from JetBrains. As I was writing my language workbench article I wanted to include a more substantial example with in an actual language workbench to help give you a better picture of what working with such a tool would be like. The example ended up being a big one to describe, so I decided to break it out into its own article. I decided use MPS not because of any opinions about which language workbench is the best (after all they are still in very early stages of development), but simply because that the JetBrains office is just down the road from where I live. A short distance to drive makes collaboration much easier. So as you read this, remember that only part of the point is to look at MPS. The real point of the article is to give you a feel of what this class of tools is like to use. On the surface each tool is quite different - but they share many underlying concepts. JetBrains have opened up the MPS in their Early Access Program, which allows people to download a development version of MPS to play with it. You'll find the example from this article in there. Remember, however, that the tool is still very much under development so what you see when you look at it now may be very different from what I see as I write this. In particular certain screens may have changed and I don't expect to keep the screenshots here up to date with each change. There are also numerous rough edges which are typical of a new kind of tool that still being worked on. However I think it's still worth looking at because the principles are what counts. Agreement DSL This example uses a pattern that I've come across several times, which I now call Agreement Dispatcher. The idea behind an agreement dispatcher is that a system receives events from the outside world and reacts to them differently due to various factors, of which a leading one is the agreement between the host company and the party that the event was about. Perhaps the easiest way to talk about this further is to show an example of the DSL I'll be using as an example. This piece of DSL indicates how a notional utility company reacts to events for customers on its regular plan. The agreement definition consists of values and event handlers, both of which are temporal - their values change over time. This agreement has one value - the base rate that the customer is charged for electricity. From 1 Oct 1999 it was set at $10 per KwH, on the 1st December it was raised precipitously for $12/KwH. The agreement shows reactions to three kinds of events: usage (of electricity), a service call (such as someone coming in to fix a meter), and tax. The handlers are temporal in the same way as the base rate; we can see that the handler for service calls also changed on December 1st. The handler indicates a simple reaction - the posting of some monetary value to an account. The account is stated directly in the DSL, the amount is calculated using a formula. The formula can include values defined in the agreement together with properties on the event. Usage events include a usage property that indicates how many KwH of electricity were used in this billing period. The posting rule for the USAGE event indicates than when we get a usage event we post the product of this usage and the base rate to the customer's base usage account. Figure 2 shows a second agreement, this one for low paid people on a special plan. The only interesting addition to this is that usage formula here involves a conditional, expressed using Excel syntax. The first thing to note about these fragments is that they are very domain oriented and readable in terms of the domain. Although the COBOL inference hangs over me, I'd venture to say they are readable to a non-programmer domain expert. These DSL fragments will generate code that fits into a framework written in Java, indeed these DSLs describe the same scenarios that I used in the description of agreement dispatcher. For the sake of comparison, here's the same configuration code written in Java. public class AgreementRegistryBuilder { public void setUp(AgreementRegistry registry) { registry.register("lowPay", setUpLowPay()); registry.register("regular", setUpRegular()); } public ServiceAgreement setUpLowPay() { ServiceAgreement result = new ServiceAgreement(); result.registerValue("BASE_RATE"); result.setValue("BASE_RATE", 10.0, MfDate.PAST); result.registerValue("CAP"); result.setValue("CAP", new Quantity(50, Unit.KWH), MfDate.PAST); result.setValue("CAP", new Quantity(60, Unit.KWH), new MfDate(1999, 12, 1)); result.registerValue("REDUCED_RATE"); result.setValue("REDUCED_RATE", 5.0, MfDate.PAST); result.addPostingRule(EventType.USAGE, new PoorCapPR(AccountType.BASE_USAGE, true), new MfDate(1999, 10, 1)); result.addPostingRule(EventType.SERVICE_CALL, new AmountFormulaPR(0, Money.dollars(10), AccountType.SERVICE, true), new MfDate(1999, 10, 1)); result.addPostingRule(EventType.TAX, new AmountFormulaPR(0.055, Money.dollars(0), AccountType.TAX, false), new MfDate(1999, 10, 1)); return result; } MultiplyByRatePR(AccountType.BASE_USAGE, true), new MfDate(1999, 10, 1)); result.addPostingRule(EventType.SERVICE_CALL, new AmountFormulaPR(0.5, Money.dollars(10), AccountType.SERVICE, true), new MfDate(1999, 10, 1)); result.addPostingRule(EventType.SERVICE_CALL, new AmountFormulaPR(0.5, Money.dollars(15), AccountType.SERVICE, true), new MfDate(1999, 12, 1)); result.addPostingRule(EventType.TAX, new AmountFormulaPR(0.055, Money.dollars(0), AccountType.TAX, false), new MfDate(1999, 10, 1)); return result; } } The configuration code isn't exactly the same. The posting rule carries a taxable boolean marker that we haven't added to the DSL yet. In addition the formulae are replaced by various Java classes that can be parameterized for the most common cases - this if often better than trying to dynamically create formulae in a Java solution. But I think the basic message comes across - it's much harder to see the domain logic in the Java, because Java's grammar does get in the way. This is particularly so for a non-programmer. (If you're interested in how the resulting framework actually works, take a look at the agreement dispatcher pattern - I'm not going to go into it here. The example in that pattern is similar, but not exactly the same.) You may have noticed that the DSL examples used screen shots rather than text - and that's because although the DSLs look like text they aren't really text. Instead they are projections of the underlying abstract representation, projections that we manipulate in the editor. Figure 3 indicates this. Here I'm adding new base rate. The editor indicates the fields I need to fill in, putting in appropriate values as required. I don't actually type much text - often my main task is picking from pick lists. At the moment the date goes in as structured figures, but in a fully developed system you could use a calendar widget to enter the date. One of the most interesting elements of this is the use of excel-style formulae in the plan. Here's the editor as I add a term to a formula. Notice that the pop up includes various expressions you might want in a formula, plus the values defined in the plan, plus the properties on the event that's being handled in this context. The editor is using a lot of knowledge of the context to help the programmer enter code correctly - much as post-IntelliJ IDEs do. Another point about the formulae is that they come from a separate language from the language used to define agreements. So any DSL that needs to use excel-like formulae can import formulae to their language without having to create all the definitions for themselves. Furthermore these formulae can incorporate symbols from the language that's using the formula language. This is a good example of the kind of symbolic integration that language workbenches strive for. You need to be able to take languages defined by others, but at the same time weave them as seamlessly as possible into your own languages. (As a point of full disclosure, this formula language was in fact written in response to developing this example, but it is separated so it can be used by other languages. This is an accident of the fact we are seeing a tool in development, together with MPSs development philosophy: find interesting applications of MPS and use the needs of these applications to drive the features and design of MPS. This is a development philosophy I favor.) That last screen-shot shows another important point. You'll notice that I didn't finish working on the new rate when I switched over to the formula. One of the past problems with these kind of intelligent, or structured, editors is that they couldn't deal with incorrect input. Each bit of input needs to be correct before you move on. Such a requirement is a big usability problem. When programming you need to be able to switch around easily - even if that means leaving invalid information in place. The consequence of this, for a projecting editor, is that you need to be able to handle invalid information in your abstract representation. Indeed you want to be able to do this and still be able to function as much as possible. In this case one option would be to generate code from the plan, ignoring those temporal elements that are in error. This kind of robust behavior in the face of wanton invalidity is an important feature of language workbenches. The example in MPS here uses a text-like projection. MPS, thus far, focuses on this kind of projection. In contrast the Microsoft DSL tools focus on a graphical projection. I expect that as tools develop they will offer both textual and graphical projections. Despite the modeling crowd's obsession with saying "a picture is worth a thousand words", textual representations are still very useful. I would expect mature language workbenches to support both textual and graphical projections - together with projections that many people don't think as programming environments Defining the Schema Now we can see what the language looks like, we can take a look at how we define it. I'm not going to go through the entire language definition here, I'm just going to pick out a few highlights to give you a feel how it works. Figure 5 shows the schema for the plan construct. (I've also shown on the left the list for other concepts in this agreement language.) If you've done any data modeling, or in particular meta-modeling, this shouldn't have any surprises. I'm not going to explain all elements of the definition here - only the highlights. As usual, remember that this is currently in flux, it probably doesn't look quite like this any more. We define a concept, allow it to extend (inherit from) other concepts. We can give our concept properties and links (similar to attributes and relationships) both at the instance and concept (class level). With links we indicate the multiplicity (in both directions), and the target concept. So in this case we see that a plan is made up of multiple values and events, each of which have their own definitions. Figure 6 shows the definition for event, which is pretty simple. We get something new in the posting rule temporal property. Both values and posting rules end up being governed by this kind of temporal rule, so it makes sense to factor out the common ability to have date-keyed logic. So we have both a temporal property definition ( Figure 7) and extend that with a temporal property for posting rules ( Figure 8). In this case the temporal property defines the notion of a validity date and a value. The posting rule temporal property extends this - but does so in a way that's slightly different to inheritance in an object-oriented language. Rather than adding new link, it specializes this value link saying that it can only link to posting rules. This is similar to what you would achieve with generics in programming language. This idea of specializing a relationship is present in several modeling languages (including UML). I didn't find it terribly useful for most modeling but it is rather handy for meta-modeling. You can think of it as a particular form of constraint. Finally I'll show how the posting rule itself is defined. It extends a concept called formula, which is actually part of a separate formula language. So from this you can get a sense of what's involved to set up a schema in MPS. For each concept you edit the definition making the various links between the elements. I suspect that a data model or UML like class diagram would work better here - this is the kind of thing that works nicely in diagrammatic form. However this style of editor also works pretty well and can allow you to enter a new language schema fairly rapidly. If you're thinking what I hope you're thinking, you'll have noticed something else. The screens for editing schemas look awfully like the screens for editing the DSL. As you may guess there is a DSL for editing schemas - called the structure language in MPS. I edit the schema using the editor that's part of the that DSL. This kind of metacircular bootstrapping is common in language workbenches. Building Editors Now lets take a look at how we define editors in MPS. Figure 11 shows the editor for a plan. In general we build an editor for each concept in our model. (It's not quite one for one, but that's a good place to start thinking.)To define the editor for plans we use the editor for editors (it's getting hard to avoid the metacircularity here.) Editors are defined as a hierarchy of cells. The leaves of the hierarchy can be constants, or references to elements in the the schema. The editor editor (which we are looking at now) uses some symbols to help delimit parts of the editor. Although these symbols are a bit cryptic it's important not to get worried about notation with language workbenches since notation is very easy to change. The top of this cell hierarchy is a cell collection for the whole editor. I select this by selecting the '[/' cell. When you're working with the editor editor, the inspector frame (bottom left) which we didn't use earlier now becomes important. The inspector is used in the same way that property editors are used in GUI builders. Here the inspector shows that we have a vertical cell collection. The sub-cells are: - The row beginning [> plan - A blank row - The row including % value %and its following row. - Another blank row - The row including % event %and its following row. As you can see, one of the problems with this projection is that it's hard to figure out the actual cell hierarchy. It's also questionable to use blank cells to show indentation and whitespace. I expect there'll be a good bit more work on how to make editor editors more usable in the future. The three non-blank rows correspond the line naming the plan, the lines of values, and the lines of events in the plan editor. Figure 1: Here's the example for regular plans again. See how the three non-blank lines in the plan editor correspond to three content areas in the plan: name, values, and events. Now I'll dig into the first of these content areas, the name of the plan. It helps that this is the simplest area to dig into, but even so it's hard to describe it in an article like this because the editor editor uses the inspector to provide a lot of information - as a result I need to use a lot of screen-shots. The name of the plan appears in a single cell within the overall cell collection for the plan editor. This is cell is a cell collection, this time a horizontal collection, of two sub-cells: a constant and a property. (The editor editor indicates a vertical cell collection by [/ and a horizontal collection by [>.) The constant is just the work plan. You can use constant cells to place any markers or hints into an editor. You also use blank constant cells to do layout such as blank lines and indentation. The delimiters in the plan editor (such as [/ and [>) are also constants defined in the editor for editors. We show the name of the plan with a property cell. The property can be any property on the concept that the editor is defining for. Here I show I'm editing the property field on the inspector and I have a pop up showing all the properties on the plan concept - in this case there's only one. The blank lines in the editor are simple constant cells. The value and event lines involve sub-editors. I'll skip over the values row and dig into the events. The event row is a cell in the vertical cell collection that is itself a horizontal cell collection with two sub-cells: a blank constant cells and a ref node list cell marked with (>. A link node has a rather more complex inspector than the other nodes we've seen so far, but what interests us here is two bits of information. As you might guess from the name, ref node list cells will list elements based on following a link in the schema. The editor tells which link to follow and that the list should be made vertically. In Figure 17 I'm showing the pop up for choosing the link (there actually is a choice this time) in the editor pane itself. I could also do it in the inspector. The editor pane shows the link name inside % delimiters. This little example suggests an interesting question when you define editors: should you edit using a separate inspector or directly in the editor pane itself? Keeping stuff out of the editor pane allows you to get an overall structure in the editor pane and to better the see the relationship between the editor definition and the resulting use of the editor. However if you put everything in inspectors you're constantly digging around to see what what's the the cells. This is part of the justification for terse markers (like the [/ and [>). You can click on the markers to see what they are in the inspector, but as you get used to that particular editor you get used to reading the editor pane directly. As you get used to it the terseness is helpful as it allows your eye to see more in less space. You could also image multiple editors for different purposes, some to suit people's experience, others just to suit people's preferences. For example the intentional editor often allows you to switch quickly between different projections according to your preferences. When editing nested tables like this, you can choose to switch between nested tables (called boxy), a lisp like representation (lispy), or a tree view with properties (no cute name). To edit conditional logic there is a C-like programming language view, or a tabular representation. This quick switching between projections is useful because often you can see different aspects of a problem in different projections, so you can often understand more from easy changes between simple projections. But let's get back to the example. We've seen that the plan editor has a cell that lists events vertically. How do we edit those events? At this point we switch over the event editor. Our final tool will embed these event editors in the plan editor. Before we look at the editor, let's refresh ourselves on the schema for event. Here's the definition for the event editor, just using what's in the edit plane. If your eyes are getting use to the terse symbols, you should be able to get most of this without using the inspector. Essentially we have a vertical cell collection with two elements. The bottom of the two is a ref node list cell to list the posting rule temporal properties which will use the editor defined for that concept. The top cell however, shows some stuff we haven't seen yet. The top cell is a horizontal cell collection. It has two sub-cells. The left sub-cell is a constant cell with the word 'event' - nothing new there. The new element is the second cell which is a ref node cell. Ref node cells are similar to ref node list cells, but are for cases where the referred to link is single valued - as it is here for the event type. The ref node cell itself has two parts. The first indicates which link to follow, in this case type. The second indicates which property of the target to display. This is an optional piece - had we left it out the event type would render using its regular editor. Here we are indicating that rather than do that, we just want to render a single property: the name of the type. Now lets look at the editor definition for posting rules. In the example plan ( Figure 1, we see that the editor shows the effective date of the rule followed by the details of the rule itself. Here's the editor definition: This time the root cell is a horizontal cell collection with three sub-cells: a ref node cell for the date, a constant cell for the ":", and another ref node cell for the posting rule itself. Both the date and posting rules are rendered with their own editor. The last editor I'll show is the posting rule editor. Hopefully by now this is almost familiar. The root is a vertical cell collection with two horizontal cell collections as sub-cells. The top cell has the constant "amount:" and a ref node for the expression. The expression is rendered by the editor for expressions which is part of the formula language. The bottom cell has the constant "account:" followed by a ref node for the account which shows the name property of the account. Describing an editor like this in text is awkward, at some point a screen cast of using the editor might be easier to follow. The editor editor is a bit awkward to get used to. This is partly because I'm not used to defining editors, partly its because more work is needed to make the editor editor usable. This is new territory so JetBrains is still learning how this kind of thing should work. The important things to come out of this are that you need a lot of flexibility to define editors so they are as clean as the final plan editor turns out to be. To provide this flexibility you end up with a complex editor for editors. Although there's probably much that can be done to make these more usable, I suspect that it will still take some effort to define an editor that works well for a language. However since the editor is closely integrated with the other elements of a DSL, it's relatively easy to experiment and change editor definitions to explore the best editor. The interplay here between the main edit window and the inspector reveals another point about editors for more complex DSLs such as an editor language. Rather than trying to get all your editing through a single projection, it's often best to use multiple projections that show different things. Here we see the overall structure of the editor in the main editor pane, and lots of details in the inspector. When designing the editor, you can move different elements between different panes. In this case I can see that the third pane, showing the hierarchy of cells, would provide a useful third projection that would complement the inspector and wisiwigish main editor pane. Defining the Generator The last part of the trio is to write the generator. In this case we'll generate a java class that will create the appropriate objects using the framework we currently have. This plan builder class will create an instance of the service agreement class for each each plan we've defined using the DSL. The code that we'll generate will look a little different to the java equivalent code we saw earlier. This is because of the way we're going to handle the calculation formulae. In the pure java version I used parameterized but limited formula classes to set the formulae. In this version the formulae are supplied by the formula language. Here's the edit pane projection of the generator definition Here's the code it generates: (I've added some line breaks to help format it for the web page.) package postingrules; /*Generated by MPS*/ import postingrules.AgreementRegistry; import postingrules.ServiceAgreement; import postingrules.EventType; import postingrules.AccountType; import jetbrains.mps.formulaLanguage.api.MultiplyOperation; import jetbrains.mps.formulaLanguage.api.DoubleConstant; import jetbrains.mps.formulaLanguage.api.IfFunction; import formulaAdapter.*; import mf.*; public class AgreementRegistryBuilder { public void setUp(AgreementRegistry registry) { registry.register("regular", this.setUpRegular()); registry.register("lowPay", this.setUpLowPay()); } PostingRule_Formula(AccountType.BASE_USAGE, true, new MoneyAdapter(new MultiplyOperation( new ValueDouble("BASE_RATE"), new UsageDouble()),(10.0,(15.0, Currency.USD))), new MfDate(1999, 12, 1)); result.addPostingRule( EventType.TAX, new PostingRule_Formula(AccountType.TAX, false, new MoneyMultiplyOperation(new FeeMoney(), new DoubleConstant(0.055))), new MfDate(1999, 10, 1)); return result; } public ServiceAgreement setUpLowPay() { ServiceAgreement result = new ServiceAgreement(); result.registerValue("BASE_RATE"); result.registerValue("REDUCED_RATE"); result.registerValue("CAP"); result.setValue("BASE_RATE", 10.0, MfDate.PAST); result.setValue("REDUCED_RATE", 5.0, MfDate.PAST); result.setValue("CAP", new Quantity(50.0, Unit.KWH), MfDate.PAST); result.setValue("CAP", new Quantity(60.0, Unit.KWH), new MfDate(1999, 12, 1)); result.addPostingRule( EventType.USAGE, new PostingRule_Formula(AccountType.BASE_USAGE, true, new IfFunction<Money>( new QuantityGreaterThenOperation(new UsageQuantity(), new ValueQuantity("CAP")), new MoneyAdapter( new MultiplyOperation(new ValueDouble("BASE_RATE"), new UsageDouble()), Currency.USD), new MoneyAdapter( new MultiplyOperation(new ValueDouble("REDUCED_RATE"), new UsageDouble()), Currency.USD))), new MfDate(1999, 10, 1)); result.addPostingRule( EventType.SERVICE_CALL, new PostingRule_Formula(AccountType.SERVICE, true, new MoneyConstant(10.0, Currency.USD)), new MfDate(1999, 10, 1)); result.addPostingRule( EventType.TAX, new PostingRule_Formula(AccountType.TAX, false, new MoneyMultiplyOperation(new FeeMoney(), new DoubleConstant(0.055))), new MfDate(1999, 10, 1)); return result; } } As usual I'll pick some bits of the generation to walk through, without going into all of it. In particular the generated code from the formula is a rather ugly interpreter formula. This needs to be cleaned up and we hope to do that in the near future. As with any template language, MPS's generator language allows you to write the class in template form with parameter references. One big difference with a language workbench is that you're using a projectional editor to define the template. So we can create a projectional editor for java class generation that knows about java syntax and uses this information to help you in your template generation. Here you see the generator editor has supplied markers for the various kinds of elements we see in java programs. This one only has methods so these others are unused. MPS's generator language uses two kinds of parameter references: property macros (marked with $) and node macros ( $$). Property macros interrogate the abstract representation and return a string to be inserted into the templated output. Node macros interrogate the abstract representation and return further nodes for more processing. Typically you'll use node macros to handle the equivalent of loops in other templating systems. Both types of macro are implemented by java methods in supporting java classes. In time the MPS team wants to replace java by a DSL that's designed to query the abstract syntax for generation, but for the moment they use java code. The property macro is shown with references like $[_registryBuilder_]. Selecting the $ allows you to see in the inspector what java method is invoked by the macro. Integration between MPS and JetBrains's IntelliJ Java IDE allows me to hit the traditional IntelliJ <CTRL>-B and go the definition of the macro in Java public static String propertyMacro_RegistryBuilder_ClassName(SemanticNode sourceNode, SemanticNode templateNode, PropertyDeclaration property, ITemplateGenerator generator) { return NameUtil.capitalize(generator.getSourceModel().getName()) + "RegistryBuilder"; } As you can see this is a pretty simple method. Essentially all it does is concatenate the name of the model that we're working on with "RegistryBuilder" to synthesize the class name. This kind of thing allows you to synthesize various strings to insert in the generated code. While you're in the method you have access to various parts of the abstract representation: both of the agreement DSL and the generator DSL. - sourceNode is the current node in the source language - in this case the agreement language. - templateNode is the current node in the generator language, in this case the current node from the generator definition for builders - property is the current property to which we're applying the macro - property declaration is the declaration (from the schema) for this property. - generator is the current generator instance - this links in with the current project and models. You can see in the editor projection that this parameter reference has a name in the editor projection: _registryBuilder_. This is a label that allows multiple references in the editor. You can see an example of this later in the template. Each agreement is built with a separate method ( setUpRegular() and setUpLowPay()). These need to be called from the overall setup method. So the names of these methods have to be referenced from both the method definition and the call. The label _setUp_plan_ allows us to do that. In Figure 22 you can see the label both within the repeating lines of the setUp method and as the method name in the template that's generated for each method. Indeed, since the template editor is a projectional editor, we can get pop menus to help us choose these labels when we need them. Since the editor knows we are building a template for a java program, it can use this information to help us edit in the projection. The second kind of macro we can see is a node macro. Node macros appear in the editor as $$[more template code]. The template code enclosed in the brackets is applied to each node returned from the macro. Here's the screen for the our agreement creation methods. This links to the following java code. public static List<SemanticNode> templateSourceQuery_Plans(SemanticNode parentSourceNode, ITemplateGenerator generator) { List<SemanticNode> list = new LinkedList<SemanticNode>(); List<SemanticNode> roots = generator.getSourceModel().getRoots(); for (SemanticNode node : roots) { if (node instanceof Plan) { list.add(node); } } return list; } As you see, while a property macro returns a string, a node query returns a list of semantic nodes - in this case it walks through the roots of the abstract representation and returns all the plan nodes there. The generator will then generate the enclosed defined code for each plan. (In this way it acts rather like the looping directive in VTL). When you're inside a node macro, the enclosed template is applied once for each node returned by the macro - setting the sourceNode argument to that node. So when we name the method later on we can use the following bit of java. public static String propertyMacro_Plan_SetUpMethod_Name(SemanticNode sourceNode, SemanticNode templateNode, PropertyDeclaration property, ITemplateGenerator generator) { Plan plan = (Plan) sourceNode; return "setUp" + plan.getName(); } Since the source node is a plan node, and the plan's schema has a name which is a string, we can just use the name to generate the name of the method. The rest of the template works in essentially the same way. Either you obtain properties from your current source node, or you use a node macro to get another node to work on. Defining template in MPS is really very similar to traditional template based approaches. Again we have an abstract representation that we query inserting the results in the generated code. The main difference visible from this example is that we are able to build projectional editors for different kinds of templated output - a java class in this case. Summing Up I hope this example gives you a feel of what it's like to use a language workbench - even if it's still somewhat of an embryo. In many ways this example is mostly lacking in the respect that it's still rather like a traditional textual DSL. As I suggested in language workbench, I think the really interesting DSLs will actually by quite different. But part of the the nature of this work is that we can't really see what they'll look like yet. For articles on similar topics… …take a look at the tag:
https://martinfowler.com/articles/mpsAgree.html
CC-MAIN-2017-04
refinedweb
5,585
54.73
You can subscribe to this list here. Showing 25 50 100 250 results of 48 It appears that helix doesn't support transparency yet, nor does curve upon which helix is based. (The helix object is coded in Python, not C++; see the module "primitives.py" in the visual folder.) And I think it's the case that currently the only objects that handle textures are box and sphere. Thanks for the reminder about axes. You're right, it would be good to have this object. Bruce Sherwood Robert Beichner wrote: The Mac download instructions at vpython.org have been improved to reflect the fact that the Python 2.3 version of Visual 2.9 (Fink package visual-py23) can be installed quickly and easily. It's only if one needs to base VPython on Python 2.4 that you have to go through a very lengthy process. Bruce Sherwood Thanks much for the feedback. There are indeed a variety of bugs in the new VPython, plus one necessary "feature". The "feature" is this: the new transparency capability is an example of "go fast, be wrong". It's wonderful to have transparency, but it is not difficult to create a scene where transparency doesn't work properly. Non-opaque objects are ordered back to front for rendering purposes, so that those in front can have some of the color of those in back. The ordering is based on the center of the object. So for example a long transparent cylinder whose center is at (0,0,0) but whose axis runs along (1,0,1) is treated as though all parts of its axis were at z=0, which is incorrect. Something like this is what's happening with your cone example. I'd like to take the opportunity on behalf of the VPython community to thank Jonathan Brandmeyer for his huge contributions to the development of VPython. He carried off two major developments. The first was to create an auto-configure installation mechanism which addressed severe problems that had existed with compiling and installing Visual on Linux/Unix platforms. The second was the new capabilities of transparency, surface textures, and sophisticated lighting. These represent an enormous step up for VPython. It also represents a fundamental change in the architecture of the Visual module, and because of that it will take some time to identify and fix the bugs. What he has done is not a simple addition to Visual but a major rewrite. Jonathan just graduated from NCSU in engineering. He encountered VPython during two semesters of introductory physics (Matter & Interactions) taught by Ruth Chabay and me. He got interested in the underlying software and began working on development. Alas, he is about to get a job in the real world and won't be free to spend large amounts of time on VPython. With his help, I'm trying to come up to speed on the new VPython as quickly as possible so that I can maintain and document it, but there's an awful lot to learn, having been mostly away from working with the software for the last several years while Chabay and I were focussed on getting our physics curriculum to work at NCSU. I'm at the point now where I can compile and fix simple bugs on Linux (in CVS are some simple bug fixes for textures) but for some reason haven't yet succeeded in compiling on Windows, which is a high priority. Again, thanks, Jonathan! Bruce Sherwood Rob Salgado wrote: Thanks much! Bruce Sherwood Scott David Daniels wrote: @... In the new beta VPython, color is red, green, blue, and optionally alpha (meaning opacity, with 0 transparent and 1 opaque). The name "alpha" is historical but is the standard term in professional graphics discourse. I invite discussion of the following proposal. Proposal: Suppose we use the term "opacity" throughout VPython, including documentation (to be written), but permit the use of "alpha" to accommodate those who normally use that term, with documentation pointing out in a footnote that one can use "alpha" as a synonym for "opacity". Rationale: There was already in VPython the label object where opacity is called "opacity", which was chosen to represent the concept because it's ordinary language, whereas "alpha" is technical. In keeping with the existing label opacity attribute, it seems appropriate to use the same word elsewhere for the same concept. Minor issue: In creating a texture object, one of the types is "rgba". In this proposal the standard form would be "rgbo" but as elsewhere with a footnote saying you can use "rgba". Bruce Sherwood P.S. A separate but related point is that currently "color.white" is a triple (1,1,1). Perhaps it should be a quadruple, (1,1,1,1)? It seems a bit unlikely that this would break any existing programs. Technically, the change would consist simply of changing the definitions in the crayola.py file. And the rgb<->hsv conversions would preserve the number of components in the original color. I installed VPython-Win-Py3.4-4.beta2 then I tried this: >>> from visual import * >>> rod = sphere() >>> ================================ RESTART ==== A VPython window appeared only for a part of a second then the shell restarted. I tried this on 2 different machines equipped with Windows XP, and on a third machine with Windows 2000 - the same error message. To be sure there are no other reasons I disabled firewall and antiviral software. So, I haven' seen VPython-Win-Py3.4-4.beta2 in real time. math teacher I am about to try the latest beta... ...but from peeking at the source, I thought I'd make a feature request that I hope isn't too hard to implement. In 4.0beta2's winrender_surface.cpp render_surface::create(), there is a variable "style" which specifies the window parameters (line 436) if (!fullscreen) style = WS_OVERLAPPEDWINDOW | WS_CLIPCHILDREN | WS_CLIPSIBLINGS; My suggestion: allow the VPython program to select some alternate combinations of such properties at creation time. A while back I had fooled around with my 2003-10-05 installations. By trial and error, I made some alternative cvisual.dlls : style = WS_OVERLAPPEDWINDOW | WS_CLIPCHILDREN | WS_CLIPSIBLINGS; style = WS_DLGFRAME | WS_POPUP; style = WS_POPUP; // frameless style = WS_DLGFRAME | WS_POPUP; // border-only style = WS_POPUPWINDOW; // frameless with move/close style = WS_POPUPWINDOW | WS_CAPTION ; // bordered with move/close style = WS_POPUP | WS_THICKFRAME; // bordered resizable style = WS_POPUPWINDOW | WS_THICKFRAME; // bordered resizable with move/close //WS_OVERLAPPEDWINDOW gave the minimize/maximize/close options style = WS_POPUPWINDOW | WS_THICKFRAME; // bordered resizable with move/close, topmost At one point, I remember fooling around with a partially-transparent scene... but I can find those files now. Of course, one will lose some ability in moving, resizing, minimizing, maximizing, and terminating with the mouse. But there are alternate ways to control this (e.g. via the taskbar, via a third-party program AutoIt), if necessary. Why do this? I think some multi-scene applications (like in my relativity animations at ) look nicer and possibly more professional looking (better suited to screen capturing) without the title bar, buttons, and borders. rob salgado For those of you who are playing with Jonathan Brandmeyer's beta release of a new VPython that supports surface textures, transparency, and sophisticated lighting, there is now a texture generator program in the contributed section of vpython.org. It creates texture files (*.vpt for VPython Texture) with a wood-like character which can be applied to boxes and spheres. Bruce Sherwood I just discovered that the Linux installer for the new experimental version doesn't install texturetest.py nor labels.py. If you're playing with this version on Linux, you might get the files from the examples folder of the package. Bruce Sherwood Bruce Sherwood wrote: > Does anyone have a clue as to what could be Joe's problem? Concerning the SF problem, no idea. He has had the same problem with the Fink mailing lists, and no solution was found so far. [] >> What are the chances of getting VPython 4 as a Fink package after >> it's declared stable? I guess this depends on my schedule :-) I haven't yet had time to look at version 4. I have been busy making sure that 3.2.9 is in the Fink-0.8.1 binary distribution that was released a couple of days ago. Contrary to what the package database at <> says, visual-py24-3.2.9-1002 (also *-py23) does exist in the 10.4 binary distribution, both for ppc and for intel. -- Martin Does anyone have a clue as to what could be Joe's problem? -------- Original Message -------- Subject: [Sherwood] Fwd: 06_spectrum.py now works Date: Sun, 18 Jun 2006 20:10:20 -0400 From: Joe Heafner <heafnerj@...> To: Sherwood Bruce <basherwo@...> References: <B218E13C-22AF-4F45-BC95-15F2FEAB5618@...> Hello Bruce. I tried posting this to the visualpython-users list and once again sourceforge is rejecting my posts. I've absolutely no idea why it no longer likes me. My ISP says the problem is on sourceforge's end and I can't get anyone at sourceforge to talk to me about it. Frustrating. Anyway, could you post my message to the list? I can still receive posts fine, I just can't send them. Begin forwarded message: > From: Joe Heafner <heafnerj@...> > Date: June 18, 2006 8:03:50 PM EDT > To: Visualpython-users <visualpython-users@...> > Subject: 06_spectrum.py now works > > Last night, I wiped out my old Fink distribution and installed the > new 0.8.1 release on my iMac and built visualpy24. The program that > wouldn't previously run, 06_spectrum.py, now runs perfectly. This > confirms my suspicion that the problem was a bug in Python and not > in Visual. The program worked perfectly under Windows at work. Now > I need to put the new Fink on my MacBook Pro and make sure > everything works on it. > > Incidentally, I now have Windows XP Home running under Parallels > Desktop () but I've not yet tried VPython > under this environment. I've read that 3D graphics isn't yet > supported under Parallels so I wouldn't think VPython would run. > > What are the chances of getting VPython 4 as a Fink package after > it's declared stable? > > Joe Heafner > heafnerj(at)sticksandshadows(dot)com > www(dot)SticksAndShadows(dot)com > > > Joe Heafner heafnerj(at)sticksandshadows(dot)com www(dot)SticksAndShadows(dot)com __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around linux:/backup/src/mine/visual-3.2.9/build # make Making all in site-packages/visual make[1]: Entering directory `/backup/src/mine/visual-3.2.9/build/site-packages/visual' make[1]: Nothing to be done for `all'. make[1]: Leaving directory `/backup/src/mine/visual-3.2.9/build/site-packages/visual' Making all in src make[1]: Entering directory `/backup/src/mine/visual-3.2.9/build/src' Updating dependancy information for ../../src/xgl.cpp ... make[1]: *** [xgl.d] Error 1 make[1]: Leaving directory `/backup/src/mine/visual-3.2.9/build/src' make: *** [all-recursive] Error 1 ------------------ using boost 1.33 is this the problem?? I can see from the build log that it cannot find the boost header files. And now larger than ever lies the curse On this our time; and all that went before Keeps altering its face from bad to worse; And each of us has felt the touch of war -- War after war, and exile, dangers, fear -- And each of us is weary to the core Of seeing his own blood along a spear And being alive because it missed its aim. Some folks have lost their goods and all their gear, And everything is gone, even the name Of house and home and wife and memory. And what's the use of it? A little fame? The nation's thanks? A place in history? One day they'll write a book, and then we'll see. Garcilaso de la Vega, 1503-1535 translated by JB Trend (The Civilization of Spain, Oxford, 1952) -- Alexander Anderson mailto : speeski at alma-services dot abel dot co dot uk bud-nav : Where there is no vision, the people perish. visualpython-users-request@... wrote: Send Visualpython-users mailing list submissions to visualpython-users@... To subscribe or unsubscribe via the World Wide Web, visit or, via email, send a message with subject or body 'help' to visualpython-users-request@... You can reach the person managing the list at visualpython-users-owner@... When replying, please edit your Subject line so it is more specific than "Re: Contents of Visualpython-users digest..." Today's Topics: 1. TEST (Alexander Anderson) ---------------------------------------------------------------------- Message: 1 Date: Wed, 14 Jun 2006 23:06:40 +0100 From: Alexander Anderson Subject: [Visualpython-users] TEST To: visualpython-users@... Message-ID: TEST ------------------------------ ------------------------------ _______________________________________________ Visualpython-users mailing list Visualpython-users@... End of Visualpython-users Digest, Vol 1, Issue 821 ************************************************** --------------------------------- Do you Yahoo!? Everyone is raving about the all-new Yahoo! Mail Beta. TEST Get it from the Sourceforge download pages at Probably not more than one or two more beta releases will be pushed out before 4.0, so please test out these builds! Also, the time that I will have to work on VPython will be much more limited from here out, so the sooner we get bug reports, the better. The Windows build should work for users of Windows 98 and ME, as well as XP. This build uses the Win32 API natively, rather than using Gtk. It still uses libsigc++, which is licensed under the LGPL. Therefore, its source code is available as win32_source_deps_LGPL.tar.bz2. At this time, the native windows build does not support the toolbar that the Gtk build does. If there is sufficient interest, it may be possible for us to publish a native Windows build as well as a Gtk+-on-Win32 build. Enjoy! -Jonathan Visual 4.beta2 ================================================================================ NEW FEATURES: * The Windows build is no longer dependent on Gtk+; it uses the Win32 API directly. Windows 98,ME,XP,2K are supported. Windows 95 is _not_ supported, and will not be for the forseeable future. The official Microsoft end of life date for Windows 98 and ME is coming soon (30 June 2006). Support by VPython for those versions of Windows is deprecated. This build depends on a third-party library, libsigc++, which is licensed under the GNU Lesser General Public License, version 2.1. (which was also the case for Gtk). To comply with the terms of this license, the source code for libsigc++ is available from our download site alongside the Windows package. * Ring objects support translucency. (but it is somewhat expensive) * Graphs of points utilize the new points object * (Actually present since 4.beta0) Label objects' text supports Unicode strings. On Linux/Unix, any unicode character supported by the selected font should be displayed correctly. This feature is not implemented for Windows. Additionally, the default text font for Linux is the system font (rather than courier), rendered using Freetype 2. * The colorsliders demo includes an alpha (opacity) slider * The gdots object uses the new points object for rendering cleaner point graphs BUGS FIXED: * Programs that track UI events should exit cleanly when the user closes a window, rather than hang. * A universe consiting of only one instance of a points, curve, convex, faces, or ellipsoid will be displayed properly. * Boxes with negative dimentions are rendered correctly * renderable.shininess = 1.0 is no longer a synonym for diabling shininess (shininess is maximized in this case) * Label object color properties are applied correctly the Windows distribution. * (Linux/UNIX only) label.text returns a properly formed unicode string I see you've already decided against it, but let me take a min to explain my own reasons why I think that this is a bad idea. On Thu, 2006-06-08 at 11:36 +1000, Hugh Fisher wrote: >. This is no big deal - there are several OpenGL-capable windowing systems out there, including PyGtkGLExt and WxPython. > Several of them are not static, and must be frequently rebuilt. Arrows, rings, and labels are prominent examples. Anytime the scene requires high dynamic range (abs(pos) > 1e10 or < 1e-10 or so), significant extra computation is required. > so can be > compiled into display lists. The overhead of OpenGL calls > in Python rather than C++ would be minimal and I doubt > anyone would notice. IF you can crank everything into displaylists and/or vertex arrays, then yes. However, this is not commonly the case, as above. > GUI code is the most amenable to implementing in Python. The current 4.beta series is designed with this in mind. If you look, there is a new class called a display_kernel. It implements everything needed to run the Visual scene graph, without any GUI code. There are a handful of hooks built in so that a GUI can be wrapped around (or inherited from) this class. >? Nah - this argument is a red herring. We already package a couple of extra libs with Visual (such as Numeric and Numarray). 4.betaX includes a 30 MB GTK runtime along with it. However, doing all of the rendering in C++ offers one _huge_ advantage over Python, that is independant of all other arguments. By running the GUI/rendering thread in C++, rendering takes place concurrently with physics code. Python is not capable of concurrent threading. When a Python program is mulithreaded, only one thread can ever run at any given time, regardless of the threading support of the host. This is because the interpreter itself is not thread safe. The interpreter multiplexes back and forth by rapidly releasing and grabbing a global interpreter lock. Even on multi-core and hyperthreading hardware, only _one_ Python thread can run at any given time (within a single program, that is). -Jonathan More thoughts, this time about implementing Visual Python on top of a scene graph API. This I don't think is a good idea, for technical reasons and for a more stylistic reason of who Visual Python is meant for. I'm writing my own Python 3D toolkit for virtual reality and visualization, based on the SGI Performer scene graph API. (Yeah I know SGI have gone bankrupt, it's the reason I happen to be thinking a lot about Python and scene graphs just now.) This gives me some experience and/or bias which I think is useful. Technically, it introduces a major dependency on another piece of software. I wrote in my last message that VP is great because you can download the binary and it Just Works. I'm not at all sure this could be done if VP required a scene graph. They're large, complicated bits of software under constant development. From my experience, I could just about offer a binary version of my Python scene graph toolkit for SGI IRIX, because it's a commercial OS that always ships with the scene graph libs. For Linux, I have to tell people to first install Performer, then compile and install my Python toolkit. MacOS or Windows wouldn't be any better, because neither has a standard scene graph API. A scene graph would almost certainly require a major rewrite of Visual Python. OpenGL has succeeded because it's low level and very flexible, without forcing you to write programs in a particular way. The price is no window management, event handling, picking, etc. Scene graph APIs provide a lot more features, but they also require you to write your code to match their style. They have to, large interactive 3D systems are really complicated. As I'm finding out, once you've committed to a scene graph API, it's difficult to switch. The rewriting might even have to extend to Visual Python code, not just the implementation. In the current version, as shown by all the demo programs, the main control loop and event loop is written in Python. The scene graphs I know all have an internal frame loop with event callbacks, and you have to write the Python code in the same style as PyGTK or wxPython. It's not difficult, but it isn't how Visual Python currently works and users would have to adapt. Which brings me to my second reason, is Visual Python meant for the kind of people who use scene graphs? A scene graph like Performer is a big complicated library for people who want to build big and frequently complicated 3D systems. We want to do wierd things, to tweak and tune for maximum performance, and this means exposing all the knobs and levers. We are, in short, speed freaks who willingly put up with the extra complexity. This is why I didn't use Visual Python as the basis for my own toolkit. I think VP is a wonderful tool for scientists, mathematicians, and students. I'd hate to spoil it by adding on a ton of features that I'd find useful but they would not. Which is not to say it's impossible to move smoothly from Visual Python to a full fledged scene graph. Maybe it can be done. But I'd be very careful about it. -- Hugh Fisher DCS, ANU PS. For anyone interested in looking at my 3D Python toolkit, <> For either an introduction to just how complicated a scene graph can be even in Python, or an example of how NOT to do it :-), have a look at py3dview/py3dview/Doc/ which is the HTML reference pages and a mini tutorial for Python programmers, and py3dview/py3dview/Demo/ has a bunch of little demonstration and testing programs. Needs a Linux or IRIX system with Performer if you actually want to build and run the thing. so can be compiled into display lists. The overhead of OpenGL calls in Python rather than C++ would be minimal and I doubt anyone would notice.? -- Hugh Fisher DCS, ANU I hit a frustrating bug which took some time to identify When I tried to run a module containing from visual.controls import * it seized up with the usual message: There's an error in your program: invalid syntax. The place where the error purported could be in the middle of a remark. Further investigation suggested that the purported error was a fixed distance into the program! Yet further investigation showed that the problem was caused by having imported another module, which itself imported visual. That module needed only to import 'math'. Making this change appears to have cured problem from Dr P H Borcherds reply to: p.h.borcherds@... telephone (+44) 121 475 3029 Is there any way of controlling the size of the python shell window from python or vpython? from Dr P H Borcherds reply to: p.h.borcherds@... telephone (+44) 121 475 3029
http://sourceforge.net/p/visualpython/mailman/visualpython-users/?viewmonth=200606
CC-MAIN-2015-48
refinedweb
3,784
65.12
dmake Simple, flexible build system in Dart. Supports incremental builds, file watching, and snapshotting for faster startup. Usage dmake is a DSL that creates a simple graph of inputs and outputs. Try compiling a Dart app! In tool/all.dart: import 'package:dmake/dmake.dart'; import 'package:dmake/dart.dart'; main(List<String> args) { make(args, () { if (isRelease) { // Build to JS in release mode. dart2js('web/main.dart'); } else { // Run all web/ files through dartdevc. all(glob('web/*.dart', recursive: false), dartdevc); } }); } Then, run dart tool/all.dart. All of your .dart files in web/ will be built to JavaScript via the Dart dev compiler. Release mode It's pretty common to have different build rules in debug and release mode. To switch over to dart2js, run dart tool/all.dart --release. Snapshotting Builds Build systems are called very often, and therefore should start up quickly. Using the dmake executable, you can easily snapshot your build script. To create a snapshot of tool/all.dart, just run pub run dmake. If you had another file, say, tool/foo.dart, the command would become pub run dmake -t foo. The single caveat is that to distinguish between arguments passed to the dmake toplevel and to your actual script, you need to separate them with a --. For example, to run in release mode: pub run dmake -- --release Run pub run dmake --help for help. You can also pub global activate dmake. In this case, you can simply run dmake, dmake -t foo, etc. Infrastructure dmake includes utilities for quickly building files in different languages: package:dmake/dart.dart package:dmake/sass.dart
https://pub.dev/documentation/dmake/latest/
CC-MAIN-2020-05
refinedweb
270
69.89
When i release a simple project with JSP+strunts2. The program can run correctly in my local. but when i push to bluemix. (IBM bluemix colud)There return the error message as: Error 404: There is no Action mapped for namespace [/] and action name [] associated with context path []. Answer by CarlosFerreira (135) | Jun 03, 2015 at 08:01 AM Hi vspya. You need to go a little deeper in terms of finding out what is causing the problem. This looks like your application failed to deploy. I suggest you do the following to trouble shoot some more and find the problem. Is your application running? Go to the Bluemix dashboard application page and check to make sure the application health is started and there aren't any unusual error messages in the logs. Sometimes this information is stale so you should do step 2 also. From the Bluemix Dashboard application page click on the links for the routes to your application to make sure it is running. If it isn't then you need to trouble shoot deployment failures. See next step Troubleshoot deployment failures. Download and install Cloud Foundry CLI and login to Bluemix Logging happens at multiple levels Log Types and Their Messages: API STG DEA - Droplet Execution Agent emits DEA logs beginning when it starts or stops the app. Important for troubleshooting deployment failures RTR LGR APP - Application level - using stderr and stdout Read the docs: use command cf logs DeployDjangoBluemixClearDB --recent If you are using Pipeline Delivery Service in Bluemix DevOps you can also search the log files there for "ERR" 57 people are following this question. sample web app to struts2+bluemix+oracle 0 Answers Installed at once ? all versions of "IBM Eclipse Tools for Bluemix" and "WDT". 3 Answers Eclipse Bluemix plugin seems to be ignoring my manifest 1 Answer Eclipse plug-in failing to notice when deploy is complete 2 Answers Bluemix Eclipse Plugin deploy war to liberty 2 Answers
https://developer.ibm.com/answers/questions/194413/bluemix-struts2-1.html?smartspace=bluemix
CC-MAIN-2019-22
refinedweb
327
62.88
It’s been a while since I’ve last posted a BLOG entry. I’ve been busy working with my feature teams to complete .NET Compact Framework v3.5. The .NET Compact Framework v3.5 will be shipped with the next version of Visual Studio, code name Orcas, later this year. NETCF teams have been working hard to get all of our new features in for Orcas Beta1. We wanted to make sure you all have an opptunity to try these new features early in the cycle to allow time to address your feedback. Watch my BLOG or the NETCF team BLOG for release deliverables. (NETCF Team BLOG at) The features I’ve been working on include, Windows Communication Foundation for the .NET Compact Framework, .NET Compact Framework Language Integrated Query, Sound and updates to GUI. I will be describing the new features for each of these areas in future BLOGS. Lets off with a qucik look at the new Sound API's. Last year we looked at the current managed sound API SoundPlayer. We liked the programming model; it was simple and easy to use. It was however tied to PlaySound which only allowed for one sound to be played at a time. Devices on the other hand includes WaveOut which allows for hardware mixing of sounds which was desired for those who want to create a simple game. Our solution is to use the managed SoundPlayer API’s unchanged, but deliver the sound to WaveOut allowing more than one sound to be rendered through SoundPlayer at a time. SoundPlayer has been included in the Orcas January CTP and on, to try it grab the latest Orcas CTP. Create a new Windows Mobile PocketPC 2003, .NET Compact Framework 3.5 project. Then drop three buttons on the form and copy the code below to each button. Deploy project to the emulator and give it a try. using System.Media; ... private void button1_Click(object sender, EventArgs e) { SoundPlayer s = new SoundPlayer("\\Windows\\Windows Default .wav"); s.Play(); SoundPlayer s = new SoundPlayer("\\Windows\\Windows Default .wav"); s.Play(); } private void button2_Click(object sender, EventArgs e) SoundPlayer s = new SoundPlayer("\\Windows\\type.wav"); SoundPlayer s = new SoundPlayer("\\Windows\\type.wav"); s.Play(); private void button3_Click(object sender, EventArgs e) SystemSounds.Exclamation.Play(); SystemSounds.Exclamation.Play();
http://blogs.msdn.com/b/markprenticems/archive/2007/03/07/whats-new-with-net-compact-frameowork-3-5.aspx
CC-MAIN-2015-27
refinedweb
383
67.15
info_outline Solutions will be available when this assignment is resolved, or after a few failing attempts. Time is over! You can keep submitting you assignments, but they won't compute for the score of this quiz. Copy Content from One File to Another Write a function that receives a path to two text files as parameters and copies the content of the first file into the second, overwriting the content of the second file if it's not empty. Example: copy_file('test-file.txt', 'copy.txt') Test Cases test copy file - Run Test import tempfile def test_copy_file(): fp1 = tempfile.NamedTemporaryFile(mode="w") fp1.write('this is line 1\n') fp1.write('this is line 2\n') fp1.write('this is line 3\n') fp1.flush() fp2 = tempfile.NamedTemporaryFile(mode="w") copy_file(fp1.name, fp2.name) fp1.close() with open(fp2.name) as fp2: assert len(fp2.readlines()) == 3 fp2.seek(0) assert fp2.readlines()[2] == 'this is line 3\n'
https://learn.rmotr.com/python/base-python-track/file-management/copy-content-from-one-file-to-another
CC-MAIN-2018-22
refinedweb
160
62.44
Scanning To start scanning and for nearby BLE devices, simply call scan on the ble.Device. The provided block will be called for each scan result: import ble main: device := ble.Device.default device.scan: | remote_device/ble.RemoteDevice | print "Found $remote_device" Here the scan will run indefinitely. Collecting results A scan can be used to create a list of remote devices that match a criteria, e.g. implement a specific service. A service is identified by a UUID with a select few services being assigned by Bluetooth SIG. As an example, the 16-bit UUID 0x180F represents a battery service. The following example shows how to scan for 3 seconds for the addresses of remote devices that implement a battery service. import ble BATTERY_SERVICE ::= ble.uuid 0x180F SCAN_DURATION ::= Duration --s=3 main: device := ble.Device.default addresses := [] device.scan --duration=SCAN_DURATION: | remote_device/ble.RemoteDevice | if remote_device.data.service_classes.contains BATTERY_SERVICE: addresses.add remote_device.address print addresses Example: mobile phone as a BLE device If you want to discover and find your mobile phone as a BLE device using the above Toit program, you can download a mobile app, like nRF Connect. Download the app on your mobile phone for iOS or for Android. The nRF Connect app allows your iOS or Android device to advertise as a BLE peripheral, as well as discovering nearby peripherals, like your Toit device. When running the above Toit program for scanning, you will be able to see your mobile phone on the list of discovered BLE devices.
https://docs.toit.io/tutorials/ble/scanning/
CC-MAIN-2022-05
refinedweb
253
58.38
[ ] Dag H. Wanvik commented on DERBY-3137: -------------------------------------- So, to clarify further, the following would be equivalent: ps.setString(1, "\" SPACEADMIN \""); // Role CNF of SPACEADMIN (surrounded by spaces on each side) ps.setString(1, " \" SPACEADMIN \" "); // Role CNF of SPACEADMIN (surrounded by spaces on each side). TRIMable space in value string I am OK with moving towards stricter adherence here; I'll make an update patch. Thanks for the suggestion on reserving namespace. Since Derby uses SYS as a reserved schema name, I guess the "SYS-" prefix is as good as any. I'll add this to the specification and make an update patch. > SQL roles: add catalog support > ------------------------------ > > Key: DERBY-3137 > URL: > Project: Derby > Issue Type: New Feature > Components: Security, SQL > Reporter: Dag H. Wanvik > Assignee: Dag H. Wanvik > Fix For: 10.4.0.0 > > Attachments: DERBY-3137-2.diff, DERBY-3137-2.stat, DERBY-3137-2.txt, DERBY-3137-uuid.diff, DERBY-3137-uuid.stat, DERBY-3137.diff, DERBY-3137.diff, DERBY-3137.stat, DERBY-3137.txt > > > As a next step after adding support for the roles syntax, I intend to > make a patch which implements catalog support for roles, > cf. SYS.SYSROLES described in the specification (attached to > DERBY-2207). Also the patch should tie this support up to the parser > support, so the role statements can be executed. Any privileges > granted to roles would still have no effect at run-time. -- This message is automatically generated by JIRA. - You can reply to this email to add a comment to the issue online.
http://mail-archives.apache.org/mod_mbox/db-derby-dev/200801.mbox/%3C32516396.1201705362691.JavaMail.jira@brutus%3E
CC-MAIN-2015-06
refinedweb
254
58.48
''' def reverseKGroup(self, head, k): """ :type head: ListNode :type k: int :rtype: ListNode """ Head = None # return node preTail = None # previous tail nextHead = head # next node to be reversed while nextHead: # head is the head of the reversed linked list # tail is the tail of ~~ # nnode is the next node succeed ~~ # r is the remaining length (r > 0 if insufficient) head, tail, nextHead, r = self.reverse(nextHead, k) if r > 0: # if insufficient head, tail, nextHead, r = self.reverse(head, k-r) # reverse list back to original order if preTail: preTail.next = head else: # first segment Head = head preTail = tail return Head def reverse(self, head, k): """ :type head: ListNode :type k: int :rtype head: ListNode :rtype tail: ListNode """ p0 = None p1 = head while p1 and k: p2 = p1.next p1.next = p0 p0, p1 = p1, p2 k -= 1 return p0, head, p1, k '''
https://discuss.leetcode.com/topic/65943/python-solution-simple-with-comments-beats-78
CC-MAIN-2017-39
refinedweb
143
69.86
from categorical and numerical data. - Part 2: Regression with Keras and CNNs — training a CNN to predict house prices from image data (today’s tutorial). - Part 3: Combining categorical, numerical, and image data into a single network (next week’s tutorial). Today’s tutorial builds on last week’s basic Keras regression example, so if you haven’t read it yet make sure you go through it in order to follow along here today. By the end of this guide, you’ll not only have a strong understanding of training CNNs for regression prediction with Keras, but you’ll also have a Python code template you can follow for your own projects. To learn how to train a CNN for regression prediction with Keras, just keep reading! Looking for the source code to this post? Jump right to the downloads section. Keras, Regression, and CNNs In the first part of this tutorial, we’ll discuss our house prices dataset which consists of not only numerical/categorical data but also image data as well. From there we’ll briefly review our project structure. We’ll then create two Python helper functions: - The first one will be used to load our house price images from disk - The second method will be used to construct our Keras CNN architecture Finally, we’ll implement our training script and then train a Keras CNN for regression prediction. We’ll also review our results and suggest further methods to improve our prediction accuracy. Again, I want to reiterate that you should read last week’s tutorial on basic regression prediction before continuing — we’ll be building off not only the concepts from last week but the source code as well. As you’ll find out in the rest of today’s tutorial, performing regression with CNNs and Keras is as simple as: - Removing the fully-connected softmax classifier layer typically used for classification - Replacing it with a fully-connected layer with a single node along with a linear activation function. - Training the model with a continuous value prediction loss function such as mean squared error, mean absolute error, mean absolute percentage error, etc. Let’s go ahead get started! Predicting house prices…with images? Figure 1: Our CNN takes input from multiple images of the inside and outside of a home and outputs a predicted price using Keras and regression.: - Number of bedrooms - Number of bathrooms - Area (i.e., square footage) - Zip code Four images of each house are also provided: - Bedroom - Bathroom - Kitchen - Frontal view of the house A total of 535 houses are included in the dataset, therefore there are 535 x 4 = 2,140 total images in the dataset. We’ll be pruning that number down to 362 houses (1,448 images) during our data cleaning. To download the house prices dataset you can just clone Ahmed and Moustafa’s GitHub repository: That single command will download both the numerical/categorical data along with the images themselves. Make note of where you downloaded the repository on the disk (I put it in my home folder) as you’ll need to supply the path to the repo via command line argument later in this tutorial. For more information on the house prices dataset please refer to last week’s blog post. Project structure Let’s look at the structure of today’s project: We will be updating both datasets.py and models.py from last week’s tutorial with additional functionality. Our training script, cnn_regression.py , is completely new this week and it will take advantage of the aforementioned updates. Figure 2: Our CNN accepts a single image — a montage of four images from the home. Using the montage, our CNN then uses regression to predict the value of the home with the Keras framework. As we know, our house prices dataset includes four images associated with each house: - Bedroom - Bathroom - Kitchen - Frontal view of the house But how are we going to use these images to train our CNN? We essentially have three options: - Pass the images one at a time through the CNN and use the price of the house as the target value for each image - Utilize multiple inputs with Keras and have four independent CNN-like branches that eventually merge into a single output - Create a montage that combines/tiles all four images into a single image and then pass the montage through the CNN The first option is a poor choice — we’ll have multiple images with the same target price. If anything we’re just going to end up “confusing” our CNN, making it impossible for the network to learn how to correlate the prices with the input images. The second option is also not a good idea — the network will be computationally wasteful and harder to train with four independent tensors as inputs. Each branch will then have its own set of CONV layers that will eventually need to be merged into a single output. Instead, we should choose the third option where we combine all four images into a single image and then pass that image through the CNN (as depicted in Figure 2 above). For each house in our dataset, we will create a corresponding tiled image that that includes: - The bathroom image in the top-left - The bedroom image in the top-right - The frontal view in the bottom-right - The kitchen in the bottom-left This tiled image will then be passed through the CNN using the house price as the target predicted value. The benefit of this approach is that we are: - Allowing the CNN to learn from all photos of the house rather than trying to pass the house photos through the CNN one at a time - Enabling the CNN to learn discriminative filters from all house photos at once (i.e., not “confusing” the CNN with different images with identical target predicted values) To learn how we can tile the images for each house, let’s take a look at the load_house_images function in our datasets.py file: The load_house_images function accepts two parameters: - df : The houses data frame. - inputPath : Our dataset path. Using these parameters, we proceed by initializing a list of images that will be returned to the calling function, once processed. From there we begin looping (Line 64) over the indexes in our data frame (i.e., one unique index for each house). In the loop we: - Construct the basePath to the four images for the current index (Line 67). - Use glob to grab the four image paths (Line 68). The glob function uses our input path with the wildcard and then finds all input paths that match our pattern. In the next code block we’re going to populate a list containing the four images: Continuing in the loop, we proceed to: - Initialize our inputImages list and allocate memory for our tiled image, outputImage (Lines 72 and 73). - Create a nested loop over housePaths (Line 76) to load each image , resize to 32×32, and update the inputImages list (Lines 79-81). And from there, we’ll tile the four images into one montage, eventually returning all of the montages: To finish off the loop, we: - Tile the input images using NumPy array slicing (Lines 87-90). - Update images list (Line 94). Once the process of creating the tiles is done, we go ahead and return the set of images to the calling function on Line 97. Using Keras to implement a CNN for regression Figure 3: If we’re performing regression with a CNN, we’ll add a fully connected layer with linear activation. Let’s go ahead and implement our Keras CNN for regression prediction. Open up the models.py file and insert the following code: Our create_cnn function will return our CNN model which we will compile and train in our training script. The create_cnn function accepts five parameters: - width : The width of the input images in pixels. - height : How many pixels tall the input images are. - depth : The number of channels for the image. For RGB. Let’s go ahead and define the input to the model and begin creating our CONV => RELU > BN => POOL layer set: Our model inputs are defined on Line 33. From there, on Line 36, we loop. Let’s finish building our CNN: We Flatten the next layer (Line 49) and then add a fully-connected layer with BatchNormalization and Dropout (Lines 50-53). Another fully-connected layer is applied to match the four nodes coming out of the multi-layer perceptron (Lines 57 and 58). On Line 61 and 62, a check is made to see if the regression node should be appended; it is then added it accordingly. Finally, the model is constructed from our inputs and all the layers we’ve assembled together, x (Line 65). We can then return the model to the calling function (Line 68). Implementing the regression training script Now that we’ve implemented our dataset loader utility function along with our Keras CNN for regression, let’s go ahead and create the training script. Open up the cnn_regression.py file and insert the following code: The imports for our training script are taken care of on Lines 2-9. Most notably we’re importing our helper functions from datasets and models . The locale package will help us with formatting our currencies. From there we parse a single argument using argparse: --dataset . This flag and the argument itself allows us to specify the path to the dataset from our terminal without modifying the script. Now let’s load, preprocess, and split our data: Our inputPath on Line 20 contains the path to our CSV file containing the numerical and categorical attributes along with the target price for each home. Our dataset is loaded using the load_house_attributes convenience function we defined in last week’s tutorial (Line 21). The result is a pandas data frame, df , containing the numerical/categorical attributes. The actual numerical and categorical attributes aren’t used in this tutorial, but we do use the data frame in order to load the images on Line 26 using the convenience function we defined earlier in today’s blog post. We go ahead and scale our images’ pixel intensities to the range [0, 1] on Line 27. Then our dataset training and testing splits are constructed using scikit-learn’s handy train_test_split function (Lines 31 and 32). Again, we will not be using the numerical/categorical data here today, just the images themselves. The numerical/categorical data is used in part one (last week) and part three (next week) of this series. Now let’s scale our pricing data and train our model: Here we have: - Scaled the house prices to the range [0, 1] based on the maxPrice (Lines 37-39). Performing this scaling will lead to better training and faster convergence. - Created and compiled our model using the Adam optimizer (Lines 45-47). We are using mean absolute percentage error as our loss function and we’ve set regress=True indicating that we want to perform regression. - Kicked of the training process (Lines 51 and 52). Now let’s evaluate the results! In order to evaluate our house prices model based on image data using regression, we: - Make predictions on test data (Line 56). - Compute absolute percentage difference (Lines 61-63) and use that to derive our final metrics (Lines 67 and 68). - Display evaluation information in our terminal (Lines 72-75). That’s a wrap, but… Don’t be fooled by how succinct this training script is! There is a lot going on under the hood with our convenience functions to load the data + create the CNN and the training process which tunes all the weights to the neurons. To brush up on convolutional neural networks, please refer to the Starter Bundle of Deep Learning for Computer Vision with Python. Training our regression CNN Ready to train your Keras CNN for regression prediction? Make sure you have: - Configured your development environment according to last week’s tutorial. - Used the “Downloads” section of this tutorial to download the source code. - Downloaded the house prices dataset using the instructions in the “Predicting house prices…with images?” section above. From there, open up a terminal and execute the following command: Our mean absolute percentage error starts off extremely high, in the order of 300-2,000% in the first ten epochs; however, by the time training is complete we are at a much lower training loss of 30%. The problem though is that we’ve clearly overfit. While our training loss is 30% our validation loss is at 56.91%, implying that, on average, our network will be ~57% off in its house price predictions. How can we improve our prediction accuracy? Overall, our CNN obtained a mean absolute error of 56.91%, implying, that on average, our CNN will be nearly 57% off in its predicted house value. That’s a pretty poor result given that our simple MLP trained on the numerical and categorial data obtained a mean absolute error of 26.01%, far better than today’s 56.91%. So, what does this mean? Does it mean that CNNs are ill-suited for regression tasks and that we shouldn’t use them for regression? Actually, no — it doesn’t mean that at all. Instead, all it means is that the interior of a home doesn’t necessarily correlate with the price of a home. For example, let’s suppose there is an ultra luxurious celebrity home in Beverly Hills, CA that is valued at $10,000,000. Now, let’s take that same home and transplant it to Forest Park, one of the worst areas of Detroit. In this neighborhood the median home price is $13,000 — do you think that gorgeous celebrity house with the decked out interior is still going to be worth $10,000,000? Of course not. There is more to the price of a home than just the interior. We also have to factor in the local real estate market itself. There are a huge number of factors that go into the price of a home but by in large, one of the most important attributes is the locale itself. Therefore, it shouldn’t be much of a surprise that our CNN trained on house images didn’t perform as well as the simple MLP trained on the numerical and categorical attributes. But that does raise the question: - Is it possible to combine our numerical/categorical data with our image data and train a single end-to-end network? - And if so, would our house price prediction accuracy improve? I’ll answer that question next week, stay tuned. Summary In today’s tutorial, you learned how to train a Convolutional Neural Network (CNN) for regression prediction with Keras. Implementing a CNN for regression prediction is as simple as: - Removing the fully-connected softmax classifier layer typically used for classification - Replacing it a fully-connected layer with a single node along with a linear activation function. - Training the model with continuous value prediction loss function such as mean squared error, mean absolute error, mean absolute percentage error, etc. What makes this method so powerful is that it implies that we can fine-tune existing models for regression prediction — simply remove the old FC + softmax layer, add in a single node FC layer with a linear activation, update your loss method, and start training! If you’re interested in learning more about transfer learning and fine-tuning on pre-trained models, please refer to my book, Deep Learning for Computer Vision with Python, where I discuss transfer learning and fine-tuning in detail. In next week’s tutorial, I’ll be showing you how to work with mixed data using Keras, including combining categorical, numerical, and image data into a single network. To download the source code to this post, and be notified when next week’s blog post publishes, be sure to enter your email address in the form below! Hey Adrian, thanks for the great post! Quick question about your image montage technique – do you think you would have ended up getting better accuracy with multiple inputs instead of a montage, even though the computational complexity is higher? My intuition is that the filters that the convnet would be learning would optimally be quite different for frontal vs interior views. By combining images into a montage, wouldn’t this force the same filters to be used instead, potentially decreasing potential for generalization? Hey Eddie — I would encourage you to run the experiment for yourself and examine the results. When I was writing the code, as a sanity check, I ran an experiment where I did not create the montage and instead allowed each of the four images for each house to be passed through the network independently. The results were far worse. The reasoning here is a CNN may learn filters that are really good at predicting the price of a bathroom — but those same learned filters may be extremely poor at predicting a house price from a bedroom image. You could even run into a case where a network even overfits to one of the four image classes. Instead, what we do is create a tiled montage of all four house images. Some filters may activate for expensive looking features in the bedroom. Others could activate for features that look expensive in a frontal view. The result is that all of these expensive vs. inexpensive activations can be correlated together since all four images were passed into the network at the same time, leading to higher accuracy than just supplying the images independently, one at a time (where you wouldn’t have this correlation across all four images). There is a fourth option for dealing with multiple images as input: use a shared encoder. This solves the problem of degraded representational power (and therefore also wasted computation and memory) while ensuring that your inputs are still sane and everything we understand about image networks is still valid. This is easy to do with keras if you simply create the layer object first (encoder = Conv2D(…)) and then use it multiple times later (output1 = encoder(input1), output2 = encoder(input2), …). The rest is as you would have done without shared weights where you merge via whatever strategy you want (concat, add, etc.). Great point Had, thanks for sharing! Hello Adrian, Great tutorial. But there was a small oopsie in pyimagesearch/datasets.py. These three lines must needs be removed: print(housePaths) import sys sys.exit(0) Regards, David P.S. I am curious to know if feature extraction would work well for this problem set. Thanks David! I must have missed that debug statement. I’ve uploaded a new .zip file that corrects the issue. Thanks again! Adrian, So I took the hint you made at the end of this blog and attempted to fine-tune a pre-trained network (it happened to be a 128×128 MobileNet from keras). After a little bit of struggle I managed to train the new head and was seeing an error rate of around 48%. Which is a nice improvement. I suspect further improvements will require working around the limitations of the dataset as much as any architectural improvements. Thanks again for the great blog post! Yes, you’re absolutely right — future improvements for this particular problem are more rooted in the limitations of the dataset we used here today rather than the architecture used. That said, we’ll be able to edge out a bit more accuracy when we combine the categorical, numeric, an image inputs together into a single end-to-end network. Big guy, I datasets.py on line 70, did you mean to leave that there? print(housePaths) import sys sys.exit(0) -Huguens Man, I should read the comments before commenting. LOL. My bad. It was my fault for having that in there in the first place. The new download of the .zip does not. That tiling technique is clever. What would you do if you had a variable number of images per item (including only one in some cases) and they did not fall into a consistent set of categories? Really Adrian I’ve learned so much thanks to you, you are pure love (ノ◕ヮ◕)ノ*:・゚✧, but one question, will you teach us Generative Adversarial Networks (GANs)?(or if you already did it, tell me please where), and how they works?, I’ve searched in the internet, but I can’t understand the info I’ve founded. I think you explain in a simple way and with your explanations I can star to go deeper, Thanks!!! Thanks Antonio, I appreciate that. I actually cover GANs inside the Practitioner Bundle my book, Deep Learning for Computer Vision with Python. I would suggest starting there! Excuse me, but what is the meaning and influence of batch.size (I noticed it had an influence on speed, but I cannot figure out how it works -and pydoc was of no help…) It’s a hyperparameter that you can tune. Smaller batch sizes mean more updates per epoch. Larger batch sizes mean less updates per epoch. Each time there is a weight update there is a chance for your network to “learn” and improves its results. However, there is a tradeoff to consider. If your batches are very large your network may not have enough changes to learn. If your batches are too small then training may take longer due to the number of backpropagation steps. Typically you set your batch size as a power of 2. For small datasets you’ll use a batch size of 8, 16, of 32. For large datasets, in the order of hundreds of thousands to millions of images, you’ll use a batch size of the maximum number (of power of 2) images your GPU can handle. Thanks a lot: I begin to understand why things were somewhat faster when I increase (8->18 ->28) batch size in the previous example (house prices with 4 variables, not 4 images…). And I was lucky enough to get same results (in termes of mean average relative absolute error). And I missed that batch size would increase RAM greediness…. thanks for hinting this feature. Should one train with the log(x+1) as target where x is the price? Objective function in the examples given here is minimizing the absolute sum of a relative error. (target is not only the thing to be considered; objective function is important, too) . Maybe log(x+alpha) would be a good target, objective function being mean absolute error ; but people who buy houses seldom remember log values; an 20% increase/decrease has some meaning for them). Once you train and predict log values you apply exp() to bring it back to normal values in $ I would suggest you try that as an experiment and note your results. Do results improve after log scaling? Run your own experiments and examine the results — it’s one of the best ways to learn. Well, if errors are small : both objective functions are the same … (and else, why should one minimize?) The idea of using an easy to understand ((bankers, buyers do not know logs nor exponentials) criterium is not a bad idea… (and changing to log(x+1) adds another button, “1” : depends on the units; -say, change US$ to australian ones or to bolivares : you wonot minimize the same criterium- keeping as log(x) is currency unit invariant, and houses are expensive enough to avoid numerical accidents , At least I hope…) Very interesting, thanks! Do you have any recommendations on how to deal with datasets where the number of pictures for each house can vary? I have seen houses on sale online showing from just 1 to around 60 pictures. What should I do if I wanted to try your architecture on this kind of data? For this exact tutorial you would need select the four images for the kitchen, bedroom, bathroom, and frontal view of the house. However, an experiment worth running would be to create an MxN grid of all images for your house. Any empty spaces in the grid, meaning that there is no image of the house, could be left black. From there, tile all your images and train a CNN. The problem here is that you could end up with a very large input image if you don’t make your tiles small enough. But if you make them too small you might lose too much detail. Again, it would be an experiment worth running. hi Adrian Nice and clear tutorial! Thanks! One perhaps stupid question: when you create the cnn model create_cnn() in model.py, you added another FC layer L57-58. You said this is to match the number of nodes coming out of the MLP x = Dense(4)(x) x = Activation(“relu”)(x) But I can’t figure our how the cnn model is related to the MLP model? Did i miss anything? QL That comment will make *a lot* more sense when you read next week’s tutorial on combining a CNN and MLP into a single end-to-end architecture. I wrote the code for the entire project first before I wrote the tutorials, hence why that comment is in there. Hey dr, thank you very much for your great guide. Thanks, I’m glad you’re enjoying the tutorials! Hi. I’ve been reading your posts since my last semester for my computer vision paper, and these have really helped a lot! I’ve been trying to follow this series to make use of regression for a project. And I’ve been stuck at a place. I was wondering if I could find some help here. The dataset that I have is: – a directory of images – a csv file with the target continuous values I have read that CNN classification can use ImageDataGenerator to correctly feed the train and test data, however it’s format is unsuitable for the given dataset. If I make a numpy array of “all” (there’s quite a few of those) the images, and another one for the corresponding target values and feed them as parameters for the ‘model.fit’, will it work or pose some computation issues? Thank you! Have you tried following this tutorial to see how the ImageDataGenerator class can be used to load images from an input directory? That would be my suggestion. Hi I want to do regression without combining some photo just with single image how should I do it? I have 4 classes and I want just do the regression for these classifications is it possible? Just remove the code where the montage is formed. Return the original image via “cv2.imread”. If you need more help training your own custom CNNs I would suggest reading through Deep Learning for Computer Vision with Python where I cover the topic in more detail. Be sure to take a look! could you send the tutorial for Combining categorical, numerical, and image data into a single network (next week’s tutorial). how to give the 2d array(image represented as 2d array) into csv files The tutorial is already online and has been since February 4th. You can find it here. i am working on my project for predicting house price from images,model is created 200 epochs are being scanned and an avg price is being displayed ,now what i want to do is predict the house price with the using four images that is kitchen,bathroom,frontal image and zipcode , how shall i apply the input? and how shall i call method.predict(),please can u write me down the code? Hi Adrian, Thanks for this fantastic tutorial. I am trying to develop a CNN (in Python) that predicts multiple continuous variables, and am having trouble importing the images in a format that is acceptable as input to a CNN. I can’t seem to find any examples online of people importing raw images for this kind of task; most programs seem to be for classification and use ImageDataGenerator which is not applicable for my problem. Any help would be greatly appreciated. What do you mean by “raw images”? How are your images different than the images we used in this tutorial? Why was “stacking the input channels” not an option you mentioned? Seems most obvious to me. Rather than an 4n x 4n x 3 input volume, you could stack the images and input an n x n x 12 input volume. Never tried this, but would like your opinion on it. Hey Ian, I’m happy to provide my tutorials (and my help) for free, but one thing I ask of PyImageSearch readers is to test their assumptions, develop an experiment, and run it — it’s truly the best way to learn. You have an idea, great! Now give it a try. Andrian, Thanks for a great tutorial. A quick question about model training. model.fit(trainImagesX, trainY, validation_data=(testImagesX, testY) Is the Images data in trainImagesX and tabular data in trainY mapped? If not, How does the model map same house attributes for given image montage? Do you think it’s better to use one dataframe with images and house attributes mapped together? Make sure you’re reading this tutorial as well as the previous one as it shows you how the image data and house attributes are linked together. Hello, Thank you for providing such a nice tutorial. I want to train this same model with my own dataset. I have images of parking space at different steereng angle captured by car camera. And i have 10 classes for different steering angle containing images of it. now I want to train this network on this dataset as regression problem to get prediction of steering angle. Please suggest me that what changes I have to make. Thank you. That’s a pretty neat dataset. Is it publicly available? It would be fun to play with and hack around with. Let me know if you can share it. Hi Adrian, thank you for the great post! It’s really helpful! I would like to get a confidence score of each of the predictions that it makes, showing on how sure the regression model is on its prediction that it is correct. Is there any ways to calculate the confidence score of the prediction values? Thank you. Hi Adrian: Is it possible when the prediction is maded it show the image? In other words knowing what is the house over the prediction is computed? I believe I already answered this question in my email reply to you, Enrique. Thanks Adrian Maybe the low accuracy is because the convloution filters when applied to the montage image will span 2 photos at the intersection and so the information returned by them will not be valid? Best Regards, Walid Hi Adrian: This is an amazing tutorial. However, I noticed that you need to train the model every time you want to make a predictions, right?. It could be interested saving weights to use in other images or something like that. You can follow this tutorial if you need to save/load your Keras model. Hi Adrian: I was thinking how to show the predict value. If I multiply preds*Maxprice I obtain a value. Is this the predict value? That is correct. Hi Adrian, thanks a lot for your tutorial, which is very helpful. Have you tried the second option of building a model with four independent tensors as inputs, which you said is not good? I am wondering how to deal with a case if there are only three pictures which can not combined into a single image? Thanks a lot. Best regards, Bojie
https://www.pyimagesearch.com/2019/01/28/keras-regression-and-cnns/
CC-MAIN-2020-05
refinedweb
5,319
62.27
); } } I'd assume this is C, since it works in GCC as well. Where is this defined in the standard, and where has it come from? That's not an operator -->. That's two separate operators, -- and >. Your condition code is decrementing x, while returning xs original (not decremented) value, and then comparing the original value with 0 using the > operator. while( (x--) > 0 ) It's equivalent to while (x-- > 0) It's #include <stdio.h> int main(void) { int x = 10; while( x-- > 0 ) // x goes to 0 { printf("%d ", x); } return 0; } Just the space make the things look funny, -- decrements and > compares. while( x-- > 0 ) is how that's parsed. That's a very complicated operator, so even the C++ Standard committee placed its description in two different parts of the Standard. Joking aside, they are two different operators: -- and > described respectively in §5.2.6/2 and §5.9 of the C++03 Standard. Anyway, we have a "goes to" operator now. "-->" is easy to be remembered as a direction, and "while x goes to zero" is meaning-straight. Furthermore, it is a little more efficient than "for (x = 10; x > 0; x --)" on some platforms. The usage of --> has historical relevance. Decrementing was (and still is in some cases), faster than incrementing on the x86 architecture. Using --> suggests that x is going to 0, and appeals to those with mathematical backgrounds. This code first compares x and 0 and then decrement x. (Also said in the first answer: You're post-decrementing x and then comparing x and 0 with the > operator.) See the output of this code: 9 8 7 6 5 4 3 2 1 0 We now first compare and then decrement by see 0 in ?. This is exactly the same as while (x--){ printf("%d ", x); } for non-negative numbers ^^ Utterly geek, but I will be using this: #define as ;while int main(int argc, char* argv[]) { int n = atoi(argv[1]); do printf("n is %d\n", n) as ( n --> 0); return 0; } Or for something completely different... x slides to 0 while (x --\ \ \ \ > 0) printf("%d ", x); Not so mathematical, but... every picture paints a thousand... Actually, x is post-decrementing and with that condition is being checked. It's not -->, it's (x--) > 0 Note: value of x is changed after the condition is checked, because it post-decrementing. Some similar cases can also occur, for example: --> x-->0 ++> x++>0 -->= x-->=0 ++>= x++>=0 It's a combination of two operators. First -- is for decrementing the value, and > is for checking whether the value is greater than the right-hand operand. #include<stdio.h> int main() { int x = 10; while (x-- > 0) printf("%d ",x); return 0; } The output will be: 9 8 7 6 5 4 3 2 1 0 It should a not organized code --> \ Doesn't exist -> \ member access operator (one kind) -- \ Decrements (per or pos fix) > \ Comparison Grater than Your code says that is -- > decrements then comparison... The confusion comes on the hierarchy of the operators... The comparison needs 2 comparable arguments, but if they are expressions, it needs to evaluate them first, so... and the decrements symbol is part of the first expression... A clean code organize that kind of things in a clear way, Code is for human read, not machine... x can go to zero even faster in opposite direction int x = 10; while( 0 <---- x ) { printf("%d ", x); } 8 6 4 2 You can control speed with an arrow! int x = 100; while( 0 <-------------------- x ) { printf("%d ", x); } 90 80 70 60 50 40 30 20 10 ;) Why all the complication? The simple answer to the original question is just: #include <stdio.h> int main() { int x = 10; while (x > 0) { printf("%d ", x); x = x-1; } } Does the same thing. Not saying you should do it like this, but it does the same thing and would have answered the question in one post. The x-- is just shorthand for the above, and > is just a normal greater-than operator. No big mystery! There's too much people making simple things complicated nowadays ;) It means work until it remains zero int x = 10; while (x -- -0) // x goes to 0 { printf("%d ", x); } Try it with any positive value. while (x -- -3) // x goes to 3 { printf("%d ", x); } Try it with any negative value. while (x --> -3) // x goes to -3 { printf("%d ", x); } Conventional way we define condition in while loop parenthesis"()" and terminating condition inside the braces"{}", but this -- & > is a way one defines all at once. For e.g: int abc(){ int a = 5 while((a--) > 0){ //decrement and comparison both at once //code } } It says, decrement a and run the loop till the time 'a' is greater than '0' Other way it should have been like: int abc(){ int a = 5 while(a> 0){ //code a = a -1 //decrement inside loop } } both ways, we do same thing and achieve same goals. :) This is not a single operator in C++, they are 2 separate operators in C++, which are -- and >. x-- get x value and -- (decrement 1) it, this will not effects at compression line, -- decrement apply in the next line, so it's just the current value at this stage: while (x-- > 0) // x goes to 0 > this is just the greater operator So basically it's a while loop which check x-- is bigger than 0 and loop keep executing while the condition is true. Similar Questions
http://ebanshi.cc/questions/6/what-is-the-name-of-the-operator
CC-MAIN-2017-43
refinedweb
924
69.21
SAS, it’s just another token February 21, 2015 5 Comments Note: Please bear with me, I authored this post in Markdown. 🙂 I’ve been trying to finish this post since September of 2014. But I kept getting distracted away from it. Well this lovely (its 5 degrees Fahrenheit here in Minnesota) Saturday morning, it IS my distraction. I’ve been focused the last few weeks on my new love, the Simple Cloud Manager Project, as well as some internal stuff I can’t talk about just yet. Digging into things like Ember, Jekyll, Broccoli, GitHub Pages, git workflows, etc… has been great. But it’s made me keenly aware of how much development has leapfrogged my skills as a Visual Studio/.NET centric cloud architect. With all that learning, I needed to take a moment and get back to something I was a little more comfortable with. Namely, Azure services and specifically Service Bus and Shared Access Signatures (SAS). I continue to see emails, forum posts, etc… regarding to SAS vs ACS for various scenarios. First off, I’d like to state that both approaches have their merit. But something we all need to come to terms with is that at their heart, both approaches are based around a security token. So as the name of this blog article points out, SAS is just a token. What is in a SAS token? For the Azure Service Bus, the token is simply a string that looks like the following SharedAccessSignature sr=https%3a%2f%2fmynamespace.servicebus.windows.net%2fvendor-&sig=AQGQJjSzXxECxcz%2bbT2rasdfasdfasdfa%2bkBq%2bdJZVabU%3d&se=64953734126&skn=PolicyName Within this string, you see a set of URL encoded parameters. Let’s break them down a bit… SharedAccessSignature – used to identify the type of Authorization token being provided. ACS starts with “WRAP” sr – this is the resource string we’re sharing access to. In the example above, the signature is for anything at or under the path “-“ sig – this is a generated, HMAC-SHA256 hash of the resource string and expiry that was created using a private access key. se – the expiry date/time for the signature expressed in the number of seconds since epoch (00:00:00 UTC on January 1st, 1970) skn – the policy/authorization rule who’s key is was used to generate the signature and who’s permissions determine what can be done The token, or signature, is created by using the resource path (the url that we want to access) and an expiry date/time. A HMAC-SHA256 hash using the key of a specify authorization policy/access rule is then generated off of those parameters. In its own way, using the policy name and its key is not that different then using an identity and password. And like an ACS token, we have an expiry value that helps ensure the token we receive can only be used for a given period of time. Generating our own Token So the next logical question is how to generate our own token. If we opt to use .NET and have the ability to leverage the Azure Service Bus SDK, we can pull together a simple console application to do this for us. Start by creating a Console application, and adding some prompts for a parameter to it so that the main method looks like this… static void Main(string[] args) { Console.WriteLine("What is your service bus namespace?"); string sbNamespace= Console.ReadLine(); Console.WriteLine("What is the path?"); string sbPath = Console.ReadLine(); Console.WriteLine("What existing policy would you like to use to generate your signature?"); string sbPolicy = Console.ReadLine(); Console.WriteLine("What is the policy's secret key?"); string sbKey = Console.ReadLine(); Console.WriteLine("When should this expire (MM/DD/YY HH, GMT)?"); string sbExpiry = Console.ReadLine(); Console.WriteLine("Press any key to exit..."); Console.ReadKey(); } The first parameter we’re going to capture and save is the namespace we’re wanting to access without the “servicebus.windows.net” part. Next, is the path to the Service Bus Entities we want provide access to. This can be a specific entity such as a queue name or as I mentioned last time, a partial path to grant access to multiple resources. Then we need to provide a named policy (which you can set up via the portal), and one of its secret keys. Finally, you will specify when this signature will need to expire. Next, we need to transform the expiration time that was entered as a string into a Timespan (how long does the SAS need to ‘stay alive’. We’ll insert this right after we read the expiration value… // convert the string into a timespan... DateTime tmpDT; bool gotDate = DateTime.TryParseExact(sbExpiry, "M/dd/yy HH", enUS, DateTimeStyles.None, out tmpDT); if (!gotDate) Console.WriteLine("'{0}' is not in an acceptable format.", sbExpiry); Now we have all the variables we need to create a signature, so it’s time to generate it. For that, we’ll use a couple classes contained in the .NET Azure Service Bus SDK. We’ll start by adding it to our project using the instructions available on MSDN (so I don’t have to retype them all here). With the proper references added to the project, we add a simple using clause at the top.. using Microsoft.ServiceBus; Then add the code that will create the SAS token for us right after the code that created out TimeSpan. var serviceUri = ServiceBusEnvironment.CreateServiceUri("https", sbNamespace, sbPath).ToString().Trim('/'); string generatedSaS = SharedAccessSignatureTokenProvider.GetSharedAccessSignature(sbPolicy, sbKey, serviceUri, expiry); And there we have it. A usable SAS token that will automatically expire after a given period of time, or that we can revoke immediately by removing the policy on which its based. But what about doing this without the SDK? Lets start by looking at what the Service Bus SDK is doing for us. Fortunately, Sreeram Garlapati has already written some code to generate a signature. static string CreateSasToken(string uri, string keyName, string key) { // Set token lifetime to 20 minutes. When supplying a device with a token, you might want to use a longer expiration time. DateTime origin = new DateTime(1970, 1, 1, 0, 0, 0, 0); TimeSpan diff = DateTime.Now.ToUniversalTime() - origin; uint tokenExpirationTime = Convert.ToUInt32(diff.TotalSeconds) + 20 * 60; string stringToSign = HttpUtility.UrlEncode(uri) + "n" + tokenExpirationTime; HMACSHA256 hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key)); string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign))); string token = String.Format(CultureInfo.InvariantCulture, "SharedAccessSignature sr={0}&sig={1}&se={2}&skn={3}", HttpUtility.UrlEncode(uri), HttpUtility.UrlEncode(signature), tokenExpirationTime, keyName); return token; } This example follows the steps available at this SAS Authentication with Service Bus article. Namely: – use the time offset from UTC time January 1st, 1970 in seconds to set when the SAS should expire – create the string to be signed using the URI and the expiry time – sign that string via HMACSHA256 and the key for the policy we’re basing our signature on – base64 encode the signature – create the fully signed URL with the appropriate parameters With the steps clearly laid out, its just a matter of converting this into the language of your choice. Be it javascript, objective c, php, ruby… it doesn’t really matter as long as you can perform these same steps. In the future, its my sincere hope that we’ll actually see something in the Azure Service Bus portal that will make this even easier. Perhaps even a callable API that could be leveraged. But what about “Connection Strings” This is something I’ve had debates about. If you look at most of the Service Bus examples out there, they all use a connection string. I’m not sure why this is except that it seems simpler because you don’t have to generate the SAS. The reality is that the connection string you get from the portal works much like a SAS, except that is lacks an expiry. The only way to revoke a connection string is by revoking the policy on which its based. This seems fine, until you realize you only get a handful of policies per entity to creating hundreds of policies to be used by customers is a tricky proposition. So what are you to do when you want to use a SAS, but all the examples use a connection string? Lets start by looking at a connection string example. First with the connection string. Endpoint=sb:///;SharedAccessKeyName=;SharedAccessKey= This string has several parameters: sb – the protocol to be used. In this case its ‘sb’ which is Service Bus shorthand for “use AMQP”. namespace – the URL we’re trying to access SharedAccessRuleName – the policy we’re using SharedAccessKey – the policy’s secret key The common approach is to put this string into a configuration setting and with the .NET SDK, load it as follows… EventHubClient client = EventHubClient.Create(ConfigurationManager.AppSettings["eventHubName"]); or string connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString"); var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString); These are common examples you’ll see using the connection string. But what if you want to use the SAS instead… For that, we go up a level to the Service Bus MessagingFactory. Both of the examples above abstract away this factory. But we can still reach back and use it. We start by creating the URI we want to access, and a SAS token object: Uri runtimeUri = ServiceBusEnvironment.CreateServiceUri("sb", , string.Empty); TokenProvider runtimeToken = TokenProvider.CreateSharedAccessSignatureTokenProvider()); Alternatively, we can create the token provider using a policy and its secrete key. But here we want to use a SAS token. Now its just a matter of using the message factory to create a client object… QueueClient sendClient = mf.CreateQueueClient(qPath); Simple as that. A couple quick line changes and we’ve gone from a dependency on connection strings (ick!), to using SAS tokens. So back to SAS vs ACS So even with all this in mind, there’s one argument that gets brought up. The SAS tokens expire. Yes, they do. But so do ACS and nearly all other claims based tokens. The only real difference is that most of the “common” security mechanisms support a way of renewing the tokens. Be this asking the user to log in again, or some underlying bits which store the identity/key and use them to request new tokens when the old is about to expire. The reality is that this same pattern needs to be implemented when you’re doing a SAS token. Given what I’ve shown you above, you can now stand up your own simple token service that accepts a couple parameters (identity/key), authenticates them, selects the appropriate policy and URL for the provided identity, and then creates and returns the appropriate SAS. The client then implements the same types of patterns we can already see for things like mobile notification services. Namely to store tokens locally until they’re about to expire. When they are approaching their expiry, reach back out using our credentials and ask for an updated token. Finally, use the token we been provided to perform the required operations. All that’s really left up to you is to determine the expiry for the token. You can have one set value for all tokens. You may also opt to have tokens for more sensitive operations expire faster. You have that flexibility. So until next time, enjoy SAS’s. Just please…. don’t be afraid of them. Hi Brent ! Very good article … just to be clear for other readers … On the following line : uint tokenExpirationTime = Convert.ToUInt32(diff.TotalSeconds) + 20 * 60; You are assuming a token who has 20 minutes as duration (20 * 60 seconds) starting from token request. Last thing ,,, here string stringToSign = HttpUtility.UrlEncode(uri) + “n” + tokenExpirationTime; we have to use “\n” instead of “n”. Paolo Patierno Yes. Thank you for pointing those out! How long we can set the expiry of a SAS token? I know ACS maximum of 24 hours but can we have SAS Tokens for much longer periods. I’m not aware of a exact limit. I have used SAS tokens that were more than 24hrs out, with the longest I’ve personal used being several weeks out. Very nice article. But want to know is there a way of creating SAS token using https post. Esp. the way we create token using tokenProvider.GetTokenAsync(“ResourceStringURL”, “POST”, true, NoofDaysValid). So there must be a http way retrieving the token.
https://brentdacodemonkey.wordpress.com/2015/02/21/sas-its-just-another-token/
CC-MAIN-2019-09
refinedweb
2,074
57.47
Registered users can ask their own questions, contribute to discussions, and be part of the Community! Registered users can ask their own questions, contribute to discussions, and be part of the Community! I already posted this question earlier , but perhaps it wasn't clear, so I'll try to be more precise here. Custom python trigger code uses t.fire() to trigger the scenario when the condition is met. What is the equivalent of that t.fire() in the SQL query change trigger? Thank you from the UI, Administration > Maintenance > Logs is the way. Otherwise, they're in the run/ folder of your DSS data directory. Hi, a "sql query change" trigger initiates a run of the scenario if it detects a change in the data returned by the query, be it in the number of rows returned or the values returned. Typically, such a trigger is used with a query that aggregates a table, like computing a row count, or a max of a timestamp column. @fchataigner2 thank you for your reply. I don't like creating duplicate entries, but here is what I posted in the first one (link above): I have two triggers: a SQL query change trigger and a python Custom trigger. They both check the same table dataiku_poc.CMR_CAMP_COPY Here is what I have in the SQL query change trigger select count(*) from dataiku_poc.CMR_CAMP_COPY; Here is what I have in the python Custom Trigger import dataiku from dataiku import pandasutils as pdu import pandas as pd from dataiku.scenario import Trigger mydataset = dataiku.Dataset("CMR_CAMP_COPY") mydataset_df = mydataset.get_dataframe() p = dataiku.Project() variables = p.get_variables() CMR_count = int(variables["local"]["CMR_count"]) t = Trigger() new_count = len(mydataset_df) if new_count != CMR_count: variables["local"]["CMR_count"] = new_count p.set_variables(variables) t.fire() Both triggers have Run every 10 seconds, Grace period 0 seconds. Every time I delete or insert rows in the table the python Custom trigger is triggered and the SQL never triggers. What I am doing wrong? Is there a way to troubleshoot it? Thank you the setup of the sql trigger looks fine, so maybe 1) check that the query runs in a notebook and returns results and 2) check the backend.log of the instance for exceptions in case some arise in the context of ActiveTriggerLifecycleThread threads (there is no proper debug of these triggers). You can also check the scenario's charts to see if the only triggers firing are the python ones I did make sure the SQL runs in the notebook, and I've been constantly checking Last Runs to see if the SQL trigger triggered. Where do I find backend.log? from the UI, Administration > Maintenance > Logs is the way. Otherwise, they're in the run/ folder of your DSS data directory.
https://community.dataiku.com/t5/Using-Dataiku/Scenario-SQL-query-change-trigger/td-p/13943
CC-MAIN-2022-40
refinedweb
460
66.23
Key setll file Key reade file dow not eq %eof if condition Update endif key reade file enddoIn this snippet if the record is not being updated then file read is going to EOF. If a record is updated then the reade is causing to read the same record which is being updated again. Could you please help me in solving this problem? Thanks for the help in advance. Thanks, Soundariya Kumaran Discuss This Question: 14  Replies dow not eq %eof I'm not familiar with the eq in this example. I've always seen dow not %eof or dow not %eof( filename) OK, then use the 2nd part of my answer: Before you do the update, save all the fields in a file to a DS. Then on a reade, compare the input data to the data in the DS, if it is the same, go get the next record. I would save the key values of each field as you read it. Before the secondary READE, use a setgt based on old key values. Doing SETGT may not work as there can be multiple child records with the same key and you would not process them all, Whatever you do .. it seems like if you change a key value to a higher value and it's still in the subset (of READE values) you're going to read it again.... I would read all records in the set .. writing them without change to a work file. Then loop through the work file .. chain into the PF to get the correct record and update it then. If you would rather, you could substitute an array or multi-occurring data structure for the work file. I ran into a similar issue recently. I switched the loop to do the initial SETLL using the full key and READE using a partial key (one less field). After each update, do a SETGT using the full key and READE using a partial key. I hate that structure but it got rid of the loop.
https://itknowledgeexchange.techtarget.com/itanswers/update-causing-reade-read-record-rpgle/
CC-MAIN-2019-47
refinedweb
341
88.26
TestEvents Assert and Check xUnit++ has a robust suite of test assert methods. While most C++ testing frameworks use preprocessor macros to implement only a few checks, xUnit++ provides two class instances with many built-in methods. And since these tests are not implemented with macros, you get much nicer feedback while editing (assuming your editor can provide such detail) and compiling. There are three ways to assert test events: Assert, Check, and Warn. With one exception, they offer the same check methods. Assert will halt the test immediately if any check fails, Check will log the failure but continue, and Warn will not cause the test to fail by itself, but the "error" will be logged. Warn.Fail(); // warning message logged, test is not a failure (yet) Check.Fail(); // failure is logged, test is marked as failing, test execution continues Assert.Fail(); // stops the test here Check.Fail(); // test never executes this line The Methods As stated before, the test objects share the same check methods, with one exception: Assert offers Throws while Check and Warn do not. Containsasserts that a container contains some value. Overloads exist for raw strings and std::string. ContainsPredis an alternative form of Containswhich takes a predicate instead of a value. DoesNotContainasserts that a container does not contain some value. Overloads exist for raw strings and std::string. DoesNotContainPredis an alternative form of DoesNotContainwhich takes a predicate instead of a value. DoesNotThrowasserts that the given code does not throw any exceptions. Anything that resolves to the equivalent of void (*)()will work. Equaltests object values for equality. There are overloads for raw strings, std::string, floating point types (with precision, instead of tolerance), and iterator ranges. Emptywill assert that the container object is empty. TContainer::empty()will be used if it exists, otherwise the container will be converted to a range using std::beginand std::end. Failwill automatically fail the test. Falsewill fail the test if the supplied parameter resolves to true. InRangechecks that the given value fits within the range [ min, max). NotEmptyis the opposite of Empty. NotEqualis the opposite of Equal. NotInRangechecks that the given value does not fit within the range [ min, max). NotNullchecks that the value is not equal to nullptr. NotSameverifies that the supplied objects are not the same object instance. Nullchecks that the value is equal to nullptr. Samechecks that the supplied objects are the same object instance. Throwsis an Assert-only member that checks that the supplied code throws a specific exception. If it succeeds, it will return the exception to your test. auto ex = Assert.Throws<std::exception>([]() { throw std::runtime_error(""); }); Printing Values Some methods may try to print the values of the objects using to_string with argument-dependent lookup (Koenig lookup). To take advantage of this, implement a to_string function within your object's namespace. namespace NS { class A {}; std::string to_string(const A &a) { return "A"; } } If a corresponding to_string can't be found, xUnit++ falls back to printing the object's type with typeid(obj).name(). Custom Messages If you want to add a custom message to failing tests, use the overloaded operator <<. Assert.Fail() << "This is an example message " << some_value; File and Line Info Normally, when a test fails the test runner will report the file and line number for the test itself. This is typically sufficient for most tests, as tests should really only assert one thing at a time. However, if you want to be specific about which check failed, each check method accepts an optional xUnitpp::LineInfo object. The easiest way to do this is to pass the LI macro as the final parameter to the test. Assert.Equal(0, 1, LI) << "0 is never equal to 1!"; Extra Logging If you need more logging output within the tests, use the Log object. Three levels of logging are implemented: Debug, Information, and Warning: Log.Debug << "This is a debug-level message: " << some_value; Log.Info << "This is an info-level message: " << some_value; Log.Warn << "This is a warning-level message, and will automatically mark the running test has having a warning status. " << some_value; The log levels will normally print the file and line of the test (like the check methods), but they also accept the LI macro: Log.Info(LI) << "This message will include the exact line number."; Updated
https://bitbucket.org/moswald/xunit/wiki/TestEvents.wiki
CC-MAIN-2014-15
refinedweb
718
57.87
Investors in Nucor Corp. (Symbol: NUE) saw new options become available today, for the July 5th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the NUE options chain for the new July 5th contracts and identified one put and one call contract of particular interest. The put contract at the $47.00 strike price has a current bid of 16 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $47.00, but will also collect the premium, putting the cost basis of the shares at $46.84 (before broker commissions). To an investor already interested in purchasing shares of NUE, that could represent an attractive alternative to paying $51.43/share today. Because the $47.34% return on the cash commitment, or 2.89% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Nucor Corp., and highlighting in green where the $47.00 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $57.00 strike price has a current bid of 4 cents. If an investor was to purchase shares of NUE stock at the current price level of $51.43/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $57.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.91% if the stock gets called away at the July 5th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if NUE shares really soar, which is why looking at the trailing twelve month trading history for Nucor Corp., as well as studying the business fundamentals becomes important. Below is a chart showing NUE's trailing twelve month trading history, with the $57.00 strike highlighted in red: Considering the fact that the $57.00 88%..08% boost of extra return to the investor, or 0.66% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 69%, while the implied volatility in the call contract example is 38%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $51.43) to be 25%..
https://www.nasdaq.com/articles/july-5th-options-now-available-nucor-2019-05-23
CC-MAIN-2020-34
refinedweb
419
66.13
Import outlook express contacts to outlook import outlook express to outlook import outlook express data to outlook import outlook express dbx files into outlook import eml outlook express outlook 2007 Import outlook express files to outlook 2010 Outlook 2010 Import Outlook Express import nsf files to dbase thunderbird lotus mail nsf file import Filemaker Import Lotus Notes nsf mac import lotus nsf mail to thunderbird Outlook Lotus Notes nsf Connector lotus notes nsf file to outlook express Are you unable to Import NSF to Outlook? Don''t take tension just take NSF to PST conversion tool to get back their entire Lotus Notes data. NSF to PST data conversion utility is most reliable and useful tool which smartly shift multiple NSF file to PST file. and convert only specific NSF items which one you selected into new PST format. Using this user can straightforwardly transfer Lotus Notes attachments and inline images into Outlook. . import nsf to outlook , nsf file conversion to pst , nsf to pst conversion , nsf to pst data conversion , multiple nsf file to pst 4 version of Export Notes removes all the queries of how to import NSF file into Outlook 2007. Easily without any discomfort import NSF into Outlook. Import Notes NSF files into Outlook all the versions of Lotus Notes and Outlook. Use its filtration option that allows you to import NSF into Outlook folder wise or date. . how to import nsf file into outlook 2007 , import nsf into outlook , lotus notes to outlook , nsf file into outlook , notes to outlook , import notes nsf files into If you using Lotus Notes email client for communication and need to import NSF to Outlook 2010 then grab our expert designed Export Notes software which convert Lotus Notes to Outlook 2007 without any data deletion. Software convert bulk NSF into PST ANSI and Unicode PST format. NSF conversion tool is one of the best tool which easy to understand and convert unlimited data. Basic aim of the software is to Migrate Lotus Notes to Outlook. Software is one of the creations of our research result that helps users in Import NSF to Outlook and Convert Lotus Notes to PST. . import nsf to outlook 2010 , import lotus notes mail to outlook , convert lotus notes to outlook 2007 , nsf conversion , nsf to pst , migrate lotus notes to outlook , convert lotus notes to pst , lotus notes to outlook convert Are you finding best NSF email 2013 converter? Download NSF file converter 2013 tool that will helps you to recover NSF 2013 file and import NSF into Outlook 2013 quickly. It is the safe tool that permits you to export NSF file to Outlook 2013 without any trouble. Without any damages all consumers can successfully get fast recovery with NSF to PST migration 2013 file within seconds. Import NSF file to Outlook. import nsf file to outlook 2013 , import nsf to outlook 2013 , import nsf file into outlook 2013 , import nsf into outlook 2013. . Lotus notes database export , convert pst to nsf , lotus notes migration tool , import nsf to outlook , nsf to pst conversion Are you working on IBM Lotus Notes email client and want to transfer Lotus Notes to Outlook format. There are many options to import NSF to Outlook but we recommended you one of the best Online NSF to PST converter tool which is design by hi-technical method. This utility converts all emails. task etc from Lotus Notes environment to Outlook format. If user wants to convert limit less data then use our powerful NSF to PST tool. Import NSF to Outlook software has easy to use those users who do not have technical knowledge. . online nsf to pst converter , import nsf to outlook , nsf to pst software , transfer lotus notes to outlook , nsf to pst converter So how can a person waste his/her time for Export from Lotus Notes to PST Outlook? Use cost effective & reliable solution for Export from Lotus Notes to PST. SysTools Export Notes software works like time saver and fulfills all your desire of Import NSF into Outlook. Lotus Notes in Outlook utility helps you in many type of situation like job change. email platform migration decision etc. Newest version of Export Notes converts encrypted NSF file also. Quickly Import NSF into Outlook and support NSF files created in v8. . Thinking of How to Open Lotus Notes Database in Outlook? SysTools Export Notes software helps you in Import Lotus Notes Emails into Outlook & also convert emails. text) etc from Lotus Notes to Outlook. Using Lotus Notes to PST Converter you can smoothly Import NSF into Outlook and Move Lotus Notes to Outlook. Software supports flawless NSF to PST Conversion result and supports All the versions of Lotus Notes and Microsoft Outlook. . who wants to change their NSF data to MS outlook because here is the software which gives best results of data conversion. Now converting Lotus Notes emails to Outlook is not complicated with the help of external software. If user of Lotus notes willing to change their NSF data into MS Outlook need not to wonder here and there in search of any convert tool for NSF to PST. They can use Export Notes file migrator to import NSF to Outlook PST file. Notes user can import NSF file in a very short time period with very ease. Download or purchase this software from our official website and get avails of this conversion tool which is capable of import NSF File to Outlook in few steps. By using this tool a user is able to exclude/include many folders which are not very important for his/her. . Converting lotus notes emails to outlook , Import nsf files to outlook , Import nsf to outlook pst file , Convert tool for nsf to pst Download best NSF to PST file converter software which is a perfect solution available at eSoftTools and designed by experts for user’s sake so that they can easily convert NSF file to Outlook PST. EML and MSG format. After converting Lotus Notes to Outlook PST file users can split PST file if they find their PST file files heavy in size. Software is designed with simple user interface that makes convenient for users to export Lotus Notes data completely into Outlook PST file. With the use of this master application successfully import NSF emails. journals etc. NSF to PST converter tool effortlessly moves all emails and email attachments to PST file effortlessly. Features * Convert Lotus Notes to Outlook * Export NSF to PST. MSG and EML * Move contacts from NSF to CSV file * Simple user interface to effortlessly import NSF to Outlook * Migrate Lotus Notes to PST and after conversion split PST * Software runs gracefully on all Lotus Notes versions and Outlook versions Free download Lotus Notes to Outlook converter software to preview complete functionality of this application before purchasing it. . nsf to pst file converter , convert nsf file to pst , convert nsf file to outlook , nsf to pst converter For conversion of Lotus Notes mailbox in Outlook PST file. acquire the best Lotus Notes export utility. This is the finest solution available to export Lotus Notes database into PST file in just few seconds. Available Lotus Notes to Outlook export utility is efficient to migrate every single data from Lotus Notes mailbox to PST and restore entire data in exact format. It supports all Outlook versions and enables users to import NSF to Outlook upto 2016. Software features * Lotus Notes mailbox recovery * Convert NSF to PST within few seconds * Export emails from NSF to multiple files * Import mailbox to PST with exact format * Easy to use for effortless NSF mailbox conversion * Software works with excellence to import Lotus Notes data * Safe and secure method for exporting Lotus Notes to PST file * Use this application if you want to export Lotus Notes to Office 365 Demo is available. export lotus notes database , export lotus notes to outlook , lotus notes export , notes to outlook MS Outlook & Outlook Express etc for communication purpose. Regarding too many causes they are switching their email platforms from Lotus Notes to MS Outlook. If you are one of them and have desire to Export Data from Lotus Notes to Outlook. Then Choosing SysTools Export Notes Software is ultimate option for you that sufficiently Export Data from Lotus Notes into Outlook. It transfer whole Lotus Notes database such as emails. attachment) etc to MS Outlook (Ansi as well as Unicode) PST format. Lotus Converter support to convert Numerous NSF file into PST. NSF Conversion tool fruitfully Import NSF in Outlook; Any of the Lotus Notes v8. . Safely Migrate Notes Archive or other database into Outlook format by Lotus Notes NSF Converter. User does not face any difficulties to Import NSF into Outlook. Lotus Notes NSF Converter helps user to Migrate Notes Archive to Outlook having with entire attributes such as emails (to. to-do list. Some of the prominent characteristics we listed below which makes NSF to PST Conversion easy and simple:- 1. It makes 2 PST file if NSF data cross 20 GB space of Outlook Export Notes software converts Encrypted NSF files into PST & easily convert image as attachment into PST. . lotus notes nsf converter , migrate notes archive to outlook , export notes , export lotus notes , lotus notes migration , convert nsf to pst , import nsf into outlook Lotus Notes creates its file in NSF format. MS Outlook email application creates its file in PST format. If it becomes important for user to convert Lotus Notes Mail into Outlook as a result of job change etc. When became necessary to Export Lotus to PST. select SysTools Export Notes that is paramount NSF to PST Conversion program. It converts whole Lotus Notes element like emails. task etc to Microsoft Outlook. NSF to PST Migration successfully do the task of Import NSF into Outlook & convert Lotus Notes Mail into Outlook created in Any of the Lotus Notes v8. . Open Lotus Notes NSF in Outlook file with robust email conversion third party tool which now available into new edition 9. Fly all worries related to Lotus Notes export NSF file in Outlook process. Import NSF in Outlook tool has convert unlimited database like mailbox items (send/receive items. calendars entries etc from Lotus Notes to PST file format. 0) and Outlook v97. 2010 edition. Some file is in encrypted format so user can move this data into PST file format with online third party tool. Open Lotus Notes NSF in Outlook format as limited mode after applying free of cost demo edition. . open lotus notes nsf in outlook , import nsf in outlook , convert lotus notes archive to outlook pst , lotus notes export nsf file in Outlook Export Notes Data into Outlook format having safe migration results and no any internal data loses through Notes to PST Migration Tool. As the day passing this utility gaining popularity in NSF Conversion task through their awesome performance. must take the advantage of this opportunity and get proper NSF Conversion results. Our Company provides a reliable and securable Export Data from Lotus Notes program which converts ALL items such as emails. journals etc in PST files unlike NSF files in Notes. If you want to perform Import NSF into Outlook with batch conversion then does not worry and Export Data from Lotus Notes support the Batch Conversion means it can Export Notes Data to PST with UNLIMITED Lotus Notes files. We know that Outlook is the best alternative for Lotus Notes but for this you should have one of the Lotus Notes version like 8. 0 or Outlook versions like 2010. export notes data , export data from lotus notes , notes into outlook , import nsf into outlook , lotus notes in outlook email , lotus notes calendar export , nsf conversion , notes to pst migration , migrate lotus notes to outlook , nsf to pst Export Mail Data from Lotus Notes to Outlook format through the use of NSF to PST Tool in effortless manner; it gives you 100% assured results to Export Mail Database containing whole items . nsf format to . Export Notes is sufficiently faster utility to Import NSF to Outlook without giving any error messages. Then we inform you that NSF to PST Free Tool easily to handles Lotus Notes Archive to Outlook task. Well equipped Lotus Archive Viewer have various qualities as it easily Export Mail Database with all of its components like Emails. To-do list to Outlook with subfolders. Encrypted files are smoothly transfer to another email clients and Manage the Group of Folders and Sub folders structure. For removing doubts Free Download the Trail version of Export Notes utility that has limit to Export 16 mail items of NSF files to PST files. . export mail data from lotus notes , notes to outlook conversion trial , export mail database , export lotus notes mailbox , import nsf to outlook , nsf to pst free tool , lotus archive viewer , lotus notes archive to outlook , lotus conversion Third Party software is proficient to switching from Lotus Notes to Outlook. Using this software. users can normally transfer files from Lotus Notes to Outlook in short span of time. Switching from Lotus Notes to Outlook is often required in an organization. but now we developed a software which perform Lotus Notes in PST conversion professional. Export data from Lotus Notes to Outlook software is also available in free mode and it can be installed on All Windows systems which is the plus point of this application. This helps you to import NSF into Outlook different versions of 32 bit. You may select particular data to import Lotus Notes NSF into Outlook by using filter option the permit you to choose data from Calendar. switching from lotus notes to outlook , transfer files from lotus notes to outlook , export data from lotus notes to outlook , import nsf into outlook , lotus notes in pst , import lotus notes nsf to outlook Download NSF to PST migration to recover NSF file and migrate NSF file to PST file in easy manner. Using this NSF converter to PST all users can nicely recover lotus notes file mailbox and convert lotus notes file to Outlook format. Superb way to know how to recover NSF emails and convert NSF file to PST file instantly. With the help of Lotus Notes to Outlook converter all users can speedily import lotus notes to Outlook file quickly. To know how recover NSF mailbox and convert NSF to PST file. nsf converter to pst , convert nsf file to pst , convert lotus notes to outlook , nsf to pst conversion , nsf converter Filter: All / Freeware only / Title OS: Mac / Mobile / Linux Sort by: Download / Rating / Update
http://freedownloadsapps.com/s/import-nsf-into-outlook/
CC-MAIN-2018-30
refinedweb
2,427
58.92
We have a set of shared, static content that we serve up between our websites at. Unfortunately, this content is not currently load balanced at all -- it's served from a single server. If that server has problems, all the sites that rely on it are effectively down because the shared resources are essential shared javascript libraries and images. We are looking at ways to load balance the static content on this server, to avoid the single server dependency. I realize that round-robin DNS is, at best, a low end (some might even say ghetto) solution, but I can't help wondering -- is round robin DNS a "good enough" solution for basic load balancing of static content? There is some discussion of this in the [dns] [load-balancing] tags, and I've read through some great posts on the topic. I am aware of the common downsides of DNS load balancing through multiple round-robin A records: But, is round robin DNS good enough as a starter, better than nothing, "while we research and implement better alternatives" form of load balancing for our static content? Or is DNS round robin pretty much worthless under any circumstances? Jeff, I disagree, load balancing does not imply redundancy, it's quite the opposite in fact. The more servers you have, the more likely you'll have a failure at a given instant. That's why redundancy IS mandatory when doing load balancing, but unfortunately there are a lot of solutions which only provide load balancing without performing any health check, resulting in a less reliable service. DNS roundrobin is excellent to increase capacity, by distributing the load across multiple points (potentially geographically distributed). But it does not provide fail-over. You must first describe what type of failure you are trying to cover. A server failure must be covered locally using a standard IP address takeover mechanism (VRRP, CARP, ...). A switch failure is covered by resilient links on the server to two switches. A WAN link failure can be covered by a multi-link setup between you and your provider, using either a routing protocol or a layer2 solution (eg: multi-link PPP). A site failure should be covered by BGP : your IP addresses are replicated over multiple sites and you announce them to the net only where they are available. From your question, it seems that you only need to provide a server fail-over solution, which is the easiest solution since it does not involve any hardware nor contract with any ISP. You just have to setup the appropriate software on your server for that, and it's by far the cheapest and most reliable solution. You asked "what if an haproxy machine fails ?". It's the same. All people I know who use haproxy for load balancing and high availability have two machines and run either ucarp, keepalived or heartbeat on them to ensure that one of them is always available. Hoping this helps! As load-balancing, it's ghetto but more-or-less effective. If you had one server that was falling over from the load, and wanted to spread it to multiple servers, that might be a good reason to do this, at least temporarily. There are a number of valid criticisms of round-robin DNS as load "balancing," and I wouldn't recommend doing it for that other than as a short-term band-aid. But you say your primary motivation is to avoid a single-server dependency. Without some automated way of taking dead servers out of rotation, it's not very valuable as a way of preventing downtime. (With an automated way of pulling servers from rotation and a short TTL, it becomes ghetto failover. Manually, it's not even that.) If one of your two round-robined servers goes down, then 50% of your customers will get a failure. This is better than 100% failure with only one server, but almost any other solution that did real failover would be better than this. If the probability of failure of one server is N, with two servers your probability is 2N. Without automated, fast failover, this scheme increases the probability that some of your users will experience failure. If you plan to take the dead server out of rotation manually, you're limited by the speed with which you can do that and the DNS TTL. What if the server dies at 4 AM? The best part of true failover is getting to sleep through the night. You already use HAProxy, so you should be familiar with it. I strongly suggest using it, as HAProxy is designed for exactly this situation. The best part of true failover is getting to sleep through the night. Round robin DNS is not what people think it is. As an author of DNS server software (namely, BIND) we get users who wonder why their round robin stops working as planned. They don't understand that even with a TTL of 0 seconds there will be some amount of caching out there, since some caches put a minimum time (often 30-300 seconds) no matter what. Also, while your AUTH servers may do round robin, there is no guarantee the ones you care about -- the caches your users speak to -- will. In short, round robin doesn't guarantee any ordering from the client's point of view, only what your auth servers provide to a cache. If you want real failover, DNS is but one step. It's not a bad idea to list more than one IP address for two different clusters, but I'd use other technology there (such as simple anycast) to do the actual load balancing. I personally despise hardware load balancing hardware which mucks with DNS as it usually gets it wrong. And don't forget DNSSEC is coming, so if you do choose something in this area ask your vendor what happens when you sign your zone. I've said it several times before, and I'll say it again - if resiliency is the problem then DNS tricks are not the answer. The best HA systems will allow your clients to keep using the exact same IP address for every request. This is the only way to ensure that clients don't even notice the failure. So the fundamental rule is that true resilience requires IP routing level trickery. Use a load-balancer appliance, or OSPF "equal cost multi-path", or even VRRP. DNS on the other hand is an addressing technology. It exists solely to map from one namespace to another. It was not designed to permit very short term dynamic changes to that mapping, and hence when you try to make such changes many clients will either not notice them, or at best will take a long time to notice them. I would also say that since load isn't a problem for you, that you might just as well have another server ready to run as a hot standby. If you use dumb round-robin you have to proactively change your DNS records when something breaks, so you might just as well proactively flip the hot standby server into action and not change your DNS. Windows Vista & Windows 7 implement client support for round robin differently as they backported the IPv6 address selection to IPv4. (RFC 3484) So, if you have significant numbers of Vista, Windows 7, and Windows 2008 users, you're likely going to find behavior inconsistent to your planned thinking in your ersatz load balancing solution. I've read through all answers and one thing I didn't see is that most modern web browsers will try one of the alternative IP addresses if a server is not responding. If I remember correctly then Chrome will even try multiple IP addresses and continue with the server that responds first. So in my opinion DNS Round Robin Load balancing is always better then nothing. BTW: I see DNS Round Robin more as simple load distribution solution. I'm late to this thread, so my answer will probably just hover alone at the bottom, neglected, sniff sniff. First off, the right answer to the question is not to answer the question, but to say: NLB is mature, well suited to the task, and pretty easy to set up. Cloud solutions come with their own pros and cons, which are outside the scope of this question. Question is round robin DNS good enough as a starter, better than nothing, "while we research and implement better alternatives" form of load balancing for our static content? is round robin DNS good enough as a starter, better than nothing, "while we research and implement better alternatives" form of load balancing for our static content? Between, say, 2 or 3 static web servers? Yes, it is better than nothing, because there are DNS providers who will integrate DNS Round Robin with server health checks, and will temporarily remove dead servers from the DNS records. So in this way you get decent load distribution and some high availability; and it all takes less than 5 minutes to set up. But the caveats outlined by others in this thread do apply: Other solutions HAProxy is fantastic, but since Stack Overflow is on the Microsoft technology stack, maybe using the Microsoft load balancing & high availability tools will have less admin overhead. Network Load Balancing takes care of one part of the problem, and Microsoft actually has a L7 HTTP reverse proxy / load balancer now. I have never used ARR myself, but given that its on its second major release, and coming from Microsoft, I assume it has been tested well enough. It has easy to understand docs, here is one on how they see distribution of static and dynamic content on webnodes, and here is a piece on how to use ARR with NLB to achieve both load distribution and high availability. I do not think it is a good enough solution because let's say you have two servers now and you round robin using DNS to each server's IP address. When one server goes down, the DNS servers have no knowledge that it went down and will continue to serve that IP address, as part of the RR process. Then 50% of your audience will get a broken site missing javascript or images. Perhaps it is easier to point to a common IP address that is handled by Windows NLB representing two servers behind. Unless you are using a Linux server for your static content, if i remember reading that somewhere? It has very marginal use, enough to get you by while you put a real solution in place. Like you say, the TTL's have to be set quite low. This has the side benefit, though, of pulling out a problematic machine from DNS while it's having issues. Say you have SvrA, SvrB and SvrC handing out your content and SvrA goes down. You pull it out of DNS and after the short time period defined by your low TTL, resolvers will figure out a different server (SvrB or SvrC) that are up. You get SvrA back online and put it back into DNS. A short downtime for some folks, none for others. Not great, but workable. The more static servers you put in the mix the less likely you will be to have majority groups of users down. You certainly will not get the true balanced distribution that a real load balancing solution will provide due to the topology of the Internet. I'd still watch the load on all of the servers involved. Round-robin load balancing only works when you are also in control of the DNS Zone so that you can change the list of servers and push it to the zone masters in a timely manner. As mentioned in one of the other answers, the hidden evil of round-robin is DNS caching which can happen anywhere between your servers and the client which completely negate the small benefit of this solution. Even with DNS TTL set to a very low value you have little control over how long ISP's or even the client's DNS cache will keep the now-dead IP address active. It's an improvement over a SPOF for sure, but only marginal. I would take a look at who ever is hosting your server and see what they have to offer, many have some sort of basic load balancer service they can provide. You may as well have a single server with the static content duplicated in S3 and switch to the S3 CNAME when your primary goes down. You will end up with the same delay but without the multiple server cost. This really depends on what you're talking about and how many servers you're rotating through. I once had a site that ran on several servers, and I used DNS round robin on that due to mainly my novice at the time, and it really wasn't a big issue. It wasn't a big issue because it didn't crash. It was a really stupid non-complicated system, so it held up, and had a pretty constant traffic level. If it did crash from traffic, it was during the day and something I could easily take care of. I'd say your static content qualifies as simple enough to not cause crashes on its own. Outside of hardware failure etc., how stable has your server been? How "spikey" is your traffic on this content? Assuming straight up Apache or something and relatively flat traffic, it's not going to crash a lot, and I would say round-robin is "good enough". I'm sure I'll get down voted because I'm not preaching a 100% HA solution, but that's not what you asked for. It comes down to what you're willing to accept as a solution vs. effort spent. If you were using RR DNS for load balancing, it would be fine, but you aren't. You're using it to enable a redundant server, in which case it is not fine. As a previous post said, you need something to detect heartbeat and stop hitting it until it comes back. The good news is heartbeat is available really cheaply, either in switches or in Windows. Dunno about other OSs but I assume it's there as well. I suggest that you assign an additional IP address to each of your servers (in addition to the static IP that you use for, say, ssh), and you take that into the DNS pool. And then you use some software to switch around these IP addresses in case a server fails. Heartbeat or CARP can do that, for example, but there are other solutions out there. This has the advantage that for the clients of your service, nothing has to change in the setup, and you don't have to worry about DNS caching or TTL, but you can still take advantage of the DNS round-robin "load balancing". It'll probably do the job, especially if you can have multiple IPs on your static boxes. have one "serve static content" IP and one "manage machine" IP. If a box then goes down, you can either use an existing HA solution or manual intervention to bring the IP from the failed machine up on either one of the other "cluster members" or a completely new machine (depending on how fast it would be to get that up and running). However, such a solution will have some small issues. The load balancing will not be anywhere close to perfect and if you're relying on manual intervention you may have outages for some visitors. A hardware load balancer can probably do a better job of both sharing the load and providing "cluster uptime" than DNS round-robin will. On the flip side, that is one (or two, since ideally you have teh LBs in a HA cluster) pieces of hardware that will need buying, power and cooling and (possibly) some time to get acquainted with (if you do not already have dedicated load balancers). To succinctly answer the question (is round robin DNS good enough as a starter, better than nothing, "while we research and implement better alternatives" form of load balancing for our static content?), I would say that it is better than nothing, but you should definitely continue to research other forms of load balancing. When researching Windows Load Balancing several years ago, I saw a document that stated that Microsoft's web farm was configured as multiple load-balancing groups, with DNS round robin between them. Since you can have multiple DNS servers responding in each namespace, and since Microsoft's load balancing is self-healing, this provides both redundancy and load balancing. Downside: you need at least 4 servers (2 servers x 2 groups). Answering Jeff's comment on Schof's answer, is there a way to DNS round-robin between HAProxy servers? By posting your answer, you agree to the privacy policy and terms of service. asked 5 years ago viewed 36028 times active 1 year ago
http://serverfault.com/questions/101053/is-round-robin-dns-good-enough-for-load-balancing-static-content/101055
CC-MAIN-2015-32
refinedweb
2,876
67.69
Suppose. We strongly recommend avoiding adding a new library to a project. Please don’t get it wrong. We are, we have seen quite a lot of problems caused by a large number of third-party libraries. We will probably enumerate only some of the issues, but this list should already provoke some thoughts: - Adding new libraries promptly increases the project size. In our era of fast Internet and large SSD drives, this is not a big problem, of course. But, it’s rather unpleasant when the download time from the version control system turns into 10 minutes instead of 1. - Even if you use just 1% of the library capabilities, it is usually included in the project as a whole. As a result, if the libraries are used in the form of compiled modules (for example, DLL), the distribution size grows very fast. If you use the library as source code, then the compile time significantly increases. - Infrastructure connected with the compilation of the project becomes more complicated. Some libraries require additional components. A simple example: we need Python for building. As a result, in some time you’ll need to have a lot of additional programs to build a project. So the probability that something will fail increases. It’s hard to explain, you need to experience it. In big projects something fails all the time, and you have to put a lot of effort into making everything work and compile. - If you care about vulnerabilities, you must regularly update third-party libraries. It would be of interest to violators, to study the code libraries to search for vulnerabilities. Firstly, many libraries are open-source, and secondly, having found a weak point in one of the libraries, you can get a master exploit to many applications where the library is used. - One the libraries may suddenly change the license type. Firstly, you have to keep that in mind, and track the changes. Secondly, it’s unclear what to do if that happens. For example, once, a very widely used library softfloat moved to BSD from a personal agreement. - You will have troubles upgrading to a new version of the compiler. There will definitely be a few libraries that won’t be ready to adapt for a new compiler, you’ll have to wait, or make your own corrections in the library. - You will have problems when moving to a different compiler. For example, you are using Visual C++, and want to use Intel C++. There will surely be a couple of libraries where something is wrong. - You will have problems moving to a different platform. Not necessarily even a totally different platform. Let’s say, you’ll decide to port a Win32 application to Win64. You will have the same problems. Most likely, several libraries won’t be ready for this, and you’ll wonder what to do with them. It is especially unpleasant when the library is lying dormant somewhere, and is no longer developing. - Sooner or later, if you use lots of C libraries, where the types aren’t stored in namespace, you’ll start having name clashes. This causes compilation errors, or hidden errors. For example, a wrong enum constant can be used instead of the one you’ve intended to use. - If your project uses a lot of libraries, adding another one won’t seem harmful. We can draw an analogy with the broken windows theory. But consequently, the growth of the project turns into uncontrolled chaos. - And there could be a lot of other downsides in adding new libraries, which I’m probably not aware of. But in any case, additional libraries increase the complexity of project support. Some issues can occur in a fragment where they were least expected to. Again, we should emphasize; we us give you an example from our own practice. In the process of developing the PVS-Studio analyzer, we needed to use simple regular expressions in a couple of diagnostics. In general, we are convinced that static analysis isn’t the right place for regular expressions. one developer was reading a book “Beautiful Code” (ISBN 9780596510046). This book is about simple and elegant solutions. And there he came across an extremely simple implementation of regular expressions. Just a few dozen strings. And that’s it! We we are not talking about several months, we have happily used it for more than five years. This case really convinced us we: - Have a look if the API of your system, or one of the already used libraries has a required functionality. It’s a good idea to investigate this question. - If you plan to use a small piece of functionality from the library, then it makes sense to implement it yourself. The argument to add a library “just in case” is no good. Almost certainly, this library won’t be used much in the future. Programmers sometimes want to have universality that is actually not needed. - If there are several libraries to resolve your task, choose the simplest one, which meets your needs. As I have stated before, get rid of the idea “it’s a cool library – let’s take it just in case” - Before adding a new library, sit back and think. Maybe even take a break, get some coffee, discuss it with your colleagues. Perhaps you’ll realise that you can solve the problem in a completely different way, without using third-party libraries. Written by Andrey Karpov. This error was found with PVS-Studio static analysis tool. 2 thoughts on “Avoid adding a new library to the project” Thank you for the post! I have learned a lot about the tit-bits of why not to use a new lib to project. Great! We are glad, it helped you! LikeLiked by 1 person
https://hownot2code.com/2016/07/10/avoid-adding-a-new-library-to-the-project/?replytocom=19
CC-MAIN-2019-39
refinedweb
968
65.93
A Windows Form is a tool for building a Windows application. The .NET Framework offers extensive support for Windows application development, the centerpiece of which is the Windows Forms framework. Not surprisingly, Windows Forms use the metaphor of a form. This idea was borrowed from the wildly successful Visual Basic (VB) environment and supports Rapid Application Development (RAD). Arguably, C# is the first development environment to marry the RAD tools of Visual Basic with the object-oriented and high-performance characteristics of a C-family language. Visual Studio .NET provides a rich set of drag-and-drop tools for working with Windows Forms. It is possible to build a Windows application without using the Visual Studio Integrated Development Environment (IDE), but it is far more painful and takes a lot longer. However, just to prove the point, you'll use Notepad to create a simple Windows Form application that displays text in a window and implements a Cancel button. The application display is shown in Figure 13-1. You start by adding a using statement for the Windows Forms namespace: using System.Windows.Forms; The key to creating a Windows Form application is to derive your form from System.Windows.Forms.Form: . public class HandDrawnClass : Form The Form object represents any window displayed in your application. You can use the Form class to create standard windows, as well as floating windows, tools, dialog boxes, and so forth. Microsoft apparently chose to call this a form rather than a window to emphasize that most windows now have an interactive component that includes controls for interacting with users. All the Windows widgets you'll need (labels, buttons, listboxes, etc.) are found within the Windows.Forms namespace. In the IDE, you'll be able to drag and drop these objects onto a designer, but for now you'll declare them right in your program code. To get started, declare the two widgets you need, a label to hold the Hello World text, and a button to exit the application: private System.Windows.Forms.Label lblOutput; private System.Windows.Forms.Button btnCancel; You're now ready to instantiate these objects, which takes place in the Form's constructor: this.lblOutput = new System.Windows.Forms.Label( ); this.btnCancel = new System.Windows.Forms.Button( ); Next you can set the Form's title text to Hello World: this.Text = "Hello World"; Set the label's location, text, and size: lblOutput.Location = new System.Drawing.Point (16, 24); lblOutput.Text = "Hello World!"; lblOutput.Size = new System.Drawing.Size (216, 24); The location is expressed as a System.Drawing.Point object, whose constructor takes a horizontal and vertical position. The size is set with a Size object, whose constructor takes a pair of integers that represent the width and height of the object. Next, do the same for the button object, setting its location, size, and text: btnCancel.Location = new System.Drawing.Point (150,200); btnCancel.Size = new System.Drawing.Size (112, 32); btnCancel.Text = "&Cancel"; The button also needs an event handler. As described in Chapter 12, events (in this case the cancel button-click event) are implemented using delegates. The publishing class (Button) defines a delegate (System.EventHandler) that the subscribing class (your form) must implement. The delegated method can have any name but must return void and take two parameters, an object (sender) and a SystemEventArgs object (typically named e): protected void btnCancel_Click ( object sender, System.EventArgs e) { //... } Register your event-handler method in two steps. First, create a new System.EventHandler delegate, passing in the name of your method as a parameter: new System.EventHandler (this.btnCancel_Click); Then add that delegate to the button's click event-handler list with the += operator. The following line combines these steps into one: btnCancel.Click += new System.EventHandler (this.btnCancel_Click); Now you must set up the form's dimensions. The form property AutoScaleBaseSize sets the base size used at display time to compute the scaling factor for the form. The ClientSize property sets the size of the form's client area, which is the size of the form excluding borders and titlebar. (When you use the designer, these values are provided for you interactively.) this.AutoScaleBaseSize = new System.Drawing.Size (5, 13); this.ClientSize = new System.Drawing.Size (300, 300); Finally, remember to add the widgets to the form: this.Controls.Add (this.btnCancel); this.Controls.Add (this.lblOutput); Having registered the event handler, you must supply the implementation. For this example, clicking Cancel will exit the application, using the static method Exit( ) of the Application class: protected void btnCancel_Click ( object sender, System.EventArgs e) { Application.Exit ( ); } That's it; you just need an entry point to invoke the constructor on the form: public static void Main( ) { Application.Run(new HandDrawnClass( )); } The complete source is shown in Example 13-1. When you run this application, the window is opened and the text is displayed. Pressing Cancel closes the application. using System; using System.Windows.Forms; namespace ProgCSharp { public class HandDrawnClass : Form { // a label to display Hello World private System.Windows.Forms.Label lblOutput; // a cancel button private System.Windows.Forms.Button btnCancel; public HandDrawnClass( ) { // create the objects this.lblOutput = new System.Windows.Forms.Label ( ); this.btnCancel = new System.Windows.Forms.Button ( ); // set the form's title this.Text = "Hello World"; // set up the output label lblOutput.Location = new System.Drawing.Point (16, 24); lblOutput.Text = "Hello World!"; lblOutput.Size = new System.Drawing.Size (216, 24); // set up the cancel button btnCancel.Location = new System.Drawing.Point (150,200); btnCancel.Size = new System.Drawing.Size (112, 32); btnCancel.Text = "&Cancel"; // set up the event handler btnCancel.Click += new System.EventHandler (this.btnCancel_Click); // Add the controls and set the client area this.AutoScaleBaseSize = new System.Drawing.Size (5, 13); this.ClientSize = new System.Drawing.Size (300, 300); this.Controls.Add (this.btnCancel); this.Controls.Add (this.lblOutput); } // handle the cancel event protected void btnCancel_Click ( object sender, System.EventArgs e) { Application.Exit( ); } // Run the app public static void Main( ) { Application.Run(new HandDrawnClass( )); } } } Although hand coding is always great fun, it is also a lot of work, and the result in the previous example is not as elegant as most programmers would expect. The Visual Studio IDE provides a design tool for Windows Forms that is much easier to use. To begin work on a new Windows application, first open Visual Studio and choose New Project. In the New Project window, create a new C# Windows application, and name it ProgCSharpWindowsForm, as shown in Figure 13-2. Visual Studio responds by creating a Windows Form application and, best of all, putting you into a design environment, as shown in Figure 13-3. The Design window displays a blank Windows Form (Form1). A Toolbox window is also available, with a selection of Windows widgets and controls. If the Toolbox is not displayed, try clicking the word "Toolbox," or selecting View Toolbox on the Visual Studio menu. You can also use the keyboard shortcut Ctrl-Alt-X to display the Toolbox. With the Toolbox displayed, you can drag a label and a button directly onto the form, as shown in Figure 13-3. Before proceeding, take a look around. The Toolbox is filled with controls that you can add to your Windows Form application. In the upper-right corner, you should see the Solution Explorer, a window that displays all the files in your projects. In the lower-right corner is the Properties window, which displays all the properties of the currently selected item. In Figure 13-4, the label (label1) is selected, and the Properties window displays its properties. You can use the Properties window to set the static properties of the various controls. For example, to add text to label1, you can type the words "Hello World" into the box to the right of its Text property. If you want to change the font for the lettering in the HelloWorld label, click the Font property shown in the lower-right corner of Figure 13-5. (You can provide text in the same way for your buttonbutton1by selecting it in the Property window and typing the word "Cancel" into its Text property.) Any one of these steps is much easier than modifying these properties in code (though that is certainly still possible). Once you have the form laid out the way you want, all that remains is to create an event handler for the Cancel button. Double-clicking the Cancel button will create the event handler, register it, and put you on the code-behind page (the page that holds the source code for this form), in which you can enter the event-handling logic, as shown in Figure 13-6. The cursor is already in place; you have only to enter the one line of code: Application.Exit( ); Visual Studio .NET generates all the code necessary to create and initialize the components. The complete source code is shown in Example 13-2, including the one line of code you provided (shown in bold in this example) to handle the Cancel button-click event. using System; using System.Drawing; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; namespace ProgCSharpWindowsForm { /// <summary> /// Summary description for Form1. /// </summary> public class Form1 : System.Windows.Forms.Form { private System.Windows.Forms.Label lblOutput; private System.Windows.Forms.Button btnCancel; /// .lblOutput = new System.Windows.Forms.Label( ); this.btnCancel = new System.Windows.Forms.Button( ); this.SuspendLayout( ); // // lblOutput // this.lblOutput.Font = new System.Drawing.Font("Arial", 15.75F, System.Drawing.FontStyle.Bold, System.Drawing.GraphicsUnit.Point, ((System.Byte)(0))); this.lblOutput.Location = new System.Drawing.Point(24, 16); this.lblOutput.Name = "lblOutput"; this.lblOutput.Size = new System.Drawing.Size(136, 48); this.lblOutput.TabIndex = 0; this.lblOutput.Text = "Hello World"; // // btnCancel // this.btnCancel.Location = new System.Drawing.Point(192, 208); this.btnCancel.Name = "btnCancel"; this.btnCancel.TabIndex = 1; this.btnCancel.Text = "Cancel"; this.btnCancel.Click += new System.EventHandler( this.btnCancel_Click); // // Form1 // this.AutoScaleBaseSize = new System.Drawing.Size(5, 13); this.ClientSize = new System.Drawing.Size(292, 273); this.Controls.AddRange(new System.Windows.Forms.Control[] { this.btnCancel, this.lblOutput}); this.Name = "Form1"; this.Text = "Form1"; this.ResumeLayout(false); } #endregion /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main( ) { Application.Run(new Form1( )); } private void btnCancel_Click(object sender, System.EventArgs e) { Application.Exit( ); } } } There is quite a bit of code in this listing that didn't appear in Example 13-1, though most of it is not terribly important. When Visual Studio creates the application, it must add some boilerplate code that is not essential for this simple application. A careful examination reveals that the essentials are the same, but there are some key differences worth examining. The listing starts with special comment marks: /// <summary> /// Summary description for Form1. /// </summary> These marks are used for creating documentation; they are explained in detail later in this chapter. The form derives from System.Windows.Forms.Form, as did our earlier example. The widgets are defined as in the previous example: public class Form1 : System.Windows.Forms.Form { private System.Windows.Forms.Label lblOutput; private System.Windows.Forms.Button btnCancel; The designer creates a private container variable for its own use: private System.ComponentModel.Container components = null; In this and in every Windows Form application generated by Visual Studio .NET, the constructor calls a private method, InitializeComponent( ). This is used to define and set the properties of all the controls. The properties are set based on the values you've chosen (or on the default values you've left alone) in the designer. The InitializeComponent( ) method is marked with a comment that you should not modify the contents of this method: making changes to this method might confuse the designer. This program will behave exactly as your earlier handcrafted application did.
http://etutorials.org/Programming/Programming+C.Sharp/Part+II+Programming+with+C/Chapter+13.+Building+Windows+Applications/13.1+Creating+a+Simple+Windows+Form/
CC-MAIN-2018-05
refinedweb
1,966
52.76
In today’s Programming Praxis, our goal is to find the base of the three-sided pyramid that has 169179692512835000 spheres in it. Let’s get started, shall we? A quick import: import Data.List The tetrahedral numbers are based on the triangular numbers, so let’s start with those. triangular :: [Integer] triangular = scanl1 (+) [1..] The tetrahedral numbers are formed in much the same way as the triangular ones. tetrahedral :: [Integer] tetrahedral = scanl1 (+) triangular All that’s left to do is to is to find the base of the pyramid. main :: IO () main = print . maybe 0 succ $ findIndex (== 169179692512835000) tetrahedral Tags: bonsai, code, Haskell, kata, numbers, praxis, programming, tetrahedral, triangular
http://bonsaicode.wordpress.com/2011/09/13/programming-praxis-tetrahedral-numbers/
CC-MAIN-2014-15
refinedweb
110
59.7
When pushing using a matching refspec or a pattern refspec, each ref in the local repository must be paired with a ref advertised by the remote server. This is accomplished by using the refspec to transform the name of the local ref into the name it should have in the remote repository, and then performing a linear search through the list of remote refs to see if the remote ref was advertised by the remote system. Advertising Each of these lookups has O(n) complexity and makes match_push_refs() be an O(m*n) operation, where m is the number of local refs and n is the number of remote refs. If there are many refs 100,000+, then this ref matching can take a significant amount of time. Let's prepare an index of the remote refs to allow searching in O(log n) time and reduce the complexity of match_push_refs() to O(m log n). We prepare the index lazily so that it is only created when necessary. So, there should be no impact when _not_ using a matching or pattern refspec, i.e. when pushing using only explicit refspecs. Dry-run push of a repository with 121,913 local and remote refs: before after real 1m40.582s 0m0.804s user 1m39.914s 0m0.515s sys 0m0.125s 0m0.106s The creation of the index has overhead. So, if there are very few local refs, then it could take longer to create the index than it would have taken to just perform n linear lookups into the remote ref space. Using the index should provide some improvement when the number of local refs is roughly greater than the log of the number of remote refs (i.e. m >= log n). The pathological case is when there is a single local ref and very many remote refs. Dry-run push of a repository with 121,913 remote refs and a single local ref: before after real 0m0.525s 0m0.566s user 0m0.243s 0m0.279s sys 0m0.075s 0m0.099s. Note, we refrain from using an index in the send_prune block since it is expected that the number of refs that are being pruned is more commonly much smaller than the number of local refs (i.e. m << n, and particularly m < log(n), where m is the number of refs that should be pruned and n is the number of local refs), so the overhead of creating the search index would likely exceed the benefit of using it. Signed-off-by: Brandon Casey <[email protected]> --- Here is the reroll with an updated commit message that hopefully provides a little more detail to justify this change. I removed the use of the search index in the send_prune block since I think that pruning many refs is an uncommon operation and the overhead of creating the index will more commonly exceed the benefit of using it. This version now lazily builds the search index in the first loop, so there should be no impact when pushing using explicit refspecs. e.g. pushing a change for review to Gerrit $ git push origin HEAD:refs/for/master I suspect that this is the most common form of pushing and furthermore will become the default once push.default defaults to 'current'. The remaining push cases can be distilled into the following: ref-count impact m >= log n improved with this patch m < log n regressed with this patch roughly ~6-7% So, I think what we have to consider is whether the improvement to something like 'git push --mirror' is worth the impact to an asymmetric push where the number of local refs is much smaller than the number of remote refs. I'm not sure how common the latter really is though. Gerrit does produce repositories with many refs on the remote end in the refs/changes/ namespace, but do people commonly push to Gerrit using matching or pattern refspecs? Not sure, but I'd tend to think that they don't. -Brandon remote.c | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/remote.c b/remote.c index 6f57830..8bca65a 100644 --- a/remote.c +++ b/remote.c @@ -1302,6 +1302,14 @@ static void add_missing_tags(struct ref *src, struct ref **dst, struct ref ***ds free(sent_tips.tip); } +static void prepare_ref_index(struct string_list *ref_index, struct ref *ref) +{ + for ( ; ref; ref = ref->next) + string_list_append_nodup(ref_index, ref->name)->util = ref; + + sort_string_list(ref_index); +} + /* * Given the set of refs the local repository has, the set of refs the * remote repository has, and the refspec used for push, determine @@ -1320,6 +1328,7 @@ int match_push_refs(struct ref *src, struct ref **dst, int errs; static const char *default_refspec[] = { ":", NULL }; struct ref *ref, **dst_tail = tail_ref(dst); + struct string_list dst_ref_index = STRING_LIST_INIT_NODUP; if (!nr_refspec) { nr_refspec = 1; @@ -1330,6 +1339,7 @@ int match_push_refs(struct ref *src, struct ref **dst, /* pick the remainder */ for (ref = src; ref; ref = ref->next) { + struct string_list_item *dst_item; struct ref *dst_peer; const struct refspec *pat = NULL; char *dst_name; @@ -1338,7 +1348,11 @@ int match_push_refs(struct ref *src, struct ref **dst, if (!dst_name) continue; - dst_peer = find_ref_by_name(*dst, dst_name); + if (!dst_ref_index.nr) + prepare_ref_index(&dst_ref_index, *dst); + + dst_item = string_list_lookup(&dst_ref_index, dst_name); + dst_peer = dst_item ? dst_item->util : NULL; if (dst_peer) { if (dst_peer->peer_ref) /* We're already sending something to this ref. */ @@ -1355,6 +1369,8 @@ int match_push_refs(struct ref *src, struct ref **dst, /* Create a new one and link it */ dst_peer = make_linked_ref(dst_name, &dst_tail); hashcpy(dst_peer->new_sha1, ref->new_sha1); + string_list_insert(&dst_ref_index, + dst_peer->name)->util = dst_peer; } dst_peer->peer_ref = copy_ref(ref); dst_peer->force = pat->force; @@ -1362,6 +1378,8 @@ int match_push_refs(struct ref *src, struct ref **dst, free(dst_name); } + string_list_clear(&dst_ref_index, 0); + if (flags & MATCH_REFS_FOLLOW_TAGS) add_missing_tags(src, dst, &dst_tail); -- 1.8.1.1.252.gdb33759 -- To unsubscribe from this list: send the line "unsubscribe git" in the body of a message to [email protected] More majordomo info at
https://www.mail-archive.com/[email protected]/msg31595.html
CC-MAIN-2017-30
refinedweb
976
59.43
CGI::Application::Plugin::TT - Add Template Toolkit support to CGI::Application use base qw(CGI::Application); use CGI::Application::Plugin::TT; sub myrunmode { my $self = shift; my %params = ( email => '[email protected]', menu => [ { title => 'Home', href => '/home.html' }, { title => 'Download', href => '/download.html' }, ], session_obj => $self->session, ); return $self->tt_process('template.tmpl', \%params); }. It also provides a few extra features than just the ability to load a template. This is a simple wrapper around the Template Toolkit process method. It accepts zero, one or two parameters; an optional template filename, and an optional hashref of template parameters (the template filename is optional, and will be autogenerated by a call to $self->tt_template_name if not provided). The return value will be a scalar reference to the output of the template. package My::App::Browser sub myrunmode { my $self = shift; return $self->tt_process( 'Browser/myrunmode.tmpl', { foo => 'bar' } ); } sub myrunmode2 { my $self = shift; return $self->tt_process( { foo => 'bar' } ); # will process template 'My/App/Browser/myrunmode2.tmpl' } This method can be used to customize the functionality of the CGI::Application::Plugin::TT module, and the Template Toolkit module that it wraps. The recommended place to call tt_config is as a class method in the global scope of your module (See SINGLETON SUPPORT for an explanation of why this is a good idea). If this method is called after a call to tt_process or tt_obj, then it will die with an error message. It is not a requirement to call this method, as the module will work without any configuration. However, most will find it useful to set at least a path to the location of the template files ( or you can set the path later using the tt_include_path method). our $TEMPLATE_OPTIONS = { COMPILE_DIR => '/tmp/tt_cache', DEFAULT => 'notfound.tmpl', PRE_PROCESS => 'defaults.tmpl', }; __PACKAGE__->tt_config( TEMPLATE_OPTIONS => $TEMPLATE_OPTIONS ); The following parameters are accepted: This allows you to customize how the Template object is created by providing a list of options that will be passed to the Template constructor. Please see the documentation for the Template module for the exact syntax of the parameters, or see below for an example. This allows you to provide your own method for auto-generating the template filename. It requires a reference to a function that will be passed the $self object as it's only parameter. This function will be called everytime $self->tt_process is called without providing the filename of the template to process. This can standardize the way templates are organized and structured by making the template filenames follow a predefined pattern. The default template filename generator uses the current module name, and the name of the calling function to generate a filename. This means your templates are named by a combination of the module name, and the runmode. This options allows you to specify a directory (or an array of directories) to search when this module is loaded and then compile all files found into memory. This provides a speed boost in persistant environments (mod_perl, fast-cgi) and can improve memory usage in environments that use shared memory (mod_perl). This option allows you to specify exactly which files will get compiled when using the TEMPLATE_PRECOMPILE_DIR option. You can provide it with one of 3 different variable types: A filename extension that can specify what type of files will be loaded (eg 'tmpl'). Filenames that match the regular expression will be precompiled ( eg qr/\.(tt|tmpl|html)$/ ). A code reference that will be called once for each filename and directory found, and if it returns true, the template will be precompiled (eg sub { my $file = shift; ... } ). This method will return the underlying Template Toolkit object that is used behind the scenes. It is usually not necesary to use this object directly, as you can process templates and configure the Template object through the tt_process and tt_config methods. Every call to this method will return the same object during a single request. It may be useful for debugging purposes. This method will accept a hash or hashref of parameters that will be included in the processing of every call to tt_process. It is important to note that the parameters defined using tt_params will be passed to every template that is processed during a given request cycle. Usually only one template is processed per request, but it is entirely possible to call tt_process multiple times with different templates. Everytime tt_process is called, the hashref of parameters passed to tt_process will be merged with the parameters set using the tt_params method. Parameters passed through tt_process will have precidence in case of duplicate parameters. This can be useful to add global values to your templates, for example passing the user's name automatically if they are logged in. sub cgiapp_prerun { my $self = shift; $self->tt_params(username => $ENV{REMOTE_USER}) if $ENV{REMOTE_USER}; } This method will clear all the currently stored parameters that have been set with tt_params. This is an overridable method that works in the spirit of cgiapp_prerun. The method will be called just before a template is processed, and will be passed the template filename, and a hashref of template parameters. It can be used to make last minute changes to the template, or the parameters before the template is processed. sub tt_pre_process { my ($self, $file, $vars) = @_; $vars->{user} = $ENV{REMOTE_USER}; return; } If you are using CGI::Application 4.0 or greater, you can also register this as a callback. __PACKAGE__->add_callback('tt_pre_process', sub { my ($self, $file, $vars) = @_; $vars->{user} = $ENV{REMOTE_USER}; return; }); This, like it's counterpart cgiapp_postrun, is called right after a template has been processed. It will be passed a scalar reference to the processed template. sub tt_post_process { my ($self, $htmlref) = shift; require HTML::Clean; my $h = HTML::Clean->new($htmlref); $h->strip; my $newref = $h->data; $$htmlref = $$newref; return; } If you are using CGI::Application 4.0 or greater, you can also register this as a callback (See tt_pre_process for an example of how to use it). This method will generate a template name for you based on two pieces of information: the method name of the caller, and the package name of the caller. It allows you to consistently name your templates based on a directory hierarchy and naming scheme defined by the structure of the code. This can simplify development and lead to more consistent, readable code. If you do not want the template to be named after the method that called tt_template_name, you can pass in an integer, and the method used to generate the template name will be that many levels above the caller. It defaults to zero. For example: package My::App::Browser sub dummy_call { my $self = shift; return $self->tt_template_name(1); # parent callers name } sub view { my $self = shift; my $template; $template = $self->tt_template_name; # returns 'My/App/Browser/view.tmpl' $template = $self->dummy_call; # also returns 'My/App/Browser/view.tmpl' return $self->tt_process($template, { var1 => param1 }); } To simplify things even more, tt_process automatically calls $self->tt_template_name for you if you do not pass a template name, so the above can be reduced to this: package MyApp::Example sub view { my $self = shift; return $self->tt_process({ var1 => param1 }); # process template 'MyApp/Example/view.tmpl' } Since the path is generated based on the name of the module, you could place all of your templates in the same directory as your perl modules, and then pass @INC as your INCLUDE_PATH parameter. Whether that is actually a good idea is left up to the reader. $self->tt_include_path(\@INC); This method will allow you to set the include path for the Template Toolkit object after the object has already been created. Normally you set the INCLUDE_PATH option when creating the Template Toolkit object, but sometimes it can be useful to change this value after the object has already been created. This method will allow you to do that without needing to create an entirely new Template Toolkit object. This can be especially handy when using the Singleton support mentioned below, where a Template Toolkit object may persist across many request. It is important to note that a call to tt_include_path will change the INCLUDE_PATH for all subsequent calls to this object, until tt_include_path is called again. So if you change the INCLUDE_PATH based on the user that is connecting to your site, then make sure you call tt_include_path on every request. my $root = '/var/www/'; $self->tt_include_path( [$root.$ENV{SERVER_NAME}, $root.'default'] ); When called with no parameters tt_include_path returns an arrayref containing the current INCLUDE_PATH. By default, the TT. Hello [% c.session.param('username') || 'Anonymous User' %] <a href="[% c.query.self_url %]">Reload this page</a> Another useful plugin that can use this feature is the CGI::Application::Plugin::HTMLPrototype plugin, which gives easy access to the very powerful prototype.js JavaScript library. [% c.prototype.define_javascript_functions %] <a href="#" onclick="javascript:[%). In a CGI::Application module: package My::App use CGI::Application::Plugin::TT; use base qw(CGI::Application); # configure the template object once during the init stage sub cgiapp_init { my $self = shift; # Configure the template $self->tt_config( TEMPLATE_OPTIONS => { INCLUDE_PATH => '/path/to/template/files', POST_CHOMP => 1, FILTERS => { 'currency' => sub { sprintf('$ %0.2f', @_) }, }, }, ); } sub cgiapp_prerun { my $self = shift; # Add the username to all templates if the user is logged in $self->tt_params(username => $ENV{REMOTE_USER}) if $ENV{REMOTE_USER}; } sub tt_pre_process { my $self = shift; my $template = shift; my $params = shift; # could add the username here instead if we want $params->{username} = $ENV{REMOTE_USER}) if $ENV{REMOTE_USER}; return; } sub tt_post_process { my $self = shift; my $htmlref = shift; # clean up the resulting HTML require HTML::Clean; my $h = HTML::Clean->new($htmlref); $h->strip; my $newref = $h->data; $$htmlref = $$newref; return; } sub my_runmode { my $self = shift; my %params = ( foo => 'bar', ); # return the template output return $self->tt_process('my_runmode.tmpl', \%params); } sub my_otherrunmode { my $self = shift; my %params = ( foo => 'bar', ); # Since we don't provide the name of the template to tt_process, it # will be auto-generated by a call to $self->tt_template_name, # which will result in a filename of 'Example/my_otherrunmode.tmpl'. return $self->tt_process(\%params); } Creating a Template Toolkit object can be an expensive operation if it needs to be done for every request. This startup cost increases dramatically as the number of templates you use increases. The reason for this is that when TT loads and parses a template, it generates actual perlcode to do the rendering of that template. This means that the rendering of the template is extremely fast, but the initial parsing of the templates can be inefficient. Even by using the builting caching mechanism that TT provides only writes the generated perl code to the filesystem. The next time a TT object is created, it will need to load these templates from disk, and eval the sourcecode that they contain. So to improve the efficiency of Template Toolkit, we should keep the object (and hence all the compiled templates) in memory across multiple requests. This means you only get hit with the startup cost the first time the TT object is created. All you need to do to use this module as a singleton is to call tt_config as a class method instead of as an object method. All the same parameters can be used when calling tt_config as a class method. When creating the singleton, the Template Toolkit object will be saved in the namespace of the module that created it. The singleton will also be inherited by any subclasses of this module. So in effect this is not a traditional Singleton, since an instance of a Template Toolkit object is only shared by a module and it's children. This allows you to still have different configurations for different CGI::Application modules if you require it. If you want all of your CGI::Application applications to share the same Template Toolkit object, just create a Base class that calls tt_config to configure the plugin, and have all of your applications inherit from this Base class. package My::App; use base qw(CGI::Application); use CGI::Application::Plugin::TT; My::App->tt_config( TEMPLATE_OPTIONS => { POST_CHOMP => 1, }, ); sub cgiapp_prerun { my $self = shift; # Set the INCLUDE_PATH (will change the INCLUDE_PATH for # all subsequent requests as well, until tt_include_path is called # again) my $basedir = '/path/to/template/files/', $self->tt_include_path( [$basedir.$ENV{SERVER_NAME}, $basedir.'default'] ); } sub my_runmode { my $self = shift; # Will use the same TT object across multiple request return $self->tt_process({ param1 => 'value1' }); } package My::App::Subclass; use base qw(My::App); sub my_other_runmode { my $self = shift; # Uses the TT object from the parent class (My::App) return $self->tt_process({ param2 => 'value2' }); } Cees Hek <[email protected]> Please report any bugs or feature requests to [email protected], or through the web interface at. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. Patches, questions and feedback are welcome. CGI::Application, Template, perl(1) This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
http://search.cpan.org/~ceeshek/CGI-Application-Plugin-TT-1.05/lib/CGI/Application/Plugin/TT.pm
CC-MAIN-2014-52
refinedweb
2,162
51.18
Processing Image Pixels, Color Intensity, Color Filtering, and Color Inversion Java Programming, Notes # 406 - Preface - Background Information - Preview - Discussion and Sample Code - Communication between the Programs - Run the Programs - Summary - What's Next - Complete Program Listings Preface Fourth in a series The first lesson in the series was entitled Processing Image Pixels using Java, Getting Started. The previous lesson was entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness.. A framework or driver program The lesson entitled Processing Image Pixels using Java, Getting Started provided and explained a program named ImgMod02 that makes it easy to: - Manipulate and modify the pixels that belong to an image. - Display the processed image along with the original image. The lesson entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness provided an upgraded version of that program named ImgMod02a. ImgMod02a serves as a driver that controls the execution of a second program that actually processes the pixels.The program that I will explain in this lesson runs under the control of ImgMod02a. In order to compile and run the program that I will provide in this lesson, you will need to go to the lessons entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness and Processing Image Pixels using Java, Getting Started to get copies of the program named ImgMod02a and the interface named ImgIntfc02. Purpose of this lesson The purpose of this lesson is to teach you how to write a Java program that can be used to: - Control color intensity - Apply color filtering - Apply color inversion. Sample program output I will begin this lesson by showing you three examples of the types of things that you can do with this program. I will discuss the examples very briefly here and will discuss them in more detail later in the lesson. Color intensity control Figure 1 shows an example of color intensity control. The bottom image in Figure 1 is the result of reducing the intensity of every color pixel to fifty-percent of its original value. As you can see, this basically caused the intensity of the entire image to be reduced resulting in a darker image where the colors were somewhat washed out. The user interface GUI Figure 2 shows the state of the user interface GUI that produced Figure 1. Each of the three sliders in Figure 2 controls the intensity of one of the colors red, green, and blue. The intensity of each color can be adjusted within the range from 0% to 100% of its original value. Each of the sliders in Figure 2 was adjusted to a value of 50, causing the intensity of every color in every pixel to be reduced to 50% of its original value. (Note that the check box at the top was not checked. I will explain the purpose of this checkbox later.) Color filtering Figure 3 shows an extreme example of color filtering. (I elected to provide an extreme example so that the results would be obvious.) In Figure 1, there was no modification of any color relative to any other color. (The value of every color was adjusted to 50% of its original value.) However, in Figure 3, the relative intensities of the three colors were modified relative to each other. There was no change to the color values for any of the red pixels in Figure 3. The color values for all of the green pixels were reduced to 50% of their original values. The color values for all blue pixels were reduced to zero. Thus, the color blue was completely eliminated from the output. As you can see, modifying the pixel color values in this way caused the overall color of the processed image to be more orange than the original. (Some would say that the processed image in Figure 3 is warmer than the original image in Figure 3 because it emphasizes warm colors rather than cool colors.) The user interface GUI for Figure 3 Figure 4 shows the state of the user interface GUI that produced Figure 3. The red slider in Figure 4 is positioned at 100, causing the red color values of all the pixels to remain unchanged. The green slider is positioned at 50, causing the green color values of all the pixels to be reduced to 50% of their original values. The blue slider is positioned at 0 causing the blue color values of all pixels to be reduced to 0. Once again the checkbox at the top of Figure 4 is not checked. I will explain the purpose of this checkbox in the next section. Color inversion Figure 5 shows an example of color inversion with no color filtering. (Note that it is also possible to apply a combination of color filtering and color inversion.) What is color inversion? I will have a great deal to say about color inversion later in this lesson. For now, suffice it to say that color inversion causes a change to all the colors in an image. That change is computationally economical, reversible, and usually obvious to the viewer. As you can readily see, the colors in the processed image in Figure 5 are obviously different from the colors in the original image. The user interface GUI for Figure 5 Figure 6 shows the state of the user interface GUI that produced Figure 5. The check box at the top of Figure 6 is checked, sending a message to the image-processing program to implement color inversion. Each of the sliders in Figure 6 is positioned at 100. As a result, no color filtering was applied. As mentioned earlier, however, it is possible to combine color filtering with color inversion. In fact, by using comment indicators to enable and disable different blocks of code and recompiling, the program that I will discuss later makes it possible to combine color filtering and color inversion in two different ways: - Filter first and then invert. - Invert first and then filter. The two different approaches can result in significantly different results. Display format The images shown in Figures 1, 3, and 5 were produced by the driver program named ImgMod02a. The user interface GUIs in Figures 2, 4, and 6 were produced by the program named ImgMod15. As in all of the graphic output produced by the driver program named ImgMod02a, the original image is shown at the top and the processed image is shown at the bottom. An interactive image-processing program The image-processing program named ImgMod15 illustrated by the above figures allows the user to interactively - Control the color intensity - Apply color filtering - Apply color inversion Color intensity and color filtering are controlled by adjusting the three sliders where each slider corresponds to one of the colors red, green, and blue. Color inversion is controlled by checking or not checking the check box near the top of the GUI. After making adjustments to the GUI, the user presses the Replot button shown at the bottom of Figures 1, 3, and 5 to cause the image to be reprocessed and replotted.. File formats The earlier lesson introduced and explained the concept of a pixel. In addition, the lesson provided a brief discussion of image files, and indicated that the program named ImgMod02a is compatible with gif files, jpg files, and possibly some other file formats as well.. Display of processed image results When the image-processing program completes its work, the driver program named ImgMod02a: - Receives a reference to a three-dimensional array object containing processed pixel data from the image-processing program. - Displays the original image and the processed image in a stacked display as shown in Figure 1. Reprocessing with different parameters In addition, the way in which the two programs work together makes it possible for the user to: - Provide new input data to the image-processing program. - Invoke the image-processing program again. - Create a new display showing the newly-processed image along with the original image. The manner in which all of this communication between the programs is accomplished was explained in the earlier lesson entitled Processing Image Pixels using Java, Getting Started. Will concentrate on the three-dimensional array of type int This lesson will show you how to write an image-processing program that receives raw pixel data in the form of a three-dimensional array of type int, and returns processed pixel data in the form of a three-dimensional array of type int. The program is designed to achieve the image-processing objectives described. Preview Three programs and one interface The program that I will discuss in this lesson requires the program named ImgMod02a and the interface named ImgIntfc02 for compilation and execution. I provided and explained that material in the earlier lessons entitled Processing Image Pixels using Java, Getting Started and Processing Image Pixels Using Java: Controlling Contrast and Brightness. I will present and explain a new Java program named ImgMod15 in this lesson. This program, when run under control of the program named ImgMod02a, will produce outputs similar to those shown in Figures 1, 3, and 5. (The results will be different if you use a different image file or provide different user input values.) I will also provide, (but will not explain) a simple program named ImgMod27. This program can be used to display (in 128 different panels) all of the 16,777,216 different colors that can be produced using three primary colors, each of which can take on any one of 256 values. The different colors are displayed in groups of 131,072 colors in each panel. The processImg method The program named ImgMod15, method. The processImg method must return (see Figure 1 for an example of the display format). Usage information for ImgMod02a and ImgMod15 To use the program named ImgMod02a to drive the program named ImgMod15, enter the following at the command line: java ImgMod02a ImgMod15 images in Figures 16, 17, and 18 to download and save the images used in this lesson. Then you should be able to replicate the results shown in the various figures in this lesson.) Image display format When the program is started, the original image and the processed image are displayed in a frame with the original image above the processed image. The two images are identical when the program first starts running.15 provides a GUI for user input, as shown in Figure 2. The sliders on the GUI make it possible for the user to provide different filter values for red, green, and blue each time the image-processing method is rerun. The check box near the top of the GUI makes it possible for the user to request that the colors in the image be inverted. To rerun the image-processing method with different parameters, adjust the sliders, optionally check the check box in the GUI, and then press the Replot button at the bottom of the main display. Discussion and Sample Code The program named ImgMod15 This program illustrates how to control color intensity, apply color filters, and apply color inversion to an image. The program is designed to be driven by the program named ImgMod02a. The before and after images The program places two GUIs on the screen. One GUI displays the "before" and "after" versions of an image that is subjected to color intensity control, color filtering, and color inversion. The image at the top of this GUI is the "before" image. The image at the bottom is the "after" image. An example is shown in Figure 1. The user interface GUI The other GUI provides instructions and components by which the user can control the processing of the image. An example of the user interface GUI is shown in Figure 2. A check box appears near the top of this GUI. If the user checks the check box, color inversion is performed. If the check box is not checked, no color inversion is performed. This GUI also provides three sliders that make it possible for the user to control color intensity and color filtering. Each slider controls the intensity of a single color. The intensity control ranges from 0% to 100% of the original intensity value for each color for every pixel. Controlling color intensity If all three sliders are adjusted to the same value and the replot button is pressed, the overall intensity of the image is modified with no change in the relative contribution of each color. This makes it possible to control the overall intensity of the image from very dark (black) to the maximum intensity supported by the original image. This is illustrated in Figure 1. Color filtering If the three sliders are adjusted to different values and the replot button is pressed, color filtering occurs. In this case, the intensity of each color is changed relative to the intensity of the other colors. This makes it possible, for example to adjust the "warmth" of the image by emphasizing red over blue, or to make the image "cooler" by emphasizing blue over red. This is illustrated in Figure 3. A greenscale image It is also possible to totally isolate and view the individual contributions of red, green, and blue to the overall image as illustrated in Figure 7. The values for red and blue were set to zero for all of the pixels in the processed image in Figure 7. This leaves only the differing green values for the individual pixels, producing what might be thought of as a greenscale image (in deference to the use of the term grayscale for a common class of black, gray, and white images). The user interface GUI for Figure 7 Figure 8 shows the state of the user interface GUI that produced the processed image in Figure 7. As you can see, the sliders for red and blue were set to zero causing all red and blue color values to be set to zero. The slider for green was set to 100 causing the green value for every pixel to remain the same as in the original image. The checkbox was not checked. Therefore, color inversion was not performed. Which comes first, the filter or the inversion? As written, the program applies color filtering before it applies color inversion. As you will see later, sample code is also provided that can be used to modify the program to cause it to provide color inversion before it applies color filtering. There is a significant difference in the results produced by these two approaches, and you may want to experiment with them. A practical example of color inversion As a side note, Microsoft Word and Microsoft FrontPage appear to use color inversion to change the colors in images that have been selected for editing. I will have more to say about this later. Beware of transparent images This program illustrates the modification of red, green, and blue values belonging to all the pixels in an image. It works best with an image that contains no transparent areas. The pixel modifications performed in this program have no impact on transparent pixels. Therefore, if you don't see what you expect when you process an image, it may be because your image contains transparent pixels. Will discuss in fragments I will break the program down into fragments for discussion. A complete listing of the program is provided in Listing 8 near the end of the lesson. The ImgMod15 class The ImgMod15 class begins in Listing 1. In order to be suitable for being driven by the program named ImgMod02a, this class must implement the interface named ImgIntfc02. The class extends Frame, because an object of this class is the user interface GUI shown in Listings 2, 4, 6, and 8. The code in Listing 1 declares four instance variables that will refer to the check box and the three sliders in Figure 8. The constructor for ImgMod15 The constructor is shown in its entirety in Listing 2. Because of the way that an object of the class is instantiated by ImgMod02a, the constructor is not allowed to take any parameters. Although the code in Listing 2 is rather long, all of the code in Listing 2 is straightforward if you are familiar with the construction of GUIs in Java. If you are not familiar with such constructions, you should study some of my other lessons on this topic. As mentioned earlier, you will find an index to all of my lessons at. The processImg method To be compatible with ImgMod02a,. The beginning of the processImg method is shown in Listing 3. It's best to make and modify a copy Normally the processImg method should make a copy of the incoming array and process the copy rather than modifying the original. Then the method should return a reference to the processed copy of the three-dimensional pixel array. The code in Listing 3 makes such a copy. Get the slider values The code in Listing 4 gets the current values of each of the three sliders. This information will be used to scale the red, green, and blue pixel values to new values in order to implement color intensity control and color filtering. The new color values can range from 0% to100% of the original values Process each color value The code in Listing 5 is the beginning of a for loop that is used to process each color value for every pixel. The boldface code in Listing 5 is executed for the case where the check box near the top of Figure 2 has not been checked. In this case, each color value for every pixel is multiplied by a scale factor that is determined by the position of the slider corresponding to that color. In effect, the product of the color value and the scale factor causes the processed color value to range from 0% to 100% of the original color value. Note that the code in Listing 5 is the first half of an if-else statement. Apply color inversion In the event that the color-inversion check box is checked, the boldface code in Listing 6 is executed instead of the boldface code in Listing 5. The code in Listing 6 first applies color filtering using the slider values and then applies color inversion. The formula for color inversion Recall that an individual color value can fall anywhere in the range from 0 to 255. The code in Listing 6 performs color inversion by subtracting the scaled color value from 255. Therefore, a scaled color value of 200 would be inverted into a value of 55. Likewise, a scaled color value of 55 would be inverted into a value of 200. Thus, the inversion process can be reversed simply by applying it twice in succession. Since it may not be obvious what the results of such an operation will be, I will discuss the ramifications of color inversion in some detail. An experiment Let's begin with an experiment. You will need access to either Microsoft Word or Microsoft FrontPage to perform this experiment. Get and save the image Figure 5 shows the result of performing color inversion on an image of a starfish. The original image is shown at the top of Figure 5 and the color-inverted image is shown at the bottom of Figure 5. Begin the experiment by right-clicking the mouse on the image in Figure 5 and saving the image locally on your disk. Insert the image into a Word or FrontPage document Now create a new document in either Microsoft Word or Microsoft FrontPage and type a couple of paragraphs of text into the new document. Insert the image that you saved between the paragraphs in your document. It should be the image with the tan starfish at the top and the blue starfish at the bottom. Select the image Now use your mouse and select some of the text from both paragraphs. Include the image between the paragraphs in the selection. If your system behaves like mine, the starfish at the top should turn blue and the starfish at the bottom should turn tan. In other words, the two images should be exactly the same except that their positions should be reversed. What does this mean? Whenever an image is selected in an editor program like Microsoft Word or Microsoft FrontPage, some visual change must be made to the image so that the user will know that the image has been selected. It appears that Microsoft inverts the colors in selected images in Word and FrontPage for this purpose. (Note, however, that the Netscape browser, the Netscape Composer, and the Internet Explorer browser all use a different method for indicating that an image has been selected, so this is not a universal approach.) Why use inverted colors? Color inversion is a very good way to change the colors in a selected image. The approach has several very good qualities. Computationally economical To begin with, inverting the colors is computationally economical. All that is required computationally to invert the colors is to subtract each color value from 255. This is much less demanding of computer resources than would be the case if the computation required multiplication or division, for example. Overflow is not possible Whenever you modify the color values in a pixel, you must be very careful to make sure that the new color value is within the range from 0 to 255. Otherwise, serious overflow problems can result. The inversion process guarantees that the new color value will fall within this range, so overflow is not possible. A reversible process The process is guaranteed to be reversible with no requirement to maintain any information outside the image regarding the original color values in the image. All that is required to restore the inverted color value back to the original color value is to subtract the inverted color value from 255. The original color value is restored after two successive inversions. Thus, it is easy and economical to switch back and forth between original color values and inverted color values. Given all of the above, I'm surprised that the color-inversion process isn't used by programs other than Word and FrontPage. Another example of color inversion The color values in a digitized color film negative are similar to (but not identical to) the inverse of the colors in the corresponding color film positive. Therefore, some photo processing programs begin the process of converting a digitized color film negative to a positive by inverting the colors. Additional color adjustments must usually be made after inversion to get the colors just right. You will find an interesting discussion of this process in an article entitled Converting negative film to digital pictures by Phil Williams. What will the inverted color be? Another interesting aspect of color inversion has to do with knowing what color will be produced by applying color inversion to a pixel with a given color. For this, let's look at another example shown in Figures 9 and 10. Figure 9 shows the result of applying color inversion to the pure primary colors red, green, and blue. The color bar at the top in Figure 9 shows the three primary colors. The color bar at the bottom shows the corresponding inverted colors. No color filtering was applied Figure 10 shows that no color filtering was involved. The colors shown in the bottom image of Figure 9 are solely the result of performing color inversion on the top image in Figure 9. Experimental results From Figure 9, we can conclude experimentally that applying color inversion to a pure red pixel will cause the new pixel color to be aqua. Similarly, applying color inversion to a pure green pixel will cause the new pixel color to be fuchsia. Finally, applying color conversion to a pure blue pixel will cause the new pixel color to be yellow. To summarize: - Red inverts to aqua - Green inverts to fuchsia - Blue inverts to yellow An explanation of the results Consider why the experimental results turn out the way that they do. Consider the case of the pure blue pixel. The red, green, and blue color values for that pixel are as shown below: - R = 0 - G = 0 - B = 255 Let the inverted color values be given by R', G', and B'. Looking back at the code in Listing 6 (with no color filtering applied), the color values for the pixel following the inversion will be: - R' = 255 - 0 = 255 - G' = 255 - 0 = 255 - B' = 255 - 255 = 0 The inverted color is yellow Thus we end up with a pixel having full color intensity for red and green and no intensity for blue. What do we get when we mix red and green in equal amounts? The answer is yellow. Adding equal amounts of red and green produces yellow. Hence, the inverted color for a pure blue pixel is yellow, as shown in Figure 9 and explained on the basis of the arithmetic. We could go through a similar argument to determine the colors resulting from inverting pure red and pure green. The answers, of course, would be aqua for red and fuchsia for green. A more difficult question What colors are produced by inverting pixels that are not pure red, green, or blue, but rather consist of weighted mixtures of red, green, and blue? The answer to this question requires a bit of an extrapolation on our part. First, let's establish the colors that result from mixing equal amounts of the three primary colors in pairs. - red + green = yellow (bottom right in Figure 9) - red + blue = fuchsia (bottom center in Figure 9) - green + blue = aqua (bottom left in Figure 9) A simple color wheel Now let's construct a simple color wheel. Draw a circle and mark three points on the circle at 0 degrees, 120 degrees, and 240 degrees. Label the first point red, the second point green, and the third point blue. Now mark three points on the circle half way between the three points described above. Label each of these points with the color that results from mixing equal quantities of the colors identified with that point's neighbors. For example, the point half way between red and green would be labeled yellow. The point half way between green and blue would be labeled aqua, and the point half way between blue and red would be labeled fuchsia. Look across to the opposite side Now note the color that is on the opposite side of the circle from each of the primary colors. Aqua is opposite of red. Fuchsia is opposite of green, and yellow is opposite of blue. Comparing this with the colors shown in Figure 9, we see that the color that results from inverting one of the primary colors on the circle is the color that appears on the opposite side of the color wheel. A reversible process Earlier I told you that the inversion process is reversible. For example, if we have a full-intensity yellow pixel, the color values for that pixel will be: - R = 255 - G = 255 - B = 0 If we invert the colors for that pixel, the result will be: - R' = 255 - 255 = 0 - G' = 255 - 255 = 0 - B' = 255 - 0 = 255 Thus, the color of the inverted yellow pixel is blue, which is the color that is opposite yellow on the circle. General conclusion In general, we can conclude that if we invert a pixel whose color corresponds to a color at a point on the color wheel, (such as the color wheel shown in Figure 11), the color of the inverted pixel will match the color at the corresponding point on the opposite side of the color wheel. Experimental confirmation We can demonstrate this experimentally by inverting the image of the color wheel without performing any color filtering. The result of such an inversion is shown in the bottom half of Figure 12. Once again, the original image of the color wheel is shown at the top, and the inverted image of the color wheel is shown at the bottom. As you can see in Figure 12, each of the colors in the original image moved to the opposite side of the wheel when the color wheel was inverted. Also, you can see from Figure 12 that white pixels turn into black pixels and black pixels turn into white pixels when they are inverted. You should be able to explain that by considering the color values for black and white pixels along with the inversion formula. Another exercise Another exercise might be useful. It might be possible to use the color wheel in Figure 11 to explain what happened to the colors when the starfish image was inverted in Figure 5. Pick a point on the starfish in the original image in Figure 5 and note the color of that point. Then find a point on the color wheel of Figure 11 whose color matches that point. Then find the corresponding point on the opposite side of the color wheel. The color of that point should match the color of the corresponding point on the inverted starfish image at the bottom of Figure 5. May not have found the matching point A potential problem here is that you may not be able to find a point on the color wheel that matches the color of a point on the starfish. That is because any individual pixel on the starfish can take on any one of 16,777,216 different colors. The colors shown on the color wheel are a small subset of that total and may not include the color of a specific point on the starfish. Difficulty of displaying 3-dimensional data The problem that we have here is the classic problem of trying to represent a three-dimensional entity in a two-dimensional display medium. Pixel color is a three-dimensional entity, with the dimensions being red, green, and blue. Any of the three color values belonging to a pixel can take on any one of 256 different values. It is very difficult to represent that on a flat two-dimensional screen, and a color wheel is just one of many schemes that have devised in an attempt to do so. Could display as a cube One way to represent these 16,777,216 colors is as a large cube having eight corners and six faces. Consider the large cube to be made up of 16,777,216 small cubes, each being a different color. Arrange the small cubes so as to form the large cube with 256 cubes (colors) along each edge. Thus, each face is a square with 256 small cubes along each side. Arrange the small cubes so that the colors of the cubes at the corners on one face are black, blue, green, and aqua as shown in the top half of Figure 13. Arrange the remaining cubes on that face to contain the same colors in the same order as that shown in the top half of Figure 13. (The colors in the bottom half of Figure 13 are the inverse of the colors shown in the top half.) The opposite face Arrange the small cubes such that the diagonal corners on the opposite face are set to white, yellow, red, and fuchsia as shown in the top half of Figure 14. Recall that these colors are the inverse of black, blue, green, and aqua. Arrange additional small cubes such that the colors on that face progress in an orderly manner between the colors at the corners as shown in the top half of Figure 14. Inverse colors Each of the colors in the top half of Figure 14 is the inverse of the color at the diagonally opposite location on the face shown in Figure 13. For example, the yellow hues near the bottom left corner of Figure 14 are the inverse of the blues hues near the upper right corner in Figure 13. (Also, the colors in the bottom half of Figure 14 are the inverse of the colors at the corresponding locations in the upper half of Figure 14.) Can't show all 16,777,216 colors In order for me to show you all 16,777,216 colors, I would have to display 128 panels like those shown in Figures 13 and 14. Each panel would represent two slices cut through the cube parallel to the two faces shown in Figures 13 and 14. (The top half of the panel would represent one slice and the bottom half would represent the other slice.) Each slice would represent the colors produced by combining a different value for red with all possible combinations of the values for green and blue. Obviously, it would be impractical for me to attempt to display 128 such panels in this lesson. (Because each panel shows the raw colors at the top and the inverse colors at the bottom, only 128 such panels would be required. If only the raw colors were shown in each panel, 256 panels would be required to show all 16,777,216 colors.) Two slices from inside the cube The top half of Figure 15 shows a slice through the cube for a red value of 50 combined with all possible values for green and blue. The bottom half shows a slice for the inverse red value given by (255 - 50) or 205. (Once again, the colors in the bottom half of Figure 15 are the inverse of the colors in the top half of Figure 15.) You can generate the colors yourself Since it is impractical for me to show you all 16,777,216 colors and their inverse, I am going to do the next best thing. Listing 9 contains the program named ImgMod27 that I used to produce the output shown in Figures 13, 14, and 15. You can compile this program and run it yourself for any value of red from 0 to 255. Just enter the red value as a command-line parameter. The top half of the output produced by the program displays the 65,536 colors represented by a single slice through the cube parallel to the faces shown in Figures 13 and 14. The bottom half of the output in each case represents the inverse of the colors shown in the top half. Most colors don't have names Most of the different colors don't have names, and even if they all did have names, most of us wouldn't have them all memorized. Therefore, it is impossible for me to describe in a general sense the color that will be produced by inverting a pixel having one of the 16,777,216 possible colors. Contribution of red, green, and blue By doing a little arithmetic, I can describe the inverse color numerically by indicating the contribution of red, green, and blue, but most of us would probably have difficulty seeing the color in our mind's eye even if we knew the contribution of red, green, and blue. The colors that result from some combinations of red, green, and blue are intuitive, and others are not. For example, I have no difficulty picturing that red plus blue produces fuchsia, and I have no difficulty picturing that green plus blue produces aqua. However, I am unable to picture that red plus green produces yellow. That seems completely counter-intuitive to me. I don't see anything in yellow that seems to derive from either red or green. Of course, things get even more difficult when we start thinking about mixtures of different contributions of all three of the primary colors. Back to experimentation So, that brings us back to experimentation. The program in Listing 9 can be used to produce any of the 16,777,216 colors in groups of 65,536 colors, along with the inverse of each color in the group. Perhaps you can experiment with this program to produce the color that matches a point on the starfish at the top of Figure 5. If so, the inverse color shown in your output will match the color shown in the corresponding point on the starfish at the bottom of Figure 5. And that is probably more than you ever wanted to hear about color inversion. The remaining code Now back to the main program named ImgMod15. The remaining code in the program is shown in Figure 7. (Note that the boldface code in Listing 7 is inside a comment block.) As I mentioned earlier, the boldface code in Listing 6 filters (scales) the pixel first and then inverts the pixel. In some cases, it might be useful to reverse this process by replacing the boldface code in Listing 6 with the boldface code in Listing 7. This code inverts the color of the pixel first and then applies the filter. If you filter and you also invert, the order in which you perform these two operations can be significant with respect to the outcome. The remaining code in Listing 7 signals the end of the processImg method and the end of the ImgMod15 class. Communication between the Programs In case you are interested in the details, this section describes how the program named ImgMod02a communicates with the image-processing program. If you aren't interested in this much detail, just skip to the section entitled Run the Program. Instantiate an image-processing object During execution, the program named ImgMod02aa: - Has the pixel data in the correct format - Has an image-processing object that will process those pixels and will return an array containing processed pixel values All that the ImgMod02a program needs to do at this point is to invoke the processImg method on the image-processing object passing the pixel data along with the number of rows and columns of pixels as parameters. Posting a counterfeit ActionEvent The ImgMod02a processed pixel data, which is displayed as an image below the original image as shown in Figure 1. Run the Programs I encourage you to copy, compile, and run the programs named ImgMod15 and ImgMod27 provided in this lesson. Experiment with them, making changes and observing the results of your changes. Process a variety of images Download a variety of images from the web and process those images with the program named ImgMod15. (Be careful of transparent pixels when processing images that you have downloaded from the web. Because of the quality of the data involved, you will probably get better results from jpg images than from gif images. Remember, you will also need to copy the program named ImgMod02a and the interface named ImgIntfc02 from the earlier lessons entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness and Processing Image Pixels using Java, Getting Started.) View a large number of different colors Compute and observe the colors and their inverse for various slices through the color cube as provided by the program named ImgMod27. Change the order of filtering and inversion Run some experiments to determine the difference in results for various images based on filtering before inverting and on inverting before filtering. (Of course, if you don't filter, it won't matter which approach you use.) Write an advanced filter program Write an advanced version of the program that applies color filtering by allowing you to control both the location and the width of the distribution for each of the three colors separately. You can get some ideas on how to do this from the program entitled Processing Image Pixels Using Java: Controlling Contrast and Brightness. Replicate the results To replicate the results shown in this lesson, right-click and download the jpg image files in Figures 17, 18, and 19 below. Have fun and learn Above all, have fun and use this program to learn as much as you can about manipulating images by modifying image pixels using Java. Test images Figures 17, 18, and 19 contain the jpg images that were used to produce the results shown in this lesson. You should be able to right-click on the images to download and save them locally. Then you should be able to replicate the results shown in this lesson. Figure 17 Figure 18 Figure 19 Summary In this lesson, I showed you how to write a Java program that can be used to: - Control color intensity - Apply color filtering - Apply color inversion I provided several examples of these capabilities. In addition, I explained some of the theory behind color inversion and showed you how to relate the colors on original and inverted pixels to points on a color wheel as well as pixels in a color cube. What's Next? Future lessons will show you how to write image-processing programs that implement many common special effects as well as a few that aren't so common. This will include programs to do the following: - Blur all or part of an image. - Deal with the effects of noise in an image. - Sharpen all or part of an image. - Perform edge detection on an image. - Morph one image into another image. - Rotate an image. - Change the size of an image. - Other special effects that I may dream up or discover while doing the background research for the lessons in this series. Complete Program Listings Complete listings of the programs discussed in this lesson are provided in Listings 8 and 9. A disclaimer The programs that I will provide and explain_3<<
http://www.developer.com/java/other/article.php/3512456/Processing-Image-Pixels-Color-Intensity-Color-Filtering-and-Color-Inversion.htm
CC-MAIN-2015-48
refinedweb
6,929
59.03
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. how to set default date to my date_start and date_end Hello Expert, how to set the date_end field to current day +6days, for instance the value would be date_start = 09/25/2014 then date_end = 10/01/2014. Thanks python---- from datetime import datetime from dateutil.relativedelta import relativedelta from osv import osv, fields from openerp import tools class test_product(osv.Model): _name = "test.product" _columns = { 'date_start': fields.date('Date End'), 'date_end': fields.date('Date End'), } _defaults= { 'date_start': lambda *a: time.strftime('%Y-%m-%d'), 'date_end': lambda *a: (datetime.today() + relativedelta(days=6)).strftime('%Y-%m-%d'), } XML--- <field name="date_start"/> <field name="date_end"/> Also, we can use timedelta For the manipulation of date, we can use timedelta to add the day by 5. For more info: from datetime import datetime, timedelta _defaults={ 'date_start': lambda *a:datetime.now().strftime('%Y-%m-%d'), 'date_end': lambda *a:(datetime.now() + timedelta(days=(6))).strftime('%Y-%m-%d'), } Hello, Please make sure that you've made following imports: from datetime import datetime from dateutil.relativedelta import relativedelta -than change your code to: _defaults= { 'date_start': lambda *a: time.strftime('%Y-%m-%d'), 'date_end': lambda *a: (datetime.today() + relativedelta(days=6)).strftime('%Y-%m-%d'), } regards, Hi Temur, no default value when heating create, even the date_start field, no value appeared. Please see above code modified Thanks for the response and time really appreciate your help You're missing one more import: "import time" you should either add this import, OR use "datetime.today().strftime('%Y-%m-%d')" instead of "time.strftime('%Y-%m-%d')". As you've already used "time" in your question code I didn't mention this import in my answer, I assumed that you had it already. and more correct import for osv will be: "from openerp.osv import osv" instead of: "from osv import osv,!
https://www.odoo.com/forum/help-1/question/how-to-set-default-date-to-my-date-start-and-date-end-63771
CC-MAIN-2016-50
refinedweb
335
59.09
Dealing With CORS What is CORS Let’s start, what is CORS. and why do I care. Quick over view before we get into the details of how to deal with it. CORS stands for Cross Origin Resource Sharing. (CORS) is an HTTP-header based mechanism that allows a server to indicate any other origins (domain, scheme, or port) than its own from which a browser should permit loading of resources. CORS also relies on a mechanism by which browsers make a “preflight” request to the server hosting the cross-origin resource, in order to check that the server will permit the actual request. In that preflight, the browser sends headers that indicate the HTTP method and headers that will be used in the actual request. Without making these changes this is the error you will see within the Developer Console in a browser when trying to access the API end points with the react application were going to write. In this case the end point was being called from the developers laptop where they were developing locally. Essentially your a weapons grade plutonium manufacturer in the US. and have a license to sell outside of the US. When China makes a request for 2Kg’s of plutonium you validate your ability to sell your plutonium by looking at your license and fortunately the license says ‘*’ meaning that it doesn’t matter what country wants to purchase your plutonium your allowed to sell it, probably not a great idea in the current climate. So you talk to the US. government and restrict the ability to sell the plutonium to anybody down to a few trusted country’s. This is also true of CORS we don’t want any Origin to be able to make data requests, so what well do once we have it working well restrict the CORS origin down to a specific set of Origins. In our case we want to create a React webpage that can display the data in our RDS MySQL database. The Amplify web address is ‘ and this web page want to make a request from ‘ Since both URL’s are completely different the Resources being Shared or data being request is being made Cross Origins. If we allows CORS requests from anywhere ‘*’ we don’t know who is making that request and for what reason so we want to only accept CORS requests from ‘ our Amplify website. So how do we implement any of this? Lambda Code Changes The first change we need to make is in the Lambda project, specifically within the Startup.cs file. According to Microsoft there are a few things we have to do. Caution ensure that the CORS pieces are in the same sequence as shown below during my testing I found that certain things have to be executed in a specific sequence. First we set the policy name _myAllowSpecificOrigins, the policy name is arbitrary. readonly string MyAllowSpecificOrigins = "_myAllowSpecificOrigins"; Within the ConfigureServices method, the AddCors method call adds CORS services to the app’s service container. This is where we would define specific URL paths, but for now we will just use ‘*‘. services.AddCors(options => { options.AddPolicy( name: MyAllowSpecificOrigins, builder => { builder.WithOrigins("*"); }); }); Finally, we tell the app to use CORS and then enable the _myAllowSpecificOrigins CORS policy for all controller endpoints. app.UseCors(); app.UseAuthorization(); //Not used for CORS, but is still required. app.UseEndpoints(endpoints => { endpoints.MapControllers().RequireCors(MyAllowSpecificOrigins); }); Full Code. using Microsoft.AspNetCore.Builder; using Microsoft.AspNetCore.Hosting; using Microsoft.Extensions.Configuration; using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Hosting; using MyFirstLambdaProject.Processors; using Serilog; namespace MyFirstLambdaProject { public class Startup { readonly string MyAllowSpecificOrigins = "_myAllowSpecificOrigins"; public const string AppS3BucketKey = "AppS3Bucket"; public Startup(IConfiguration configuration) { Configuration = configuration; Log.Logger = new LoggerConfiguration() .Enrich.FromLogContext() .MinimumLevel.Information() .WriteTo.Console() .CreateLogger(); } public static IConfiguration Configuration { get; private set; }>(); } public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); } app.UseHttpsRedirection(); app.UseRouting(); app.UseCors(); app.UseAuthorization(); app.UseEndpoints(endpoints => { endpoints.MapControllers().RequireCors(MyAllowSpecificOrigins); }); } } } API Gateway Changes Next well do the some what painful changes in the API Gateway, so go ahead and log into the AWS Console and navigate to the API Gateway. Within the API Gateway console select the Lambda Project. Adding Method Response 200 For each of the end points, we have to add a Method Response 200, to do this select on the Delete endpoint. Then select the Method Response link. To add the 200 Method Response select the Add Response link. Within the HTTP Status key in 200, and then Select the Check Mark to approve the change. Once completed repeat these steps for the other end points (Get, Patch, Post). Enabling CORS Next we need to enable CORS for all the end points, select on the DataModel part of the end point. To enable CORS select the Actions dropdown and then select Enable CORS from the contextual menu. Before applying CORS, we must enable ‘Default 4XX’ and ‘Default 5XX’, then select Enable CORS and Replace Existing CORS Headers. To stop accidental application of CORS headers, a warning dialogue will be displayed select Yes, Replace Existing Values in the dialogue window to continue. The first time we enable CORS we will get some errors, four of these are fine, these are the four Integration Responses. This is normal when working and deploying a Lambda function with an API Gateway. Notice that we now have an additional end point Options, and the first three errors messages are in connection with this, so we have one more thing to fix. Select the Options end point and add a 200 Method Response as we did before. Then we will again Enable CORS, this time we should only get the four Integration Response Errors. Deploying the API To update the API Gateway with our changes, we must redeploy the API. To do this select on the DataModel part of the end point. Then select Deploy API from the contextual menu. Within the Deploy API dialogue window change the Deployment Stage to ‘Prod’ and select the Deploy button, to initiate the API deployment. Testing If we now go to our webpage, and refresh it we can now see that the data loads into the React component and is displayed on the web page. However we are still using ‘*’ to define which origins are allowed to request resources. Allowed Origin End Point Back within the Starup.cs file we will replace the “*” with the end point of the Amplify website (do not put the end slash in place), then, Publish the Lambda C# Code>(); } Enable CORS on the API Gateway, this time we wont use ‘*’ we will use the Amplify website URL again without the end slash. Finally, redeploy the API, we can now ensure that only our Amplify website can request data across origins. React Code Below is the React Component Code I used to build my data table. import React, { Component} from 'react'; import axios from 'axios' import Table from "./Table"; import './Table.css'; class DataModelViewComponent extends Component { constructor(props) { super(props) this.state = { users: [], loading: true } } async getUsersData() { var config = { method: 'get', url: ' headers: { 'x-api-key': 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' } }; const res = await axios(config) console.log(res.data) this.setState({ loading: false, users: res.data }) } componentDidMount() { this.getUsersData() } render() { const columns = [ { Header: 'First Name', accessor: 'firstName', }, { Header: 'Last Name', accessor: 'lastName', }, { Header: 'Age', accessor: 'age', className: 'HeaderAge', } ] return ( <div className="App"> <Table columns={columns} data={this.state.users} /> </div> ) } } export default DataModelViewComponent;
http://www.catiawidgets.net/2021/04/17/8-dealing-with-cors/
CC-MAIN-2022-21
refinedweb
1,243
54.42
#include <SPI.h>#include <Ethernet.h>#include <SoftwareSerial.h>SoftwareSerial mySerial(2, 3);// Enter a MAC address and IP address for your controller below.// The IP address will be dependent on your local network:byte mac[] = { 0x90, 0xA2, 0xDA, 0x00, 0x5E, 0xF4 };IPAddress ip(192,168,112, 31);// Initialize the Ethernet server library// with the IP address and port you want to use // (port 80 is default for HTTP):EthernetServer server(80);void setup(){ // start the Ethernet connection and the server: Ethernet.begin(mac, ip); server.begin(); mySerial.begin(9600); mySerial.print("webserver");}void loop(){ // listen for incoming clients EthernetClient client = server.available(); if (client) { // an http request ends with a blank line boolean currentLineIsBlank = true; while (client.connected()) { if (client.available()) { char c = client.read(); mySerial.print(); }} Trying to understand what happens I added a print instruction to see where the program stops. SoftwareSerial mySerial(2, 3); mySerial.begin(9600); mySerial.print("webserver"); // output the value of each analog input pin for (int analogChannel = 0; analogChannel < 6; analogChannel++) { client.print("analog input "); client.print(analogChannel); client.print(" is "); client.print(analogRead(analogChannel)); client.println("<br />"); } or if I change the instruction to mySerial.print(client.read()); webserver135980202151723551163616664147128352011369512340numbers! What is connected to pins 2 and 3? Where is this data supposed to go? Do you have something connected to these pins? Or is this just garbage? Why can't you be bothered to separate the values being printed? I've just used the example. mySerial.print(client.read()); mySerial.print(" "); I only connect the tx y rx ports to pins 2 y 3. No. The example doesn't use mySerial and it doesn't print the value of client.read().If you are going to modify the example, feel free to do a good job. *My operating system is Ubuntu 11 and since a week I 'm using arduino 1.0 IDE. And sorry for this, because at the end It was a version of the IDE problem. Now It works!
http://forum.arduino.cc/index.php?topic=129849.msg976775
CC-MAIN-2015-35
refinedweb
331
51.85
Select a Certificate by its name in drop down By Raj0813, in AutoIt General Help and Support Recommended Posts Similar Content - By Psyllex I have a super simple login screen I'm trying to access that is written in java. My java testing tools can't access the login screen because it's a modal window. So I figured I'd see if AutoIt can manipulate 'something' on it. I can enter text within the text boxes for user name and password. But I can't see to click on the login button. I've tried just tabbing to it and hitting the enter key (as I really wouldn't have to be completely interacting with the frame). But that didn't work. I was hoping to throw it some coordinates and just double click in that relative area, but when I get the whole " ==> Subscript used on non-accessible variable.:" when I attempt to use ControlGetPos() I'm assuming because it can't truly interact with the Java frame. So I'm kind of stuck here...can't use AutoIt, can't use a Java automation testing tool to do this due to the modal issues. Does anyone have any ideas? My code is below though I think it's less to do with code and more what AutoIt can and can't do. #include <EditConstants.au3> #include <GUIConstantsEx.au3> #include <StaticConstants.au3> #include <WindowsConstants.au3> Local $hWnd = WinActivate("[CLASS:sunawtframe]", "Login") Local $aPos = ControlGetPos($hWnd, "[CLASS:SUNAWTFRAME]", "Login") Local $myXPos = $aPos[0] + 420 Local $myYPos = $aPos[1] + 270 Send("guest") Send("{TAB}") Send("guest") Send("{TAB}") ;Tried Control Click it failed ControlClick($hWnd, "", "Login") ;Tried Mouse Click and that failed MouseClick("Left", $myXPos, $myYPos, 2) Thanks for any help! - By Yash91 Hi Experts, I want to integrate AutoIT with Eclipse to write my code in java for automating the desktop base application, i have integrate jacob 1.18 and verify the dll's also but i am getting How to fix java.lang.UnsupportedClassVersionError: Unsupported major.minor version 51 issue. I am using 32 bit windows xp with java 1.6 version. Java 1.7 is unsupported in 32 bit windows xp. is there any solution for the same. -
https://www.autoitscript.com/forum/topic/195638-select-a-certificate-by-its-name-in-drop-down/
CC-MAIN-2020-16
refinedweb
371
65.93
Like this, just why? Bad enough every word in all my open documents are suggested before the obvious tag/attribute suggestion, but this one is extreme. This is caused by a plugin, it's not the default behavior OK, I have removed all my plugins and themes (except for a couple of plugins I need). It seems to have cleared up, for now. Note, however, that when I re-launched Sublime, it hung and the crashed with the attached...crash log.txt.zip (12.8 KB) Then what about completions like these, which are using words and phrases from my document? How can I set it to ONLY use language syntax for suggestions, NOT any dictionary words or numbers in my doc? This is the one last frustrating thing about using ST2. The completions in that second screenshot will also not be suggested by a vanilla install of Sublime Text. You may want to consider sublimetext.com/docs/2/revert.html This bugs the heck out of me too. I wish there was a global setting to not include this noise. I posted about this exact same thing here: @handycam. I'm 90% sure those are zencoding bugs. @ninjaroll import sublime, sublime_plugin class InhibitWordsListener(sublime_plugin.EventListener): def on_query_completions(self, view, prefix, locations): return (], sublime.INHIBIT_WORD_COMPLETIONS | sublime.INHIBIT_EXPLICIT_COMPLETIONS) Save that to your User Folder. This will disable the default completions so only plugin completions will show. I have re-set Sublime Text 2 and will not be installing Zencoding if it is so buggy. Much as I love the convenience of Zencoding, I cannot stand these random bogus autocompletes clogging up the menu. Kinda defeats the purpose of autocomplete and negates the convenience of using zencoding.
https://forum.sublimetext.com/t/i-still-dont-get-some-of-the-bizarre-autocompletes/5989/4
CC-MAIN-2016-18
refinedweb
285
58.58