text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringlengths
9
15
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
The opinions expressed herein are my own personal opinions and do not represent my employer's view in anyway. Recently, I was playing with WatiN to do some integration and web testing, and I have "discovered" the concept of session merging in the browsers. What is this ? By default, the browsers (Internet Explorer, Firefox, ...) are sharing the session information of a website (for instance the authentication information) between its tabs and different instances. What does implies in practice ? Just launch one instance of your favorite browser and log into an application (mail, ...). Then launch another instance (for instance another process) of the same browser and go to the same website. You should be logged in, even if you had not checked the magic checkbox "Remember me". Why are they doing this ? Just because many websites expect this behavior. Just imagine how surfing would behave without session merging in an application when it open popups ? Just note that all website do NOT manage their authentication in the same way, and so, some of them may not be affected by session merging. So, that's a feature ok. But when you do some web testing and that you would like to launch several browser logged in with different users to check simultaneous actions or things like that, this feature can be very very annoying. But Internet Explorer 8 has added a new feature, that allow you to surf in a new session, meaning you just deactivate explicitely the session merging. How to do that ? Simple: Let's go back to my testing session. I was manipulating the DLL "Interop.SHDocVw.dll" to drive my browser and to create a new instance of Internet Explorer, I was just using the syntax new InternetExplorerClass. So how can I use this feature using the COM component ? Well, so far, I haven't found any way to do that directly so I'm waiting for your comments ! :-) Anyway, I am using a workaround, that has been highly inspired of the source code of WatiN, just updating it to take into account the notion of session merging, and giving enough time to Internet Explorer to start. To make this compile, you will need two references: using System; using System.Diagnostics; using SHDocVw; using WatiN.Core.Native.InternetExplorer; using WatiN.Core.UtilityClasses; namespace ClassLibrary1 { public class IEFactory { /// <summary> /// Start Internet Explorer without session merging /// </summary> /// <returns>The IE process</returns> private Process StartIEProcess() { string argument = "-nomerge about:blank"; var process = Process.Start("IExplore.exe", argument); if ( process == null ) throw new InvalidOperationException("The IExplore.exe process can't be started"); return process; } /// Start a new instance of Internet Explorer and attach it to a IWebBrowser2 object /// <returns>The COM object corresponding to the running instance</returns> public IWebBrowser2 DemarreInstanceCOM() //1. Let's count the current number of IE instances int count = new ShellWindows2().Count; //2. Start the process Process process = StartIEProcess(); //3. Let's find the handle of the new running IE // We will try for a maximum of 5 seconds, with trial every 100ms // if we had not been successful var getWindowHandle = new TryFuncUntilTimeOut(5) { SleepTime = 100 }; int windowHandle = getWindowHandle.Try<int>(() => process.MainWindowHandle.ToInt32()); //4. If we got the handle, let's get the browser var allBrowsers = new ShellWindows2(); if ( windowHandle != 0 ) { foreach ( var browser in allBrowsers ) if ( ( (IWebBrowser2)browser ).HWND == windowHandle ) return browser as IWebBrowser2; } //5. Otherwise, let's take the first instance on a blank page else //for that we'll wait until a new instance of IE is running var action = new TryFuncUntilTimeOut(5) { SleepTime = 100 }; action.Try<bool>(() => new ShellWindows2().Count > count); foreach ( var browser in new ShellWindows2() ) if ( ( (IWebBrowser2)browser ).LocationURL == "about:blank" ) //6. We had not been able to start the instance throw new InvalidOperationException("Internet Explorer could not be started"); } } Here is the link to the blog I have found explaining about session merging : Beautiful Code Leading Programmers Explain How They Think By Andy Oram and Greg Wilson Summary of the editor This unique and insightful book is a collection of master classes in software design. In each chapter, today's leading programmers walk through elegant solutions to hard problems, and explain what make those solutions so appealing. This is not simply another design. My criticsm and comments This book is an interesting reading to do, but to me, a bit inegal. Some chapters are very good and interesting and let's say I was less sensitive to other ones. What i will do here, is just giving a word about some chapters I found among the ones of best added values. Chapter 2 : "Subversion's delta-editor interface as ontology" by Karl Fogel This chapter will show a concrete example of how a classical problem can be solve : tree comparison. It uses as example one core API of Subversion, written in 2000 and still unchanged until now. The main interesting part is showing how you can do the choice of implementing a restrictive API to force users to use it in a given manner. As a consequence, all the future code becomes very predictible and the current porgram is much more robust. Chapter 7 : "Beautiful tests" by Alberto Savoia I'm working in TDD for years now and this chapter just bring some testing approach to you. It starts with a very simple code example : the dichotomic search. This algorithm seem to be very simple but 12 years have been necessary to provide the first implementation proven to be bug-free, even with dealing with very large arrays. What I like in this chapter ? The explanation of the testing strategy : which test you can write and until which point you go to be almost sure your code is bug-free. Chapter 8 : "On-the-fly code generation for image processing" by Charles Petzold Charles Petzold is delivering here a very interesting chapter, showing two meanings of the word "beautiful". It starts with a simple - and elegant - code to apply raster operation on images. It works good but is not so performant. As a consequence, he shows us how can generate IL code that will do the image transformation, and above all will be fast. His strategy ? Just generate minimalistic code, that won't have any loop or predicate, limiting the numboer of access to array element, ... Chapter 14 : "How elegant code evolves with hardwares : The case of Gaussian elimination" by Jack Dongarra and Piotr Luszczek They present here the evolution of one of the core method of the system LINPACK, LAPACK and ScaLAPACK. What is very interesting is how they made evolve the code to adapt it to the successive computer architecture. Chapter 17 : "Another level of indirection" by Diomidis Spinellis A nice explanation of the implementation of the core of FreeBSD dedicated to the IO operation: how an additionnal level of indirection can elegantly support the different type of file system : FAT-32, ISO-9660, NTFS, ... and can even simply support their specificities like security checks. Chapter 23 : "Distributed programming with MapReduce" by Jeff Dean and Sanjay Ghemawat MapReduce is used internally at Google to execute processing of a huge number of data on a huge number of computers. They describe here how we can take advantage of this framework / API to let our programs being executed on a large scale of clusters, and then re-collecting the data into the main application. One of the introduction sentence is clear enough : "Suppose that you have 20 billion documents, and you want to generate a count of how often each unique word occurs in the documents. With an average document size of 20 KB, just reading through the 400 terabytes of data on one machine will take roughly four months." Chapter 31 : "Emacspeak : the complete audio desktop" by T. V. Raman He explains here how he took the Emacs system to provide him with an audio system. What was the most interesting ? The way he does. Indeed he didn't modified the original code but worked in AOP (Aspect oriented Programmation) using the advice functionality of Lisp to provide Emacs with new aspects : the ability to speak. Conclusion This book shows interesting code in various languages. Maybe some chapters will be harder to read because you are so unfamiliar with Haskell or Lisp. But anyway, go for them ! They will show interesting thoughts and may point out some functionality of the used language that will let you think "How I would do that in .NET ?". No doubt that this book will inspire you and give you many new ideas ! Go for it ! Want to find more about the books I have read ? Go to my bookshelf to find the few criticism I have already made !
http://www.pedautreppe.com/2009/07/default.aspx
CC-MAIN-2017-43
refinedweb
1,446
53.81
Personalization Personalize the Mercury starter site as you wish You can quickly personalize the Mercury starter site by: - Changing the colors in use (especially the primary brand color) - Adding your own logo - Inserting new page blocks - Adding Jumbotron slides - Showing device-specific content - Adding projects to the portfolio - Changing texts and links in the page footer You can also manage the looks of components such as the service menu or page footer via CMS variables. How to change the colors You can change the Mercury site's colors (for example, the primary brand color) by editing respective variables in the variables.less: - On your website, edit ~/Frontend/Styles/bootstrap/variables.less. - Locate and change one or more of the following variables. @gray-base: #000; @gray-darker: lighten(@gray-base, 13.5%); // #222 @gray-dark: lighten(@gray-base, 20%); // #333 @gray: lighten(@gray-base, 33.5%); // #555 @gray-light: lighten(@gray-base, 46.7%); // #777 @gray-lighter: lighten(@gray-base, 93.5%); // #eee @brand-primary: #4189ff; @brand-success: #5cb85c; @brand-info: #5bc0de; @brand-warning: #f0ad4e; @brand-danger: #d9534f;and @body-bg: #f0efed; //** Background color for `<body>`. @text-color: @gray-dark; //** Global text color on `<body>`. @link-color: @brand-primary; //** Global textual link color. @link-hover-color: darken(@link-color, 15%); //** Link hover color set via `darken()` function. @link-hover-decoration: underline; //** Link hover decoration. - Save the variables.lessfile. - Refresh the website (F5). How to change the logo The logo displayed in the top left corner of the Mercury starter site can be found at: ~/Frontend/Images/logo.png To change it: - replace the logo.png with your own file using the same file name. The maximum sizes of the logo image can be 85px (height) by 200 px (width). The logo in use is 85 x 85. How to insert page blocks To quickly add well-formatted content to a page, you can make use of page blocks. You can see them in use on the front page and other pages of the Mercury starter site. There are a few types of page blocks available on the Mercury starter site: - Appear Animation: Shows content appearing with animation - Device Specific Content: Shows different versions of content on mobile devices and desktop devices - Iconfied Box: Shows content with an icon, title, read more URL and text - Iconified Text: Shows content with an icon - Image and Text: Shows content with an image with a configurable position - Image Box: Shows content with an image - Jumbotron: Shows a statement on an image as the background, normally used on front pages - Jumbotron Slides: - Animated Content Item: Shows content with animation (appearing and disappearing effects and delay) - Cycle: Shows jumbotron slides (see "How to insert Jumbotron slides" below) - Statement: Shows a statement (normally but not necessarily one-sentence motto-like text) - Three Columns in Row: Shows content in three columns Each page block is represented by a CMS Function of the 'PageBlocks" namespace: e.g. PageBlocks.Statement. You may change its looks by applying one of the pre-defined themes where available, or apply none. Each type of the page block may have its own set of themes. To insert a page block: - From the "Content" perspective, edit a page. - Insert > Function > All Functions > PageBlocks > select a page block of the type you need, for example, Statement. - Set the parameters. For example, in the PageBlocks.Statement, you should type in the statement content and select the background style. - Save and publish the page. How to insert Jumbotron slides You can also add content split between a few page blocks that will change each other automatically - Jumbotron slides. They normally include a background image and within it content part you can also add a link or a button link to another page on the website. To add Jumbotron slides to a page, you need: - Add a "Jumbotron Slides" page folder to the page. - Add slides (images) to the page folder. - Add the rendering CMS Function "PageBlocks.JumbotronSlides.Cycle" onto the page. To add the "Jumbotron Slides" page folder to a page: - From the "Content" perspective, right-click the page where you want the slides. - Click "Add Datafolder". - In the window that pops up, select "Jumbotron Slides" from the drop-down list. - Click "Finish". To add slides: - Select "Jumbotron Slides" under the page and click "Add Data". - In the data form that opens, on the " Slide" tab fill out the fields: - Title: The title of the slide - Background Image: The image to use as the slide's background - Background Overlay: The color style for the silde's background overlay: light, dark (default), or none. - List priority: The slide's priority in the list - On the "Slide content" write the content for the slide. - Save and publish the slide. - Repeat Steps 1-4 for as many slides as you need. To add the rendering function: - Edit the page where you've just added the slides to. - Insert > Function > All Functions > PageBlocks > JumbotronSlides > Cycle. - Save and publish the page. For a static single-slide content, select the PageBlocks.Jumbotron function. Showing device-specific content You can choose what content to show on desktop and mobile devices respectively by using a dedicated CMS Function: PageBlocks.DeviceSpecificContent. - Edit the page where you want different "mobile" and "desktop" content. - Insert > Function > All Functions > PageBlocks > DeviceSpecificContent. - In the "Mobile Content" parameter, write content to be seen on mobile devices. - In the "Desktop Content" parameter, write content to be seen on desktop devices. - Save the changes. On the "Mercury" starter site, you can see this function in use within the "Frontpage Start" content placeholder on the Frontpage. How to add projects to the portfolio You can present your portfolio of projects on a dedicated page. To add the portfolio to a page, you need: - Add the "Portfolio" application to the page. - Add projects and, if needed, project categories. - Add the rendering CMS Function "Composite.Lists.Portfolio.List" onto the page. To add the "Portfolio" application to a page: - From the "Content" perspective, right-click the page where you want the slides. - Click "Add Application". - In the window that pops up, select "Portfolio" from the "Application" drop-down list. - Click "Finish". To add project categories: - Select "Categories" under the page and click "Add Category". - In the data form that opens, fill out the fields: - Title: The title of the category - Ordering: The category position in the list - Save the category. - Repeat Steps 1-3 for as many categories as you need. To add projects: - Select "Projects" under the page and click "Add Portfolio Item". - In the data form that opens, on the " Project" tab fill out the fields: - Category: The category the project belongs to. - Project Title: The title of the project - Project Description: A short description of the project - Teaser Image: The image to use as the title image of the project - Images Folder: The media folder with the images related to the project (screenshots, photos, etc) - YouTube or Vimeo URL: The URL to a video hosted on YouTube or Vimeo, relate to the project - On the " Project Info" tab fill out the fields: - Client: The of the project if any - Date: The date of the project if any - Project Place: The project's place if any - Project URL: The URL to the project's web page or website - On the "Project Description" tab, write the description for the project. - Save and publish the project. - Repeat Steps 1-5 for as many projects as you need. To add the rendering function: - Edit the page where you've just added the slides to. - Insert > Function > All Functions > Composite > Lists > Portfolio > List. - If necessary, write some introduction to the portfolio in the "Intro Text" parameter. - Save and publish the page. How to change links and text in the footer The footer on each page of the Neptune starter site contains some dummy / placeholder text as well as a few iconified links that points to URLs related to C1 CMS. You can replace the text and links by editing the content of the page footer-related template features. There are 3 template features, each standing for the left, center and right column of the footer: - Footer Column 1: The left footer column with iconified social media link buttons. - Footer Column 2: The center footer column with iconified contact information. - Footer Column 3: The right footer column with some "About the company" information with a "Read More" link. To change the information in the page footer: - From the "Layout" perspective, expand "Page Template Features". - For the social media links: - Select and edit the "Footer Column 1" template feature. - Edit the "Icon Links" function (Composite.Social.IconLinks) - Make changes in the parameters where needed. - Save your changes. - For the contact information: - Select and edit the "Footer Column 2" template feature. - Edit a corresponding "Iconified Text" page block (PageBlocks.IconifiedText). - Change the icon and the text where needed. - Save your changes. - For "About the company" information: - Select and edit the "Footer Column 3" template feature. - Change the text and links where needed. - Save your changes. C1 CMS variables In addition to the standard Bootstrap variables, which you can change to personalize the website (see "How to change the colors" above), stored in ~/Frontend/Styles/bootstrap/variables.less there are a number of C1 CMS specific variables, you can also change to adapt the website to your requirements. These variables are stored in ~/Frontend/Styles/includes/c1-variables.less. Here you can manage colors, fonts, links of the following components used on the Neptune starter site: - header - the main menu - the service menu - the page footer For example, if you want to change the default white color of the heading in the page footer: - Edit ~/Frontend/Styles/includes/c1-variables.less. - Locate the @pagefooter-heading-colorvariable under //page footer. - Change its current value of @gray-lighter(defined in variables.less) to the one you prefer. - Save the changes.
https://docs.c1.orckestra.com/Getting-started/Starter-Sites/Mercury/Personalization
CC-MAIN-2017-39
refinedweb
1,644
64.41
I Just Gave A Load of Code to Microsoft I Just Gave A Load of Code to Microsoft Announcing a new open source project for making XAML development easier. Check it out here! Join the DZone community and get the full member experience.Join For Free What's This About Giving Away Code? I had an idea for something, built a proof-of-concept that impressed and excited some people, and we decided that the best way to get it to lots of people quickly was for Microsoft to release it as an Open Source project. So, that means I've "given" the code to Microsoft on the basis that they'd make it open source and thereby giving it away to everyone. In practice, it means it's in a Microsoft owned repository but I'm an admin and main contributor. What Is This Thing, Then? Introducing the Rapid XAML Toolkit. It's a Visual Studio extension that provides functionality to accelerate XAML development. It's rare for developers to build the XAML UI for an app before having some sort of data model that the UI will represent and allow interaction with. It might be from a database, web service, or somewhere else, but if you've got some code that describes what should be there, why not let the tool create it for you? Of course, it's more than just a file. It contains suitable XAML that represents the ViewModel, wires up the bindings, and sets the data context. It's not going to be the final XAML you need but it's going to do in two clicks what could otherwise take a few minutes. Given the choice, would you rather have a blank page or something that works and you can just tweak to your needs? As an example, given a class that looks like this: public class OrderDetailsViewModel : ViewModelBase { public int OrderId { get; private set; } public Guid CustomerId{ get; set; } public DateTimeOffset OrderDate { get; set; } public string OrderNotes { get; set; } public decimal OrderTotal { get; } public ObservableCollection<OrderLineItem> Items { get; set; } } It could produce this: <StackPanel> <TextBlock Text="{x:Bind ViewModel.OrderId}" /> <TextBlock Text="{x:Bind ViewModel.CustomerId}" /> <DatePicker Date="{x:Bind ViewModel.OrderDate, Mode=TwoWay}" /> <TextBox Text="{x:Bind ViewModel.OrderNotes, Mode=TwoWay}" /> <TextBlock Text="{x:Bind ViewModel.OrderTotal}" /> <ListView ItemsSource="{x:Bind ViewModel.Items}"> <ListView.ItemTemplate> <DataTemplate x: <StackPanel> <TextBlock Text="{x:Bind OrderLineId}" /> <TextBlock Text="{x:Bind ItemId}" /> <TextBlock Text="{x:Bind ItemDescription}" /> <TextBlock Text="{x:Bind Quantity}" /> <TextBlock Text="{x:Bind UnitPrice}" /> <TextBlock Text="{x:Bind LineTotal}" /> </StackPanel> </DataTemplate> </ListView.ItemTemplate> </ListView> </StackPanel> How does it know what XAML to create? The generated XAML is based on the type and name of the property and whether it is read-only. There are some carefully chosen options provided by default but everything is configurable. In the future, it may also be powered by AI, too. How does it know where to put the created file? It's based on conventions but is fully configurable. It even supports having the View and ViewModel in different projects if that's what you prefer. That all sounds like one feature. Why call it a toolkit? Because there are plans for much more. XAML generation is just the start, but it enables lots of scenarios and opportunities. Even today, it doesn't just provide the ability to produce whole files. It also provides the ability to generate XAML without having to create the file. You can generate the XAML for an entire class, an individual property, or a selection of properties. This raises the question of where the XAML will go. We leave that up to you. The generated XAML can either be copied to the clipboard (so you can paste it wherever you wish) or sent to the clipboard (so you can drag it where you want it.) The final part of setting up the basic association between VM and View is to ensure that the data context for the page is correctly set. If it's not, then it provides the option to do this for you. Does it do X? Probably not...yet! There are lots of features planned but if you've got a suggestion, please raise an issue in the GitHub repository. How does this compare to Windows Template Studio? Windows Template Studio will help you to scaffold a UWP app. The Rapid XAML Toolkit is a sister project and can help you once you've created the stub of your app, or if you have an existing app. It doesn't have to be a UWP app either. We hope this will be helpful to Xamarin developers too. Wait, what's this about Xamarin? I thought it was a UWP thing. Nope, it's a XAML thing! While there may end up being some elements or features that are UWP only, the intention is that it will be beneficial to any developer who is working with XAML. This means Xamarin.Forms and WPF. (It will work with Silverlight too, but I'm just not providing any default configuration for it.) Really, WPF? Yes, WPF, too. Anything else? Yes, it works with both C# AND VB.Net. Really, VB.Net? Yes, because all developers deserve nice things. Plus, we think this will be particularly beneficial to developers converting WinForms and WPF apps to UWP, and we know a lot of them use VB. If they're learning UWP, they shouldn't have to learn C# as well. Where can people find out more? GitHub is the place. Go to and check out the current features. As you'd expect, I'll be sharing more details soon, but, if you're at //build, I can show you more. You'll find me at the Windows Template Studio & Community Toolkit booth. Published at DZone with permission of Matt Lacey , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/i-just-quotgavequot-a-load-of-code-to-microsoft
CC-MAIN-2019-09
refinedweb
1,004
66.33
World is getting better - definitely If You are a lazy programmer like me this will blow Your mind .... I promise. When developing software I'd like to write only the stuff that absolutely definitely needs to be written, nothing more. So I have a long standing hate-hate relationship with those verbose xml-configs and convoluted frameworks since all they represent extra work that I need to do. In the worst case such work that has no value to the actual customer (remember customer anyone ?) , but work that just needs to be done to make the wheels go round. I hate writing boilerplate code/config/definition with a passion. So Imagine my smile when I found out that I can consume the services of a modern Authorization stack (including multi factor authentication) with a simple click of a button. Doesn't sound like much does it ? Wait, let me show You ... Setting the scene I’m writing this from the viewpoint of an LOB – developer. More specifically I want to write responsive HTML5 single page applications so I’m using Bootstrap on the front end side and some jquery (optionally some knockout,angular,…) and some sort of backend to service my front ed app some REST JSON APIs. Simple enough. In this scenario my backend needs to spit out some html but mostly serve my API calls. And when doing that the backend needs to know who the user is ! The users can’t be stored on the LOB-applications own database anymore (we all know that) so how about using some modern structure like oAuth and Azure Ad and so on … That’s when everything starts getting convoluted ! The minute You start dabbling with authorization things slow down like in Matrix bullet-time and You lose all productivity and Your hair starts getting gray very rapidly. But luckily there’s a really nice rabbit hole to bypass oll this grief when operating with Azure websites: The red bar is the real superhero of this story. It intercepts all calls to my application, regardless if it is a page request or a REST call to one of my APIs It makes sure that only those people who have successfully logged in with my AD can pass. And when they do pass my application is being given a nice context describing the user … And all this without writing any code. … well almost no code , since I need to write code to acces and use the userinfo I’ve been so graciously given. Enough talk, show me the money ! For the sake of simplicity I’ll only present the skeleton portion of an LOB, next time well continue from that. Visual Studio 2013 / New Project / Cloud / ASP.NET Web application / Web API / No Authentication : So now we have a nice skeleton. Lets tweak it just a little. And hit run (F5) . Once it runs go to. Mine is: You should get some JSON back. Opening mine : ["value1","value2",""] – Exactly as it should be So now we are done developing !!! Header: using System.Security.Claims; using System.Threading; using System.Web.Configuration; Code: public IEnumerable<string> Get() { String res = ""; var claimsPrincipal = Thread.CurrentPrincipal as ClaimsPrincipal; if (claimsPrincipal != null && claimsPrincipal.Identity.IsAuthenticated) { foreach (var claim in claimsPrincipal.Claims) { res += "<div>type:" + claim.Type + ":" + claim.Value + "</div>"; } res += "<p>Logout:" + WebConfigurationManager.AppSettings["WEBSITE_AUTH_LOGOUT_PATH"] + "</p>"; } return new string[] { "value1", "value2", res }; } Let’s deploy this to azure. If You didn’t already create a publishing profile while creating the product do it now: - Stop the debug, Right click Your project name and choose “publish” - As publish target choose “Microsoft Azure Website” - If You don’t have existing websites choose new. - Either way now You have a website picked, Validate the connection - On File publish options choose : “Remove additional files” and hit publish - After few minutes Your site should come up. go to the same relative url “/api/values” - You should get the exact same response as when running locally Fine . Now for the magic ….. (drumroll ,please) …. Goto to : manage.windowsazure.com/ , find Your website and go to configure and find this: Choose Your directory and “Add Application”. That’s it. You’re done . Next take a new browser or inPrivate browsing or whatever it is called in Your browser. The idea being that You should have a browser where You have no ongoing sessions to azure. This time You are presented with a login screen and once You clear that you go to the same url Notice that the returned JSON has all kinds of interesting claims now. Amongst them are your name , email and so on … Now You know that for every api or page request the caller is going to be identified against Your Azure AD (You can manage the AD as You wish) and You are being told who the caller is. Isn't this wonderful. Adding Multi Facet login capabilities Let's get a little greedy and add some cool stuff while we're at it. Take a look at here on how to setup Your AD to support multi facet logins : I have Azure AD premium so I just enabled the feature and made one of my users to require some additional authentication. Then I logged in with that user and was presented with a request to use my mobile app for extra credentials. Sorry , this is in Finnish, but You probably can guess what is says. So the application in my mobile came up and I chose “verify” and Boooom … I got in to my little sample application . And sure enough, the returned JSON message from "/api/values" had my webusers details in it. Easy breezy multi-factor-authentication-implemented-with-no-code This is so cool . Next time we’ll start to implement the actual application itself since we now got this super cool authentication implemented in no time. There are still things to wish for. Group-support is one of them but it is coming. Read more from here: Until next time … Good read, one correction and someone pointed it out to me when I said the same thing. "If You are a LAZY programmer like me this will blow Your mind" I will say to you what they said to me "You are not lazy you are efficient." I disagree with "Guy". You are lazy, AND you're coupling your system to your hosting provider. Thanks Guy. John, You are right. This couples my app to azure. We can of course implement this in such a way that the coupling is not so tight and it would be replacable later on. I've always tried to insulate my applications from the gory details of user authentication and the rest of the app not knowing where this information really comes from but just utilizing it through nice class would of course be a step in some direction. This is basically how this functionality is packaged in part two of this article. . Hope You check that out too when it is published. In regards to John – I actually appreciate that you left out any code hiding the gory details – it makes it clear how to integrate simply with this authentication option. I can figure out how to hide that behind a configurable facade or whatever on my own, so it is easily switchable with another authentication provider if needed. And if my only plan in to host in Azure – then why complicate things (I can run a bunch of find and replace code later when I decide Azure is no longer the platform I want to host on). Not all applications are (need to be?) complex… Jason – You got it. See part two in about a month for some more discussion about the the subject of (over/under/rightsize)-engineering and the cost of development contrasted with the lifetime epctancy of an application. Nice article. I am confused why John (and you in your reply) say that you are coupling your system to your hosting provider. I cannot see anything in the code specific to Azure. You seem to be using the standard ClaimsPrincipal. What am I missing?
https://blogs.msdn.microsoft.com/petsablog/2015/02/06/authorization-as-a-servicethe-coolest-thing-for-lob/
CC-MAIN-2018-26
refinedweb
1,359
62.98
I often find myself having to decide between making a project in VC++ or Perl and having to make it one or the other, not both. Perl is wonderful for string manipulation, hashes and arrays of arbitrary objects, and DWIM (Do What I Mean) behavior. VC++ is fast, has excellent type checking and debugging, and the resulting program can be easily packaged up for other machines. Perl requires that the target machine has Perl already installed. Some operations are one or two lines in Perl and 100 or 200 lines in VC++ (and vice versa). Perl is very fast for prototyping, etc. ad nauseum. I have seen the manual pages for Perl (perlguts, perlembed, perlapi, ...) showing how easy (ha!) it is to embed Perl into C/C++, but they are almost incomprehensible to somebody who doesn't get into the guts of Perl. Almost as bad as OLE! Further, even with the code to have embedded Perl, there is still the issue of getting C++ variables into and out of that instance of Perl. Even more arcane magic is required. This led me to spend some time reading and testing Perl embedding capabilities. Virtually everything I have here comes from the Perl manual pages, particularly perlguts, perlembed, and perlapi. These are not for the faint of heart. They certainly aren't for casual use. This effort, plus a little experience in real-world applications using embedded Perl, yields the following: Update (21-Feb-2012): See also CPerlWrapSTL in the source archive for a non-MFC version, courtesy of CodeProject member SLJW (a.k.a., jwilde). This class allows you to create an instance of Perl, pass variables into and out of that instance, and run arbitrary scripts. The instance "stays alive" until explicitly destroyed, so you can run many different scripts without re-instantiating. The three major variable types in Perl are the scalar ($abc), the list (@def), and the hash (%ghi) which correspond to MFC types of CString/int/double (for scalars), CStringArray (for lists), and CMapStringToString (for hashes). For each of these, there is a get and a set function: $abc @def %ghi CString int double CStringArray CMapStringToString // These are used to create and populate arbitrary variables. // Good for setting up data to be processed by the script. // They all return TRUE if the 'set' was successful. // set scalar ($varName) to integer value BOOL setIntVal(CString varName, int value); // set scalar ($varName) to double value BOOL setFloatVal(CString varName, double value); // set scalar ($varName) to string value BOOL setStringVal(CString varName, CString value); // set array (@varName) to CStringArray value BOOL setArrayVal(CString varName, CStringArray &value); // set hash (%varName) to CMapStringToString value BOOL setHashVal(CString varName, CMapStringToString &value); // These are used to get the values of arbitrary // variables ($a, $abc, @xyx, %gwxy, etc.) // They all return TRUE if the variable was defined and set // get scalar ($varName) as an int BOOL getIntVal(CString varName, int &val); // get scalar ($varName) as a double BOOL getFloatVal(CString varName, double &val); // get scalar ($varName) as a string BOOL getStringVal(CString varName, CString &val); // get array (@varName) as a CStringArray BOOL getArrayVal(CString varName, CStringArray &values); // get hash (%varName) as a CMapStringToString BOOL getHashVal(CString varName, CMapStringToString &value); So if I have a CString that I want to do something Perlish on, for instance extracting all the words into an array of words, here is my VC++ code: // perlInst is an instance of CPerlWrap CString str("this is a verylong set of words" " that would be a pain to deal with in C++"); perlInst.setStringVal("string",str); perlInst.doScript("@b = split(/\s+/, $string);"); CStringArray words; perlInst.getArrayVal("b", words); (Yes, this could be done in C++, but it's an easy example!) Or perhaps I want to capitalize each word in that string, using the following VC++ code: // perlInst is an instance of CPerlWrap CString str("this is a verylong set of " "words that would be a pain to deal with in C++"); perlInst.setStringVal("string",str); perlInst.doScript("$string =~ s/(\w+)/\u\L$1/g;"); perlInst.getStringVal("string", str); The results: This Is A Verylong Set Of Words That Would Be A Pain To Deal With In C++ Or how about getting the first non-trivial-sized plural word and some context? // perlInst is an instance of CPerlWrap CString str("this is a verylong set of words" " that would be a pain to deal with in C++"); perlInst.setStringVal("string",str); perlInst.doScript( "$str =~ m/(\w+)\s+(\w{3,}s)\s+(\w+)/;\n" "$match = \"lead context = '$1' " "match = '$2' trail context = '$3'\";" ); CString match; perlInst.getStringVal("match", match); Which results in match containing: match lead context = 'of' match = 'words' trail context = 'that' Ah! I have your attention now! Good. The scripts needn't be one liners. CString script( "$a = \"this is a verylong set of " "words that would be a pain to deal with in C++\";\n" "$a=~ s/(\w+)/\u\L$1/g;" ); perlInst.doScript(script); perlInst.getStringVal("a",str); As it happens, this particular script doesn't really need the embedded new-line \n, but if you want the errors message to point to something other than line 1, you'll add new-lines. \n Error messages? Well, startling as it may seem, sometimes there are errors in the Perl script that you run. It never happens to me (#include <NoseGettingLonger>) of course, but I've included some support for it. Here is an example showing an error and getting access to the problem report from Perl: #include <NoseGettingLonger> // this is missing the ';' at the end of the first line CString script( "my $d = 'this is a verylong set of words'\n" "$d =~ m/(\w+)\s+(\w{3,}s)\s+(\w+)/;" ); if(!perlInst.doScript(script)) { CString errmsg = perlInst.getErrorMsg(); if(!errmsg.IsEmpty()) errmsg = getWarnings(); MessageBox(errmsg,"Script Failure"); } Which yields: Scalar found where operator expected at (eval 18) line 2, near "'this is a verylong set of words' $d" (Missing operator before $d?) By default, warnings are not considered errors and all warnings are cleared before a script is executed. But if you want to easily detect warnings and errors, you can use these two functions to tune CPerlWrap's behavior: CPerlWrap // set to TRUE if warnings cause doScript() to return FALSE BOOL SetfailOnWarning(BOOL); // set to TRUE if warnings are // cleared before executing a doScript() BOOL SetclearWarningsOnScript(BOOL); First and foremost, to build a project with CPerlWrap, you need to have Perl 5.14 (or later) installed on your build machine. It is not necessary for Perl to be installed on the target machine, but it must be on your build machine. Your target machine must have the Perl512.dll file (or Perl514.dll or whatever you built against), so don't forget to package that up with your executable! However, if you use a Perl package, then you may be better off with Perl installed on your target machine. Go to and download the free Windows Perl. The price is right. Then install it. I'll wait here until that is done. Finished? Good. Took you long enough! Next, copy PerlWrap.h and PerlWrap.cpp into your project's directory. Use Project->Add to Project->Files... (or whatever the latest Visual Studio mechanism is) to add them to your project. Don't build quite yet; there is something else that needs doing. You need to add the Include directory for the Perl CORE files: This assumes you have installed Perl into C:\Perl. If you have installed elsewhere, make the appropriate adjustment and make a similar adjustment to the top of PerlWrap.h. // Adjust this to point to the proper .lib file for Perl on your machine // Remember to package Perl514.dll along with your project when you install // onto other machines! #if !defined(NOIMPLINK) && (defined(_MSC_VER) || defined(__BORLANDC__)) # pragma comment(lib,"C:/Perl/lib/CORE/Perl514.lib") #endif Now rebuild. Check that you can browse the CPerlWrap class. If so, then it is time to do something with it! Add a member variable to the class where you are doing your work. For me, this tends to be something like CMyProjectView and I add: CMyProjectView // Implementation public: CPerlWrap perlInstance; virtual ~CCPerlWrapperView(); #ifdef _DEBUG virtual void AssertValid() const; virtual void Dump(CDumpContext& dc) const; #endif If you like (recommended), you can tune Perl's behavior: void CMyProjectView::OnInitialUpdate() CFormView::OnInitialUpdate(); GetParentFrame()->RecalcLayout(); ResizeParentToFit(); perlInstance.SetclearWarningsOnScript(TRUE); // blah, blah ... The hardest part about using CPerlWrap is the backslashes (\). If you have a string that you want evaluated (interpolated) in Perl, such as "$var1 is xyz to $var2", then that string must be surrounded by " characters and you must escape those quotes in your VC++ code: \ "$var1 is xyz to $var2" " CString script("$string = \"$var1 is xyz to $var2\";"); perlInst.doScript(script); On the other hand, if you just want to have a string that is not interpolated, then use single-quotes: CString script("$string = 'this is an uninterpolated string';"); perlInst.doScript(script); If you need to have a backslash in the script, you need to double it up so that VC++ doesn't get it. Note the \\d is to get a \d (the match-a-digit pattern) into the script: \\d \d CString script( "$string =~ m/(\\d+)/;\n" "$firstNumber = $1;" ); perlInst.doScript(script); CString firstNumber; perlInst.getIntVal("firstNumber",firstNumber); It gets really ugly if you need to insert a backslash: perlInst.doScript("$StartDir =~ s%/%\\\\%g; " "# change '/' to Windows-style '\'") For reasons that I have not been able to discover, this embedded Perl doesn't allow for sub-processes (note: this statement is from 2003; the situation may have changed by now in 2012). So Perl favorites like: open(F,"./unzip.exe -p db.zip |") or die("Cannot open pipe from unzip, $!"); @uncompressed = <F>; # suck in entire file, one line per list element close(F); just don't work! Same thing with using the backtick “`” or the system() function. Just don't work. If anybody has a fix for this, please let me know, as it has been a source of frustration for me. ` system() In Perl, the my operator is used to declare a variable in the current scope. Scope is determined, much like in VC++, by surrounding {} pairs. The doScript() function performs a Perl eval {script} (note the {} pair) and so any variable declared with my will not be available with the get* and put* functions; they are local to that instance of the eval. If you like to have use strict; in your code, then you will have to define all your "global" variables using the put* functions (which puts them into the main:: module). my {} doScript() eval {script} eval use strict; main:: One of the great advantages of Perl is the long list of available modules. These are the Perl equivalent of C/C++ libraries. Modules are included using the syntax: use CGI; use Win32; where CGI and Win32 are two such modules. These modules are usually included in the directory tree where Perl is installed. Which means that using a Perl module in CPerlWrap requires that the tree be around on the target machine. CGI Win32 If the module in question is pure Perl (no embedded C functions), then you can copy the module (CGI.pm, Win32.pm, or whatever) to the target machine and tell Perl where to find it with the use lib('some new directory'); pragma. use lib('some new directory'); But (there is always a but), if you want a module that has embedded C functions (such as, sadly, Win32), then you will have to diddle the xs_init() function (found in PerlWrap.cpp) and that is 'way beyond what I know about'. I have put some comments (gleaned from the manual pages) to get you started, but I really know nothing about it. If you need such a module, start with perlguts, perlapi, and perlembed. xs_init() Update: Recent versions of Perl have better support for this kind of thing. In fact, these two commands are your friends: # gets a list of libraries that you can reference with: # #pragma comment(lib,”libraryName.lib”) perl -MExtUtils::Embed -e ccopts -e ldopts # creates the xs_init() function with the hooks for compiled modules perl -MExtUtils::Embed -e xsinit -- -o perlxsi.c CPerlWrap will probably always be a work in progress, so I will try and update this article when I make significant changes. I suspect that the greatest source of changes will be fixes to bugs all of you have pointed out! I don't pretend to be a perlguts expert -- everything is in the Perl manual pages and all I've done is to try and wrap it up so that it is easy to use. See the disclaimers below. Your Mileage May Vary. Void where prohibited. Do not take internally. Not intended for ophthalmic use. Not intended for children under the age of 65. Do not use while sleeping. Warning: May cause drowsiness. For indoor or outdoor use only. For off-road use only. For office use only. Do not attempt to stop chain with your hands or genitals. Remember, objects in the mirror are actually behind you. This product not tested on animals. No humans were harmed or even used in the creation of this page. Not to be taken internally, literally, or seriously. Some assimilation required. Resistance is futile. This product is meant for educational purposes only. The manufacturer will not be responsible for any damages or inconvenience that may result and no claim to the contrary may legitimately be expressed or implied. Some assembly required. Use only as directed. No other warranty expressed or implied. Do not use while operating a motor vehicle or heavy equipment. May be too intense for some viewers. No user-serviceable parts inside. Subject to change without notice. Breaking seal constitutes acceptance of agreement. Contains a substantial amount of non-tobacco ingredients. Use of this product may cause a temporary discoloring of your teeth. Not responsible for direct, indirect, incidental, or consequential damages resulting from any defect, error, or failure to perform. Don't try this in your living room; these are trained professionals. Sign here without admitting guilt. Out to lunch. The author is not responsible for any mental distress caused. Use under adult supervision. Not responsible for typographical errors. Do not put the base of this ladder on frozen manure. Some of the trademarks mentioned in this product appear for identification purposes only. Objects in mirror may be closer than they appear. These statements have not been evaluated by the Food and Drug Administration. This product is not intended to diagnose, treat, cure, or prevent any disease. Not authorized for use as critical components in life support devices or systems. In the unlikely event of an emergency, participants may be liable for any rescue or evacuation costs incurred either on their behalf or as a result of their actions. In certain states, some of the above limitations may not apply to you. This supersedes all previous notices unless indicated otherwise. References on CodeProject that relate: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) // perlInst is an instance of CPerlWrap CString str("this is a verylong set of words" " that would be a pain to deal with in C++"); perlInst.setStringVal("string",str); <-- error "string" is const char not CString perlInst.doScript("@b = split(/\s+/, $string);"); <-- error CStringArray words; perlInst.getArrayVal("b", words); <-- error "b" is const char not CString. perlInst.setStringVal(_T("string"),str); General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. News on the future of C# as a language
http://www.codeproject.com/Articles/3037/CPerlWrap-A-Class-Wrapper-for-embedding-Perl-into
CC-MAIN-2014-10
refinedweb
2,621
64.71
- NAME - DESCRIPTION - CREATING A TEST - BASE METHODS - METHODS FOR LOADING MODULES - METHODS FOR RUNNING TEST - USING THE TESTS METHOD - SPECIFYING THE TESTS - ENVIRONMENT VARIABLES - HISTORY - KNOWN BUGS AND LIMITATIONS - SEE ALSO - LICENSE - AUTHOR NAME Test::Inter - framework for more readable interactive test scripts DESCRIPTION This is another framework for writing test scripts. Some of the syntax None offer the ability to write the tests in whatever format would make the tests the most readable The way I write and use test scripts, existing Test::* modules are not nearly as useful as they could be. Test scripts written using Test::More work fine when running as part of the test suite, but debugging an individual test requires extra steps, and the tests themselves are not as readable as they should be. I do most of my debugging using test scripts. When I find a bug, I write a test case for it, debug it using the test script, and then leave the test there so the bug won't come back (hopefully). Since I use test scripts in two ways (part of a standard test suite and to run the scripts in some interactive way to debug problems), I want to be able to do the following trivially: - Easy access to a specific test or tests If I'm running the test script interactively (perhaps in the debugger), there are several common functions that I want to have available, including: Run only a single test, or a subset of tests Set a breakpoint in the debugger to run up to the start of the Nth test - Better diagnostics When running a test script as part of a test suite, the pass/fail status is really the only thing of interest. You just want to know if the module passes all the tests. When running interactively, additional information may allow me to quickly track down the problem without even resorting to a debugger. If a test fails, I almost always want to see why it failed if I'm running it interactively. If reasonable, I want to see a list of what was input, what was output, and what was expected. The other feature that I wanted in a test suite is the ability to define the tests in a format that is natural and readable FOR THE TESTS. In almost every case, it is best to think of a test script as consisting of two separate parts: a script part, and a test part. The script part of a test script is the least important part! It's usually fairly trivial, rarely needs to be changed, and is not the focus of the test script. The tests part of the script IS the important part, and these should be expressed in a form that is natural to them, easy to maintain, easy to read, and easy to modify, and none of these should involve modifying the script portion of the test script in general. Because the content of the tests is the important part of the script, the emphasis should be in making them more readable, even at the expense of the script portion. As a general rule, if the script portion of the test script obscures the tests in any way, it's not written correctly! The solution to this is well understood, and is common to many other systems where you are mixing two "languages". The task of correctly specifying both the tests and the test script is virtually identical to the task of creating a PHP script which consists of a mixture of PHP and HTML, or the task of creating a template file using some templating system where the file consists of a mixture of text to be displayed and templating commands. It is well understood in each of these cases that the more the two "languages" are interwoven, the less readable both are, and the harder it is to maintain. The more you are able to separate the two, the easier both are to read and maintain. As often as possible, I want the tests to be written in some sort of text format which can be easily read as a table with no perl commands interspersed. I want to the freedom to define the tests in one section (a long string, the DATA section, or even in a separate file) which is easily readable. This may introduce the necessity of parsing it, but it makes it significantly easier to maintain the tests. This flexibility makes it much easier to read the tests (as opposed to the script) which is the fundamental content of a test script. To illustrate some of this, in Test::More, a series of tests might be specified as: # test 1 $result = func("apples","bushels"); is($result, "enough"); # test 2 $result = func("grapefruit","tons"); is($result, "enough"); # test 3 $result = func("oranges","boatloads"); is($result, "insufficient"); Thinking about the features I want that I listed above, there are several difficulties with this. - Debugging the script is tedious Say you ran the test suite, and test 3 failed. To debug it using a traditional Test::* module, you have to open up the test script, find the 3rd test (which won't necessarily be trivial, especially if you're talking about the 103rd test), and then run the debugger setting a break point at that line number. This sequence of steps is typically not very hard (especially when the test script is as simple as the example above), but it's still lots of steps that serve only to break your train of thought. How much better to be able to set a break point in the function that actually performs the test for the Nth test. It would also be nice to be able to skip the first two tests... perhaps they take a long time to run, and I want to get right to work on test 3. You can do this easily too by setting the $::TI_START variable. - Way too much perl interspersed with the tests It's difficult to read the tests individually in this script because there is too much perl code among them, and virtually impossible to look at them as a whole. It is true that looking at this particular example, it is very simple... but the script ISN'T the content you're interested in (and bear in mind that many test scripts are nowhere near this simple). The REAL content of this script are the tests, which consist of the function arguments and the expected result. Although it's not impossible to see each of these in the script above, it's not in a format that is conducive to studying the tests, and especially not for examining the list of tests as a whole. Now, look at an alternate way of specifying the tests using this module: $tests = " apples bushels => enough grapefruit tons => enough oranges boatloads => insufficient "; $o->tests(tests => $tests, func => \&func); Here, it's easy to see the list of tests, and adding additional tests is a breeze. This module supports a number of methods for defining tests, so you can use whichever one is most convenient (including methods that are identical to Test::More if that really is the best method). In addition, the following debugger command works as desired: b func ($::TI_NUM==3) and you're ready to debug (assuming that the test function is named 'func'). CREATING A TEST Every test may have several pieces of information: - A name Every test is automatically assigned a number, but it may be useful to specify a name of a test (which is actually a short description of the test). Whenever a test result is reported, the name will be given (if one was specified). The name may not have a '#' in it. The name is completely optional, but makes the results more readable. - An expected result In order to test something, you need to know what result was expected (or in some cases, what result was NOT expected). - A function and arguments OR a result You also need to know the results that you're comparing to the expected results. This can be obtained by simply working with a set of results, or a function name and a set of arguments to pass to it. - Conditions It is useful to be able to specify state information at the start of the test suite (for example, to see if certain features are available), and some tests may only run if those conditions are met. If no conditions are set for a test, it will always run. - Todo tests Some tests may be marked as 'todo' tests. These are test which are allowed to fail (meaning that they have been put in place for an as-yet unimplemented feature). Since it is expected that the test will fail, the test suite will still pass, even if these tests fail. The tests will still run and if they pass, a message is issued saying that the feature is now implemented, and the tests should be graduated to non-todo state. BASE METHODS - new $o = new Test::Inter [$name] [%options]; This creates a new test framework. There are several options which may be used to specify which tests are run, how they are run, and what output is given. The entire test script can be named by passing in $name. Options can be passed in as a hash of ($opt,$val) pairs. Options can be set in four different ways. First, you can pass in an ($opt,$val) pair in the new method. Second, you can set an environment variable (which overrides any value passed to the new method). Third, you can set a global variable (which overrides both the environment variable and options passed to the new method). Forth, you can call the appropriate method to set the option. This overrides all other methods. Each of the allowed options are described below in the following base methods: start end testnum plan abort quiet mode skip_all width - version $o->version(); Returns the version of the module. - start $o = new Test::Inter 'start' => $N; $o->start($N) To define which test you want to start with, pass in an ($opt,$val) pair of ('start',N), set an environment variable TI_START=N, or a global variable $::TI_START=N. When the start test is defined, most tests numbered less than N are completely ignored. If the tests are being run quietly (see the quiet method below), nothing is printed out for these tests. Otherwise, a skip message is printed out. One class of tests IS still executed. Tests run using the require_ok or use_ok methods (to test the loading of modules) are still run. If no value (or a value of 0) is used, it defaults to the first test. - end $o = new Test::Inter 'end' => $M; $o->end($M); To define which test you want to end with, pass in an ($opt,$val) pair of ('end',M), set an environment variable TI_END=M, or set a global variable $::TI_END=M. When the end test is defined, all tests numbered more than M are completely ignored. If the tests are being run quietly (see the quiet method below), nothing is printed out for these tests. Otherwise, a skip message is printed out. If no value is given, it defaults to 0 (which means that all remaining tests are run). - testnum $o = new Test::Inter 'testnum' => $N; $o->testnum($N); This is used to run only a single test. It is equivalent to setting both the start and end tests to $N. - plan - - done_testing $o = new Test::Inter 'plan' => $N; $o->plan($n); $o->done_testing(); $o->done_testing($n); The TAP API (the 'language' used to run a sequence of tests and see which ones failed and which ones passed) requires a statement of the number of tests that are expected to run. This statement can appear at the start of the test suite, or at the end. If you know in advance how many tests should run in the test script, you can pass in a non-zero integer in a ('plan',N) pair to the new method, or set the TI_PLAN environment variable or the $::TI_PLAN global variable, or call the plan method. If you know how many tests should run at the end of the test script, you can pass in a non-zero integer to the done_testing method. Frequently, you don't really care how many tests are in the script (especially if new tests are added on a regular basis). In this case, you still need to include a statement that says that the number of tests expected is however many were run. To do this, call the done_testing method with no argument. NOTE: if the plan method is used, it MUST be used before any tests are run (including those that test the loading of modules). If the done_testing method is used, it MUST be called after all tests are run. You must specify a plan or use a done_testing statement, but you cannot do both. It is NOT strictly required to set a plan if the script is only run interactively, so if for some reason this module is used for test scripts which are not part of a standard perl test suite, the plan and done_testing statements are optional. As a matter of fact, the script will run just fine without them... but a perl installer will report a failure in the test suite. - abort $o = new Test::Inter 'abort' => 0/1/2; $o->abort(0/1/2); The abort option can be set using an ('abort',0/1/2) option pair, or by setting the TI_ABORT environment variable, or the $::TI_ABORT global variable. If this is set to 1, the test script will run unmodified until a test fails. At that point, all remaining tests will be skipped. If it is set to 2, the test script will run until a test fails at which point it will exit with an error code of 1. In both cases, todo tests will NOT trigger the abort behavior. - quiet $o = new Test::Inter 'quiet' => 0/1/2; $o->quiet(0/1/2); The quiet option can be set using an ('quiet',0/1/2) option pair, or by setting the TI_QUIET environment variable, or the $::TI_QUIET global variable. If this is set to 0 (the default), all information will be printed out. If it is set to 1, some optional information will not be printed. If it is set to 2, all optional information will not be printed. - mode $o = new Test::Inter 'mode' => MODE; $o->mode(MODE); The mode option can be set using a ('mode',MODE) option pair, or by setting the TI_MODE environment variable, or the $::TI_MODE global variable. Currently, MODE can be 'test' or 'inter' meaning that the script is run as part of a test suite, or interactively. When run in test mode, it prints out the results using the TAP grammar (i.e. 'ok 1', 'not ok 3', etc.). When run in interactive mode, it prints out results in a more human readable format. - width $o = new Test::Inter 'width' => WIDTH; $o->width(WIDTH); The width option can be set using a ('width',WIDTH) option pair, or by setting the TI_WIDTH environment variable, or the $::TI_WIDTH global variable. WIDTH is the width of the terminal (for printing out failed test information). It defaults to 80, but it can be set to any width (and lines longer then this are truncated). If WIDTH is set to 0, no truncation is done. - skip_all $o = new Test::Inter 'skip_all' => REASON; $o->skip_all(REASON); The skip_all option can be set using an ('skip_all',REASON) option pair, or by setting the TI_SKIP_ALL environment variable, or the $::TI_SKIP_ALL global variable. If this is set, the entire test script will be skipped for the reason given. This must be done before any test is run, and before any plan number is set. The skip_all can also be called at any point during the script (i.e. after tests have been run). In this case, all remaining scripts will be skipped. $o->skip_all(REASON,FEATURE,FEATURE,...); $o->skip_all('',FEATURE,FEATURE,...); This will skip all tests (or all remaining tests) unless all features are available. REASON can be entered as an empty string and the reason the tests are skipped will be a message about the missing feature. - feature $o->feature($feature,$val); This defines a feature. If $val is non-zero, the feature is available. Otherwise it is not. - diag - - note $o->diag($message); $o->note($message); Both of these print an optional message. Messages printed with the note method are always optional and will be omitted if the quiet option is set to 1 or 2. Messages printed with the diag method are optional and will not be printed if the quiet option is set to 2, but they will be printed if the quiet method is set to 1. - testdir Occasionally, it may be necessary to know the directory where the tests live (for example, there may be a config or data file in there). This method will return the directory. METHODS FOR LOADING MODULES Test scripts can load other modules (using either the perl 'use' or 'require' commands). There are three different modes for doing this which determine how this is done. - required mode By default, this is used to test for a module that is required for all tests in the test script. Loading the module is treated as an actual test in the test suite. The test is to determine whether the module is available and can be loaded. If it can be loaded, it is, and it is reported as a successful test. If it cannot be loaded, it is reported as a failed test. In the result of a failed test, all remaining tests will be skipped automatically (except for other tests which load modules). - feature mode In feature mode, loading the module is not treated as a test (i.e. it will not print out an 'ok' or 'not ok' line. Instead, it will set a feature (named the same as the module) which can be used to determine whether other tests should run or not. - forbid mode In a few very rare cases, we may want to test for a module but expect that it not be present. This is the exact opposite of the 'required' mode. Successfully loading the module is treated as a test failure. In the event of a failure, all remaining tests will be skipped. The methods available are: - require_ok $o->require_ok($module [,$mode]); This is used to load a module using the perl 'require' function. If $mode is not passed in, the default mode (required) is used to test the existence of the module. If $mode is passed in, it must be either the string 'forbid' or 'feature'. If $mode is 'feature', a feature named $module is set if the module was able to be loaded. - use_ok $o->use_ok(@args [,$mode]); This is used to load a module with 'use', or check a perl version. BEGIN { $o->use_ok('5.010'); } BEGIN { $o->use_ok('Some::Module'); } BEGIN { $o->use_ok('Some::Module',2.05); } BEGIN { $o->use_ok('Some::Module','foo','bar'); } BEGIN { $o->use_ok('Some::Module',2.05,'foo','bar'); } are the same as: use 5.010; use Some::Module; use Some::Module 2.05; use Some::Module qw(foo bar); use Some::Module 2.05 qw(foo bar); Putting the use_ok call in a BEGIN block allows the functions to be imported at compile-time and prototypes are properly honored. You'll also need to load the Test::Inter module, and create the object in a BEGIN block. $mode acts the same as in the require_ok method. METHODS FOR RUNNING TEST There are several methods for running tests. The ok, is, and isnt methods are included for those already comfortable with Test::More and wishing to stick with the same format of test script. The tests method is the suggested method though since it makes use of the full power of this module. - ok $o->ok(TESTS); A test run with ok looks at a result, and if it evaluates to 0 (or false), it fails. If it evaluates to non-zero (or true), it passes. These tests do not require you to specify the expected results. If expected results are given, they will be compared against the result received, and if they differ, a diagnostic message will be printed, but the test will still succeed or fail based only on the actual result produced. These tests require a single result and either zero or one expected results. To run a single test, use any of the following: $o->ok(); # always succeeds $o->ok($result); $o->ok($result,$name); $o->ok($result,$expected,$name); $o->ok(\&func); $o->ok(\&func,$name); $o->ok(\&func,$expected,$name); $o->ok(\&func,\@args); $o->ok(\&func,\@args,$name); $o->ok(\&func,\@args,$expected,$name); If $result is a scalar, the test passes if $result is true. If $result is a list reference, and the list is either empty, or the first element is a scalar), the test succeeds if the list contains any values (except for undef). If $result is a hash reference, the test succeeds if the hash contains any key with a value that is not undef. If \&func and \@args are passed in, then $result is generated by passing @args to &func and behaves identically to the calls where $result is passed in. If \&func is passed in but no arguments, the function takes no arguments, but still produces a result. $result may be a scalar, list reference, or hash reference. If it is a list reference, the test passes is the list contains any defined values. If it is a hash reference, the test passes if any of the keys contain defined values. If an expected value is passed in and the result does not match it, a diagnostic warning will be printed, even if the test passes. - is - - isnt $o->is(TESTS); $o->isnt(TESTS); A test run with is looks at a result and tests to see if it is identical to an expected result. If it is, the test passes. Otherwise it fails. In the case of a failure, a diagnostic message will show what result was actually obtained and what was expected. A test run with isnt looks at a result and tests to see if the result obtained is different than an expected result. If it is different, the test passes. Otherwise it fails. The is method can be called in any of the following ways: $o->is($result,$expected); $o->is($result,$expected,$name); $o->is(\&func,$expected); $o->is(\&func,$expected,$name); $o->is(\&func,\@args,$expected); $o->is(\&func,\@args,$expected,$name); The isnt method can be called in exactly the same way. As with the ok method, the result can be a scalar, hashref, or listref. If it is a hashref or listref, the entire structure must match the expected value. - tests $o->tests($opt=>$val, $opt=>$val, ...) The options available are described in the following section. - file $o->file($func,$input,$outputdir,$expected,$name [,@args]); Sometimes it may be easiest to store the input, output, and expected output from a test in a text file. In this case, each line of output will be treated as a single test, so the output and expected output must match up exactly. $func is a reference to a function which will produce a temporary output file. If $input is specified, it is the name of the input file. If it is empty, no input file will be used. The input file can be fully specified, or it can be relative to the test directory. If $outputdir is passed in, it is the directory where the output file will be written. It can be fully specified, or relative to the test directory. If $outputdir is left blank, the temporary file will be written to the test directory. $expected is the name of a file which contains the expected output. It can be fully specified, or it will be checked for in the test directory. $name is the name of this series of tests. @args are extra arguments to pass to the test function. The function will be called with the arguments: &$func( [$input,] $output,@args); $input is only passed in if it was passed in to this method. If no input file is specified, nothing will be passed to the function. $output is the name of a temporary file where the output will be written to. USING THE TESTS METHOD It is expected that most tests (except for those that load a module) will be run using the tests method called as: $o->tests($opt => $val, $opt => $val, ...); The following options are available: - name name => NAME This sets the name of this set of tests. All tests will be given the same name. - tests - - func - - expected In order to specify a series of tests, you have to specify either a function and a list of arguments, or a list of results. Specifying the function and list of arguments can be done using the pair: func => \&FUNCTION tests => TESTS If the func option is not set, tests contains a list of results. A list of expected results may also be given. They can be included in the tests => TESTS option or included separately as: expected => RESULTS The way to specify these are covered in the next section SPECIFYING THE TESTS. - feature - - disable feature => [FEATURE1, FEATURE2, ...] disable => [FEATURE1, FEATURE2, ...] The default set of tests to run is determined using the start, end, and skip_all methods discussed above. Using those methods, a list of tests is obtained, and it is expected that these will run. The feature and disable options modify the list. If the feature option is included, the tests given in this call will only run if ALL of the features listed are available. If the disable option is included, the tests will be run unless ANY of the features listed are available. - skip skip => REASON Skip these tests for the reason given. - todo todo => 0/1 Setting this to 1 says that these tests are allowed to fail. They represent a feature that is not yet implemented. If the tests succeed, a message will be printed notifying the developer that the tests are now ready to promote to actual use. SPECIFYING THE TESTS A series of tests can be specified in two different ways. The tests can be written in a very simple string format, or stored as a list. Demonstrating how this can be done is best done by example, so let's say that there is a function (func) which takes two arguments, and returns a single value. Let's say that the expected output (and the actual output) from 3 different sets of arguments is: Input Expected Output Actual Output ----- --------------- ------------- 1,2 a a 3,4 b x 5,6 c c (so in this case, the first and third tests pass, but the 2nd one will fail). Specifying these tests as lists could be done as: $o->tests( func => &func, tests => [ [1,2], [3,4], [5,6] ], expected => [ [a], [b], [c] ], ); Here, the tests are stored as a list, and each element in the list is a listref containing the set of arguments. If the func option is not passed in, the tests option is set to a list of results to compare with the expected results, so the following is equivalent to the above: $o->tests( tests => [ [a], [x], [c] ], expected => [ [a], [b], [c] ], ); If an argument (or actual result) or an expected result is only a single value, it can be entered as a scalar instead of a list ref, so the following is also equivalent: $o->tests( func => &func, tests => [ [1,2], [3,4], [5,6] ], expected => [ a, b, [c] ], ); The only exception to this is if the single value is itself a list reference. In this case it MUST be included as a reference. In other words, if you have a single test, and the expected value for this test is a list reference, it must be passed in as: expected => [ [ \@r ] ] NOT as: expected => [ \@r ] Passing in a set of expected results is optional. If none are passed in, the tests are treated as if they had been passed to the 'ok' method (i.e. if they return something true, they pass, otherwise they fail). The second way to specify tests is as a string. The string is a multi-line string with each tests being separate from the next test by a blank line. Comments (lines which begin with '#') are allowed, and are ignored. Whitespace at the start and end of the line is ignored. The string may contain the results directly, or results may be passed in separately. For example, the following all give the same sets of tests as the example above: $o->tests( func => &func, tests => " # Test 1 1 2 => a # Test 2 3 4 => b 5 6 => c ", ); $o->tests( func => &func, tests => " 1 2 3 4 5 6 ", expected => [ [a], [b], [c] ] ); $o->tests( func => &func, tests => [ [1,2], [3,4], [5,6] ], expected => " a b c ", ); $o->tests( func => &func, tests => " 1 2 3 4 5 6 ", expected => " a b c ", ); The expected results may also consist of only a single set of results (in this case, it must be passed in as a listref). In this case, all of the tests are expected to have the same results. So, the following are equivalent: $o->tests( func => &func, tests => " 1 2 => a b 3 4 => a b 5 6 => a b ", ); $o->tests( func => &func, tests => " 1 2 3 4 5 6 ", expected => [ [a, b] ], ); $o->tests( func => &func, tests => " 1 2 3 4 5 6 ", expected => "a b", ); The number of expected values must either be 1 (i.e. all of the tests are expected to produce the same value) or exactly the same number as the number of tests. The parser is actually quite powerful, and can handle multi-line tests, quoted strings, and nested data structures. The test may be split across any number of lines, provided there is not a completely blank line (which signals the end of the test), so the following are equivalent: tests => "a b c", tests => "a b c", Arguments (or expected results) may include data structures. For example, the following are equivalent: tests => "[ a b ] { a 1 b 2 }" tests => [ [ [a,b], { a=>1, b=>2 } ] ] Whitespace is mostly optional, but there is one exception. An item must end with some kind of delimiter, so the following will fail: tests => "[a b][c d]" The first element (the list ref [a b]) must be separated from the second element by the delimiter (which is whitespace in this case), so it must be written as: tests => "[a b] [c d]" As already demonstrated, hashrefs and listrefs may be included and nested. Elements may also be included inside parens, but this is optional since all arguments and expected results are already treated as lists, so the following are equivalent: tests => "a b c" tests => "(a b) c" Although parens are optional, they may make things more readable, and allow you to use something other than whitespace as the delimiter. If the character immediately following the opening paren, brace, or bracket is a punctuation mark, then it is used as the delimiter instead of whitespace. For example, the following are all equivalent: [ a b c ] [a b c] [, a,b,c ] [, a, b, c ] A delimiter is a single character, and the following may not be used as a delimiter: any opening/closing characters () [] {} single or double quotes alphanumeric characters underscore Whitespace (including newlines) around the delimiter is ignored, so the following is valid: [, a, b, c ] Two delimiters next to each other or a trailing delimiter produce an empty string. "(,a,b,)" => (a, b, '') "(,a,,b)" => (a, '', b) Hashrefs may be specified by braces and the following are equivalent: { a 1 b 2 } {, a,1,b,2 } {, a,1,b,2, } Note that a trailing delimiter is ignored if there are already an even number of elements, or an empty string otherwise. Nested structures are allowed: "[ [1 2] [3 4] ]" For example, $o->tests( func => &func, tests => "a [ b c ] { d 1 e 2 } => x y" ); is equivalent to: $o->tests( func => &func, tests => [ [a, [b,c], {d=>1,e=>2}] ], results => [ [x,y] ], ); Any single value can be surrounded by single or double quotes in order to include the delimiter. So: "(, a,'b,c',e )" is equivalent to: "( a b,c e )" Any single value can be the string '__undef__' which will be turned into an actual undef. If the value is '__blank__' it is turned into an empty string (''), though it can also be specified as '' directly. Any value can have an embedded newline by including a __nl__ in the value, but the value must be written on a single line. Expected results are separated from arguments by ' => '. ENVIRONMENT VARIABLES To summarize the information above, the following environment variables (and main:: variables) exist. Each can be set in a perl script as a variable in the main namespace: $::TI_END or as an environment variable: $ENV{TI_END} - TI_START Set this to define the test you want to start with. Example: If you have a perl test script (my_test_script) and you want to start running it at test 12, run the following shell commands: TI_START=12 ./my_test_script.t - TI_END Set this to define the test you want to end with. - TI_TESTNUM Set this to run only a single test - TI_QUIET How verbose the test script is. - TI_MODE How the output is formatted. - TI_WIDTH The width of the terminal. - TI_NOCLEAN When running a file test, the temporary output file will not be removed if this is set. HISTORY The history of this module dates back to 1996 when I needed to write a test suite for my Date::Manip module. At that time, none of the Test::* modules currently available in CPAN existed (the earliest ones didn't come along until 1998), so I was left completely on my own writing my test scripts. I wrote a very basic version of my test framework which allowed me to write all of the tests as a string, it would parse the string, count the tests, and then run them. Over the years, the functionality I wanted grew, and periodically, I'd go back and reexamine other Test frameworks (primarily Test::More) to see if I could replace my framework with an existing module... and I've always found them wanting, and chosen to extend my existing framework instead. As I've written other modules, I've wanted to use the framework in them too, so I've always just copied it in, but this is obviously tedious and error prone. I'm not sure why it took me so long... but in 2010, I finally decided it was time to rework the framework in a module form. I loosely based my module on Test::More. I like the functionality of that module, and wanted most of it (and I plan on adding more in future versions). So this module uses some similar syntax to Test::More (though it allows a great deal more flexibility in how the tests are specified). One thing to note is that I may have been able to write this module as an extension to Test::More, but after looking into that possibility, I decided that it would be faster to not do that. I did "borrow" a couple of routines from it (though they've been modified quite heavily) as a starting point for a few of the functions in this module, and I thank the authors of Test::More for their work. KNOWN BUGS AND LIMITATIONS None known. SEE ALSO Test::More - the 'industry standard' of perl test frameworks LICENSE This script is free software; you can redistribute it and/or modify it under the same terms as Perl itself. AUTHOR Sullivan Beck ([email protected])
https://metacpan.org/pod/Test::Inter
CC-MAIN-2016-18
refinedweb
6,048
67.08
The iOS Huddle #0 Monday, August 15, 2016 The iOS Huddle is our monthly roundup of the best links shared in the Black Pixel iOS team’s Slack channel. From BPXL Craft In case you missed them, here are a couple of recent iOS-centric articles from BPXL Craft: NSFetchedResultsController Woes Core Data is an often misunderstood beast. It’s difficult to know exactly how it functions behind the scenes. Michael Gachet does a deep dive into some issues we discovered when using NSFetchedResultsController in a recent project and provides several solutions for addressing them. Some things are not always as they seem. tvOS App Development Challenges: Focus Effects & Infinite Carousel tvOS introduced a new interaction paradigm with the Apple TV Remote-driven focus engine. Building great app interfaces presents a whole new host of challenges to even skilled UIKit developers. Jared Sinclair describes some strategies we used for crafting a polished tvOS experience. From Around the Watercooler NS_REQUIRES_SUPER While protocol-oriented programming is a great new strategy for building software, good old object-oriented design is still the norm for many projects, especially for anyone still using Objective-C. One common pitfall when using inheritance is forgetting to call the super implementation inside of an overridden method. Klaus-Peter Dudas has a great explanation of how to deal with this in Objective-C by using the NS_REQUIRES_SUPER macro. “No Matching Provisioning Profiles Found” Seeing these words can feel like a punch in the gut. Code signing and device provisioning are some of the most mysterious aspects of iOS development. Fortunately, poking the black provisioning box until something starts working will be a thing of the past with Xcode 8. In his article Code Signing in Xcode 8, Jay Graves discusses how the newly redesigned Automatic Code Signing, Customized Code Signing, and Report Navigator will allow all of us to breath a collective sigh of relief and get back to building our apps. Animating Blur Radius Filip Radelic explains how to use the new UIViewPropertyAnimator in iOS 10 to enable fine-grained control when animating the blur radius of a UIVisualEffectView. As a bonus, Filip also describes how to open a Swift Playground directly on an iPad. A Case for No-Case Enums? Swift provides some powerful new tools that change the way we write software. We were surprised to learn that it’s possible to create no-case enums for things like singletons and namespaces. This looks like a much cleaner way than creating a struct with a private init method. Erica Sadun goes into detail about why you’d create no-case enums with examples of how to apply this powerful technique in practice. Measurements and Units in Foundation There are certain things that most developers should never try to write themselves. Cryptography and datetime functions are a couple of examples that have long had deep SDK support. Now we can add units of measurement and conversions between them to the list. Ole Begemann describes how Foundation has been updated to allow easy locale-specific distance conversions for a recent bicycle tour. Closures Capture Semantics, Part 1: Catch Them all! Swift brought us much easier ways to use closures in our code. In fact, it would be a challenge to try not using them. Knowing how variables with different scopes are captured and handled by ARC is very important for writing memory-efficient, well-behaved software. Using the apropos example of capturing Pokemon, Olivier Halligon breaks down the fundamentals of memory management in Swift. What Every Computer Scientist Should Know About Floating Point Math is hard. Floating-point math is harder. This reprint of a document originally written by David Goldberg provides an extremely thorough tutorial on what floating-point numbers are, potential issues that arise in working with them, and some examples of how to support them. If you have ever wondered exactly how floating-point numbers work, then grab a pot of coffee and get ready to have your mind blown. For more insights on design and development, subscribe to BPXL Craft and follow Black Pixel on Twitter.
https://medium.com/bpxl-craft/the-ios-huddle-0-7b6c7c324f7b
CC-MAIN-2017-51
refinedweb
682
52.8
#include "instaweb_handler.h" Context for handling a request, computing options and request headers in the constructor. This must be called on any InPlaceResourceRecorder allocated by instaweb_handler before calling DoneAndSetHeaders() on it. Checks to see whether the configuration has set up cookie-based proxy authentication. If so, and the cookies are not present, clients will be redirected to a page where the cookies can be obtained. Returns true if the client is authorized for proxying. Return false and responds to the request_ if the client was not authorized. Attempts to handle this as an in-place resource. Returns false if the in-place handling didn't occur, and another handler should take over the request. Unconditionally handles a resource that looks like a .pagespeed. resource, whether the result is success or failure. Attempts to handle this as a proxied resource (see MapProxyDomain). Returns false if the proxy handling didn't occur, and another handler should take over the request. Tries to acts as a full-featured proxy, handling both HTML and resources. Handle mod_pagespeed-specific requests. Handles both .pagespeed. rewritten resources and /mod_pagespeed_statistics, /mod_pagespeed_beacon, etc. By default, apache imposes limitations on URL segments of around 256 characters that appear to correspond to filename limitations. To prevent that, we hook map_to_storage for our own purposes. Was this request made by mod_pagespeed itself? If so, we should not try to handle it, just let Apache deal with it like normal. Makes a driver from the request_context and options. Note that this can only be called once, as it potentially mutates the options as it transfers ownership of custom_options. The driver is owned by the InstawebHandler and will be cleaned up at destruction, unless you call DisownDriver(). Allocates a Fetch object associated with the current request and the specified URL. Include in debug_info anything that's cheap to create and would be informative if something went wrong with the fetch. If any uses will be from other threads you must set buffered=true to keep your other thread from getting blocked if our output is being read by a slow reader. Allocates a Fetch object associated with the current request and its URL. Please read the comment above before setting buffered=false. Returns the options, whether they were custom-computed due to htaccess file, query params, or headers, or were the default options for the vhost. Loads the URL based on the fetchers and other infrastructure in the factory, returning true if the request was handled. This is used both for slurping and for handling URLs ending with proxy_suffix. Save the original URL as a request "note" before mod_rewrite has a chance to corrupt mod_pagespeed's generated URLs, which would prevent instaweb_handler from being able to decode the resource. Implementation of the Apache 'translate_name' hook. Used by the actual hook 'save_url_hook' and directly when we already have the server context. Waits for an outstanding fetch (obtained by MakeFetch) to complete. On failure, a failure response will be sent to the client. The request is handled unconditionally.
http://modpagespeed.com/psol/classnet__instaweb_1_1InstawebHandler.html
CC-MAIN-2017-30
refinedweb
502
57.67
The last chapter was a good introduction to SAX. However, there are several more topics that will round out your knowledge of SAX. While I’ve called this chapter “Advanced SAX,” don’t be intimidated. It could just as easily be called “Less-Used Portions of SAX that are Still Important.” In writing these two chapters, I followed the 80/20 principle. 80% of you will probably never need to use the material in this chapter, and Chapter 3 will completely cover your needs. However, for those power users out there working in XML day in and day out, this chapter covers some of the finer points of SAX that you’ll need. I’ll start with a look at setting parser properties and features, and discuss configuring your parser to do whatever you need it to. From there, I’ll move on to some more handlers: the EntityResolver and DTDHandler left over from the last chapter. At that point, you should have a comprehensive understanding of the standard SAX 2.0 distribution. However, we’ll push on to look at some SAX extensions, beginning with the writers that can be coupled with SAX, as well as some filtering mechanisms. Finally, I’ll introduce some new handlers to you, the LexicalHandler and DeclHandler, and show you how they are used. When all is said and done (including another “Gotcha!” section), you should be ready to take on the world with just your parser and the SAX classes. So slip into your shiny spacesuit and grab the flightstick—ahem. Well, I got carried away with the taking on the world. In any case, let’s get down to it. With the wealth of XML-related specifications and technologies emerging from the World Wide Web Consortium (W3C), adding support for any new feature or property of an XML parser has become difficult. Many parser implementations have added proprietary extensions or methods at the cost of code portability. While these software packages may implement the SAX XMLReader interface, the methods for setting document and schema validation, namespace support, and other core features are not standard across parser implementations. To address this, SAX 2.0 defines a standard mechanism for setting important properties and features of a parser that allows the addition of new properties and features as they are accepted by the W3C without the use of proprietary extensions or methods. Lucky for you and me, SAX 2.0 includes the methods needed for setting properties and features in the XMLReader interface. This means you have to change little of your existing code to request validation, set the namespace separator, and handle other feature and property requests. The methods used for these purposes are outlined in Table 4-1. For these methods, the ID of a specific property or feature is a URI. The core set of features and properties is listed in Appendix B. Additional documentation on features and properties supported by your vendor’s XML parser should also be available. These URIs are similar to namespace URIs; they are only used as associations for particular features. Good parsers ensure that you do not need network access to resolve these features; think of them as simple constants that happen to be in URI form. These methods are simply invoked and the URI is dereferenced locally, often to constantly represent what action in the parser needs to be taken. Don’t type these property and feature URIs into a browser to “check for their existence.” Often, this results in a 404NotFound error. I’ve had many browsers report this to me, insisting that the URIs are invalid. However, this is not the case; the URI is just an identifier, and as I pointed out, usually resolved locally. Trust me: just use the URI, and trust the parser to do the right thing. In the parser configuration context, a property requires some object value to be usable. For example, for lexical handling, a DOM Node implementation would be supplied as the value for the appropriate property. In contrast, a feature is a flag used by the parser to indicate whether a certain type of processing should occur. Common features are validation, namespace support, and including external parameter entities. The most convenient aspect of these methods is that they allow simple addition and modification of features. Although new or updated features will require a parser implementation to add supporting code, the method by which features and properties are accessed remains standard and simple; only a new URI need be defined. Regardless of the complexity (or obscurity) of new XML-related ideas, this robust set of four methods should be sufficient to allow parsers to implement the new ideas. More often than not, the features and properties you deal with are the standard SAX-defined ones. These are features and properties that should be available with any SAX distribution, and that any SAX-compliant parser should support. Additionally, this preserves vendor-independence in your code, so I recommend that you use SAX-defined properties and features whenever possible. The most common feature you’ll use is the validation feature. The URI for this guy is, and not surprisingly, it turns validation on or off in the parser. For example, if you want to turn on validation in the parsing example from the last chapter (remember the Swing viewer?), make this change in the SAXTreeViewer.java source file: public void buildTree(DefaultTreeModel treeModel, DefaultMutableTreeNode base, String xmlURI) throws IOException, SAXException { //); // Request validation reader.setFeature("", true);// Parse InputSource inputSource = new InputSource(xmlURI); reader.parse(inputSource); } Compile these changes, and run the example program. Nothing happens, right? Not surprising; the XML we’ve looked at so far is all valid with respect to the DTD supplied. However, it’s easy enough to fix that. Make the following change to your XML file (notice that the element in the DOCTYPE declaration no longer matches the actual root element, since XML is case-sensitive): <?xml version="1.0"?> <!DOCTYPE Book SYSTEM "DTD/JavaXML.dtd"><!-- Java and XML Contents --> <book xmlns="" xmlns: Now run your program on this modified document. Because validation is turned on, you should get an ugly stack trace reporting the error. Of course, because that’s all that our error handler methods do, this is precisely what we want: C:\javaxml2\build>java javaxml2.SAXTreeViewer c:\javaxml2\ch04\xml\contents.xml **Parsing Error** Line: 7 URI: Message: Document root element "book", must match DOCTYPE root "Book". org.xml.sax.SAXException: Error encountered at javaxml2.JTreeErrorHandler.error(SAXTreeViewer.java:445) [Nasty Stack Trace to Follow...] Remember, turning validation on or off does not affect DTD processing; I talked about this in the last chapter, and wanted to remind you of this subtle fact. To get a better sense of this, turn off validation (comment out the feature setting, or supply it the “false” value), and run the program on the modified XML. Even though the DTD is processed, as seen by the resolved OReillyCopyright entity reference, no errors occur. That’s the difference between processing a DTD and validating an XML document against that DTD. Memorize, understand, and recite this to yourself; it will save you hours of confusion in the long run. Next to validation, you’ll most commonly deal with namespaces. There are two features related to namespaces: one that turns namespace processing on or off, and one that indicates whether namespace prefixes should be reported as attributes. The two are essentially tied together, and you should always “toggle” both, as shown in Table 4-2. This should make sense: if namespace processing is on, the xmlns-style declarations on elements should not be exposed to your application as attributes, as they are only useful for namespace handling. However, if you do not want namespace processing to occur (or want to handle it on your own), you will want these xmlns declarations reported as attributes so you can use them just as you would use other attributes. However, if these two fall out of sync (both are true, or both are false), you can end up with quite a mess! Consider writing a small utility method to ensure these two features stay in sync with each other. I often use the method shown here for this purpose: private void setNamespaceProcessing(XMLReader reader, boolean state) throws SAXNotSupportedException, SAXNotRecognizedException { reader.setFeature( "", state); reader.setFeature( "", !state); } This maintains the correct setting for both features, and you can now simply call this method instead of two setFeature( ) invocations in your own code. Personally, I’ve used this feature less than ten times in about two years; the default values (processing namespaces as well as not reporting prefixes as attributes) almost always work for me. Unless you are writing low-level applications that either don’t need namespaces or can use the speed increase obtained from not processing namespaces, or you need to handle namespaces on your own, I wouldn’t worry too much about either of these features. This code brings up a rather important aspect of features and properties, though: invoking the feature and property methods can result in SAXNotSupportedException s and SAXNotRecognizedException s. These are both in the org.xml.sax package, and need to be imported in any SAX code that uses them. The first indicates that the parser knows about the feature or property but doesn’t support it. You won’t run into this much in even average quality parsers, but it is commonly used when a standard property or feature is not yet coded in. So invoking setFeature( ) on the namespace processing feature on a parser in development might result in a SAXNotSupportedException. The parser recognizes the feature, but doesn’t have the ability to perform the requested processing. The second exception most commonly occurs when using vendor-specific features and properties (covered in the next section), and then switching parser implementations. The new implementation won’t know anything about the other vendor’s features or properties, and will throw a SAXNotRecognizedException. You should always explicitly catch these exceptions so you can deal with them. Otherwise, you end up losing valuable information about what happened in your code. For example, let me show you a modified version of the code from the last chapter that tries to set up various features, and how that changes the exception-handling architecture: public void buildTree(DefaultTreeModel treeModel, DefaultMutableTreeNode base, String xmlURI) throws IOException, SAXException { String featureURI = ""; try {//); /** Deal with features **/ featureURI = ""; // Request validation reader.setFeature(featureURI, true); // Namespace processing - on featureURI = ""; setNamespaceProcessing(reader, true); // Turn on String interning featureURI = ""; reader.setFeature(featureURI, true); // Turn off schema processing featureURI = ""; reader.setFeature(featureURI, false);// Parse InputSource inputSource = new InputSource(xmlURI); reader.parse(inputSource); } catch (SAXNotRecognizedException e) { System.out.println("The parser class " + vendorParserClass + " does not recognize the feature URI " + featureURI); System.exit(0); } catch (SAXNotSupportedException e) { System.out.println("The parser class " + vendorParserClass + " does not support the feature URI " + featureURI); System.exit(0); }} By dealing with these exceptions as well as other special cases, you give the user better information and improve the quality of your code. The three remaining SAX-defined features are fairly obscure. The first,, turns string interning on or off. By default this is false (off) in most parsers. Setting it to true means that every element name, attribute name, namespace URI and prefix, and other strings have java.lang.String.intern() invoked on them. I’m not going to get into great detail about interning here; if you don’t know what it is, check out Sun’s Javadoc on the method at. In a nutshell, every time a string is encountered, Java attempts to return an existing reference for the string in the current string pool, instead of (possibly) creating a new String object. Sounds like a good thing, right? Well, the reason it’s off by default is most parsers have their own optimizations in place that can outperform string interning. My advice is to leave this setting alone; many people have spent weeks tuning things like this so you don’t have to mess with them. The other two features determine whether textual entities are expanded and resolved (), and whether parameter entities are included () when parsing occurs. These are set to true for most parsers, as they deal with all the entities that XML has to offer. Again, I recommend you leave these settings as is, unless you have a specific reason for disabling entity handling. The two standard SAX properties are a little less clear in their usage. In both cases, the properties are more useful for obtaining values, whereas with features the common use is to set values. Additionally, both properties are more helpful in error handling than in any general usage. And finally, both properties provide access to what is being parsed at a given time. The first, identified by the URI, returns the current DOM node being processed, or the root DOM node if parsing isn’t occurring. Of course, I haven’t really talked about DOM yet, but this will make more sense in the next two chapters. The second property, identified by the URI, returns the literal string of characters being processed. You’ll find varying support for these properties in various parsers, showing that many parser implementers find these properties of arguable use as well. For example, Xerces does not support the xml-string property, to avoid having to buffer the input document (at least in that specific way). On the other hand, it does support the dom-node property so that you can turn a SAX parser into (essentially) a DOM tree iterator. In addition to the standard, SAX-defined features and properties, most parsers define several features and properties of their own. For example, Apache Xerces has a page of features it supports at properties it supports at. I’m not going to cover these in great detail, and you should steer clear of them whenever possible; it locks your code into a specific vendor. However, there are times when using a vendor’s specific functionality will save you some work. In those cases, exercise caution, but don’t be foolish; use what your parser gives you! As an example, take the Xerces feature that enables and disables XML schema processing:. Because there is no standard support for XML schemas across parsers or in SAX, use this specific feature (it’s set to true by default) to avoid spending parsing time to deal with any referenced XML schemas in your documents, for example. You save time in production if you don’t use this processing, and it needs a vendor-specific feature. Check out your vendor documentation for options available in addition to SAX’s. No credit card required
https://www.oreilly.com/library/view/java-and-xml/0596001975/ch04.html
CC-MAIN-2019-22
refinedweb
2,456
53.92
I have two question:Is there a Python IDLE or editor outside of the Python window that has the autocompletion capabilities? Th Is there a Python IDLE or editor outside of the Python window that has the autocompletion capabilities? Can someone help me with my error message? File "C:\Python27\Lib\site-packages\pythonwin\pywin\framework\scriptutils.py", line 322, in RunScript debugger.run(codeObject, __main__.__dict__, start_stepping=0) File "C:\Python27\Lib\site-packages\pythonwin\pywin\debugger\__init__.py", line 60, in run _GetCurrentDebugger().run(cmd, globals,locals, start_stepping) File "C:\Python27\Lib\site-packages\pythonwin\pywin\debugger\debugger.py", line 655, in run exec cmd in globals, locals File "C:\EsriPress\Python\Data\Exercise06\Results\Field_list.py", line 1, in <module> import arcpy ImportError: No module named arcpy -- This is where the problem is This is the script I run. import arcpy from arcpy import env env.overwriteOutput = True env.overwriteOutput = " C:/esripress/python/data/exercise06" fieldList = arcpy.ListFields("counties.shp") for field in fieldList: print field.name + " " + field.type Hi Ravi, Based on the File "C:\Python27\Lib..." in your stack trace it looks like you're using a version of Python that you installed rather than the one that is included with ArcGIS. There can be multiple instances of python.exe on one machine - since arcpy depends on some of our DLLs it only works with the ArcGIS copy. Try looking for a python.exe in "C:\Python27\ArcGIS10.4" (or whichever 10.x version number you're working with) and run your script using it instead.
https://community.esri.com/thread/185742-no-module-names-arcpy
CC-MAIN-2018-22
refinedweb
261
53.47
Brian Sabbey <sabbey at u.washington.edu> wrote: > > On Fri, 11 Mar 2005, Josiah Carlson wrote: > > >, > > I find it quite natural. Stuff on the right of 'yield' is going out, > stuff on the left is coming in. Since 'yield' is different than return in > that it marks a spot at which execution both leaves and re-enters the > frame, it makes sense that 'yield' should have a syntax that indicates as > much. Inventors always find their ideas to be quite natural; if they weren't, they wouldn't have come to them. > > and the functionality you desire does not > > require syntax modification to be possible. > > Was the functionality provided by generators or decorators or anything > else impossible before those were introduced? Of course not. The point > is to make things easier and more natural, not to enable some previously > impossible functionality. I don't find the syntax you provide to be 'natural'. Trying to convince me of the 'naturalness' of it is probably not going to be productive, but as I said in my original email, "whether my opinion means anything is another matter". Being that no one else has publically responded to your post, and it has been nearly 24 hours, my feeling is that either there are more important things going on (Python 2.4.1, sum/product/any/all discussion, etc.), people haven't been paying attention in recent days (due to the higher volume), people are not sufficiently one side or another to comment, or some other thing. > >). > > Perhaps you are right, I don't know. It seems to me that most people > would have to see the syntax once to know exactly what is going on, but I > certainly don't know that for sure. Either way, I'd hate to have all my > suggestions dismissed because of the syntax of this one piece. I say it is magical. Why? The way you propose it, 'yield' becomes an infix operator, with the name provided on the left being assigned a value produced by something that isn't explicitly called, referenced, or otherwise, by the right. In fact, I would say, it would be akin to the calling code modifying gen.gi_frame._f_locals directly. Such "action at a distance", from what I understand, is wholly frowned upon in Python. There also does not exist any other infix operator that does such a thing (=, +=, ... assign values, but where the data comes from is obvious). Bad things to me (so far): 1. Assignment is not obvious. 2. Where data comes from is not obvious. 3. Action at a distance like nothing else in Python. 4. No non-'=' operator assigns to a local namespace. > >). > > But they're *not* in the same namespace necessarily. That is entirely the > point. One is changing scope but has no clean way to pass values. How is > making 'l' some (more) global variable possibly a clearer way to pass it > to the generator? Your argument is like saying one does not need to > return values from a function because we could always just use a global > variable to do it. Or you could even use a class instance. class foo: def pickled_file(self, name): f = open(name, 'r') yield pickle.load(f) f.close() f = open(name, 'w') pickle.dump(self.l, f) f.close() fi = foo() for l in fi.pickled_file('greetings.pickle'): l.append('hello') l.append('howdy') fi.l = l If you can call the function, there is always a shared namespace. It may not be obvious (you may need to place the function as a method of an instance), but it is still there. > > Hrm, not good enough? Use a Queue, or use another variable in a > > namespace accessable to both your function and your loop. > > Again, I entirely realize it's possible to do these things now, but that > is not the point. The point of your proposed syntax is to inject data back into a generator from an external namespace. Right? My point is that if you are writing software, there are already fairly reasonable methods to insert data back into a generator from an external namespace. Furthermore, I would say that using yield semantics in the way that you are proposing, while being discussed in other locations (the IBM article on cooperative multithreading via generators springs to mind), is a clever hack, but not something that should be supported via syntax. Considering the magical nature of how you propose to change yield, I can't see any serious developer of the Python language (those with commit access to Python CVS) saying anything other than "-1". Coupled with the fact that I cannot recall Guido or anyone else here ever having said "it would be nice if we could put stuff back into generators", my feeling is that your syntax proposal is not going to make it (I could be wrong). - Josiah
https://mail.python.org/pipermail/python-dev/2005-March/052108.html
CC-MAIN-2018-26
refinedweb
814
63.59
Update: Removed Internal Constructor constraint on AsyncBuilder. As I covered earlier in my post Functional .NET - LINQ or Language Integrated Monads, I talked about using asynchronous computation expressions (monads) from C# 3.0. Brian McNamara, of the F# team, posted back in May about using them from C#. But since then, things have changed slightly. Before, I showed a basic example of how to utilize the F# libraries from C#, but let's go deep under the covers to see how this actually works. In order to make use of the F# libraries in our C# library, we need to add references to them. We need the following items: And then I need to open the namespaces in order to take advantage of F#: As part of process of creating the asynchronous monad builders in C#, we need to create an instance of the F# class AsyncBuilder in the Microsoft.FSharp.Control namespace. We can get a reference to this from the Pervasives class, through the async property. This is a static reference which is available to all. Now that we have this, we have to realize that F# doesn't use the standard .NET delegates for functions. Let's walk through some ways of converting back and forth. As we've noted before, F# does not use the standard Func and Action delegates that are commonly used in C# and VB.NET. Instead, the functions in F# use the FastFunc class. This allows for the F# compiler to better optimize the closures, especially due to the fact that these closures are quite commonly used. Another point of difference is that there is no distinction between Func and Action delegates, and instead, for functions that return no value have the return type of Unit. This is the optimal way of handling this, due to the fact that Void is not treated as a real type, which you've heard me complain about in the past. In order to convert from the Func delegate to the FastFunc, we use the FuncConvertExtensions class in the FSharp.PowerPack.Linq.dll assembly. Then we can create extension methods on our Func delegates to expose the conversion methods. The conversion methods should look like this. Now that the conversions have been put in place, we can turn our attention to creating the extension methods required for LINQ expressions. In order for LINQ to bind and return data, the SelectMany and Select methods must be implemented. We need to implement these methods to return an Async<T> class for binding and returning purposes. As part of the implementation, we need to ensure our Func delegates are converted to the proper FastFunc types. Once the LINQ methods are added, we can turn our attention on how we might actually implement the asynchronous behavior with our given classes. In order for your classes to use the asynchronous behavior, we need to expose ways of building primitives so that we can enlist those methods that expose the Beginxxx and Endxxx signature which includes the IAsyncResult. Also, in order to support method calls that do not, we have the ability within the asynchronous computation expressions to unblock via a new thread. First, we need to define the method which allows us to do non-blocking calls on methods that do not support the Begin/End pattern. In order for this to happen, we need to call an internal method to the FileExtensions class in the FSharp.PowerPack.dll assembly called UnblockViaNewThread. Due to the fact that it is a generic method, we have to do special binding using reflection. Once this has been created, we can now enlist functions that follow the Func<Res> signature. Second, we need to define methods to build primitives to allow for methods that follow the Begin/End pattern using the IAsyncResult and AsyncCallback delegate. Overloads are necessary due to the fact that methods may have return types or not. Examples of this pattern are Stream.BeginRead, Stream.BeginWrite and so on. Lastly, we need the ability to start the asynchronous computation expression. This must be the first statement in any asynchronous computation expression that we write. Let's look at how the code is implemented. Now that we have the ability to add asynchronous behavior to our classes, let's implement some extension methods that encompass the behavior. For methods that have the standard Begin/End methods can use the BuildPrimitive methods that we defined above. As our first example, let's implement an asynchronous WebRequest.GetResponse. Normally in our .NET code, we have to implement the Begin/End ourselves. Instead, we pass the methods to the BuildPrimitive method to bind. Any method signature that follows this pattern should be able to partake. Now what about those methods that do not follow the Begin/End pattern. What can we do about those? As I mentioned previously, I want the ability to perform asynchronous operations on those methods that do not follow the Begin/End pattern. In order to do that, I must use the UnblockViaNewThread method that was defined above. For example, we could expose the ability to open files asynchronously as either readers or streams. Let's define some methods to open files asynchronously. This pattern would also apply to such things as WebRequest.Create and so on. Now, let's bring it all together. What the above code does for me is allows me to count the number of hyperlinks, given a URL, in parallel. This is a pretty simplistic example, yet works quite well for this demonstration. Now, using our C# implementation, let's take the above code and get it to work in C#. For each of the above, we get the following result: Well, that works exactly according to plan. But, using this technique, are there any downsides? The asynchronous code I wrote above is pretty simple and naive. Counting links in an HTML page is a pretty simple example with some asynchronous calls. But, this breaks down easily if it gets more complex than this. There are constructs that can be done in the asynchronous computation expressions in F# that cannot be done through LINQ. For example, there are several things that cannot be done through LINQ expressions such as the following: These above constructs don't have a 1 to 1 mapping with LINQ. As a result, our code may have to look significantly different than the F# code that we'd write. That of course eliminates some of the savings we may have had with our LINQ implementation. As you see, when combined with such things as the MailboxProcessor, the asynchronous computation expressions become a bit more powerful. Unfortunately, LINQ doesn't support these features of conditionals, using statements and so on. The alternative being, using the awkward syntax with converting between FastFunc and Func delegates, which negates any savings we may have had. As I've shown in the past posts, LINQ expressions could be used to your advantage to do more than just standard query operations. Instead, if we start to think of them as very powerful monadic constructs, we can then think of better uses for them. Although the LINQ expressions are limited in what they can achieve through the asynchronous computation expressions, it still gets you quite far. I hope this exploration gets you to understand languages such as Haskell and how monads are useful. For further information, you should check out "What is a monad, why should I use it and when it is appropriate" on Lambda the Ultimate. The code from this post is available, as always from the Functional C# Library on MSDN Code Gallery. [Advertisement] Pingback from Reflective Perspective - Chris Alcock » The Morning Brew #201 You've been kicked (a good thing) - Trackback from DotNetKicks.com Thanks to Mitch Wheat for inviting me to talk, and letting me choose the topic: F#. Also a major thanks Hi, I 've just made a Google code project and added some samples that I made to work: code.google.com/.../fsharpdemos Everyone is welcome to contribute. Cheers, An
http://codebetter.com/blogs/matthew.podwysocki/archive/2008/10/15/functional-c-implementing-async-computations-in-c.aspx
crawl-002
refinedweb
1,347
63.59
Good evening all, I am a new member to this forum, it seems like a great place for beginners like me! I am at the very beginning of my Java education and I am having a little trouble with an exercise involving Static Methods. This exercise is designed to introduce me to invoking static methods inside a class without instantiating an object. Basically; I am just trying to print out the value of a returned calculation, but I am getting a compile error. I'll paste me code below. Thanks in advance for any guidance public class MyMath { public static long square(int a){ long b = (long) (a*a); return b; } } public class Squares { public static void main(String[] args) { MyMath.square(12); System.out.println(MyMath.square()); } }
http://www.javaprogrammingforums.com/whats-wrong-my-code/30752-simple-static-method-problem.html
CC-MAIN-2015-22
refinedweb
128
51.48
The hygp_register_handler function is partially dependent on internal VM structures, and must be called with NULL as a third (userData) parameter. #include "hyport.h" #include "hycomp.h" #include "gp.h" Provides the name and value, specified by category/index of the gp information in info. Returns the kind of information found at category/index specified, or undefined if the category/index are invalid. The number of items in the category specified must equal the count hygp_info_count returns for that category. Above allows the handler function registered in hygp_register_handler to distinguish (and use) them from the other gp items. The caller is responsible for allocating and freeing any buffers used by **name, **value. Kicks off the new thread by calling the function provided in protected_fn fn. All threads spawned by the vm start here and all OS signals that will be handled by fn must be registered to the OS here. Upon receiving a signal from the OS, fn is responsible for calling the function specified in hygp_register_handler if it is determined that a shutdown is required. Sets the function that is responsible for preserving/outputting the state of the vm and initiating a graceful shutdown resulting from a gp. fn is not called by the OS but by the gp handler function specified in the call to hygp_protect gp module above occurs after the OS has passed a signal along to us and it is determined that a shutdown is required. PortLibrary shutdown. This function is called during shutdown of the portLibrary. Any resources that were created by hygp_startup should be destroyed here. PortLibrary startup. This function is called during startup of the portLibrary. Any resources that are required for the shared library operation may be created here. All resources created here should be destroyed in hygp_shutdown. Genereated on Tue Dec 9 14:12:59 2008 by Doxygen. (c) Copyright 2005, 2008 The Apache Software Foundation or its licensors, as applicable.
http://harmony.apache.org/externals/vm_doc/html/hygp_8c.html
CC-MAIN-2015-27
refinedweb
321
55.64
How to Download Videos from Any Website using Python How to Download Videos from Any Website using Python Send download link to: In one of our previous tutorial we learnt to download videos from YouTube. We used a custom library called pytube3 for it. But what if we want to download videos using python from any other website? We can’t use pytube3 there nor can we have custom libraries for every website. So to download videos from any website we will have to use our web scrapping libraries BeautifulSoup and Requests. In this tutorial we will learn how we can download videos from any website using our web scraping skills. We will go to University of Munich’s website and download the videos. This website contains videos as well as some pdf’s and other files, we will only download videos. If you notice carefully you can see that all the videos have mp4 extension, which is what we have to look for. Moreover all the files have an embedded link from where they can be downloaded. We can find all these links and then download files: Let’s get to the code: import requests from bs4 import BeautifulSoup # specify the URL of the archive here archive_url = "" def get_video_links(): #create response object r = requests.get(archive_url) #create beautiful-soup object soup = BeautifulSoup(r.content,'html5lib') #find all links on web-page links = soup.findAll('a') #filter the link ending with .mp4 video_links = [archive_url + link['href'] for link in links if link['href'].endswith('mp4')] return video_links Now that we have grabbed the links we can send get request to these links and download videos as below: def download_video_series(video_links): for link in video_links: # iterate through all links in video_links # and download them one by one #obtain filename by splitting url and getting last string file_name = link.split('/')[-1] print ("Downloading file:%s"%file_name) #create response object r = requests.get(link, stream = True) #download started with open(file_name, 'wb') as f: for chunk in r.iter_content(chunk_size = 1024*1024): if chunk: f.write(chunk) print ("%s downloaded!\n"%file_name) print ("All videos downloaded!") return if __name__ == "__main__": #getting all video links video_links = get_video_links() #download all videos download_video_series(video_links) Output: You can find the downloaded videos in your working directory. Know more ways to download videos using python from website.
https://www.worthwebscraping.com/how-to-download-videos-from-any-website-using-python/
CC-MAIN-2021-43
refinedweb
389
72.66
TensorFlow.jsTensorFlow.js TensorFlow.js is an open-source hardware-accelerated JavaScript library for training and deploying machine learning models. Develop ML in the Browser Use flexible and intuitive APIs to build. ImportingImporting You can import TensorFlow.js directly via yarn or npm: yarn add @tensorflow/tfjs or npm install @tensorflow/tfjs. Alternatively you can use a script tag. The library will be available as a global variable named tf: <script src=""></script> <!-- or --> <script src=""></script> You can also specify which version to load replacing @latest with a specific version string (e.g. 0.6.0). About this repoAbout this repo This repository contains the logic and scripts that combine two packages: - TensorFlow.js Core, a flexible low-level API, formerly known as deeplearn.js. - TensorFlow.js Layers, a high-level API which implements functionality similar to Keras. If you care about bundle size, you can import those packages individually. ExamplesExamples Getting startedGetting started Let's add a scalar value to a vector. TensorFlow.js supports broadcasting the value of scalar over all the elements in the tensor. import * as tf from '@tensorflow/tfjs'; // If not loading the script as a global const a = tf.tensor1d([1, 2, 3]); const b = tf core-concepts tutorial for more. Now, let's build a toy model to perform linear regression. import * as tf from '@tensorflow/tfjs'; // A sequential model is a container which you can add layers to. const model = tf.sequential(); // Add a dense layer with 1 output unit. model.add(tf.layers.dense({units: 1, inputShape: [1]})); // Specify the loss type and optimizer for training. model.compile({loss: 'meanSquaredError', optimizer: 'sgd'}); // Generate some synthetic data for training. const xs = tf.tensor2d([[1], [2], [3], [4]], [4, 1]); const ys = tf.tensor2d([[1], [3], [5], [7]], [4, 1]); // Train the model. await model.fit(xs, ys, {epochs: 500}); // Ater the training, perform inference. const output = model.predict(tf.tensor2d([[5]], [1, 1])); output.print(); For a deeper dive into building models, see the MNIST tutorial Importing pre-trained modelsImporting pre-trained models We support porting pre-trained models from: Find out moreFind out more TensorFlow.js is a part of the TensorFlow ecosystem. For more info:
https://javascript.ctolib.com/tensorflow-tfjs.html
CC-MAIN-2019-22
refinedweb
366
52.26
RedisDays Available Now On-Demand. This post is part five of a series of posts examining the features of the Redis-ML module. The first post in the series can be found here. The sample code included in this post requires several Python libraries and a Redis instance with the Redis-ML module loaded. Detailed setup instructions for the runtime environment are provided in both part one and part two of the series. Decision Trees Decision trees are a predictive model used for classification and regression problems in machine learning. Decision trees model a sequence of rules as a binary tree. The interior nodes of the tree represent a split or a rule and the leaves represent a classification or value. Each rule in the tree operates on a single feature of the data set. If the condition of the rule is met, move to the left child; otherwise move to the right. For a categorical feature (enumerations), the test the rule uses is membership in a particular category. For features with continuous values the test is “less than” or “equal to.” To evaluate a data point, start at the root note and traverse the tree by evaluating the rules in the interior node, until a leaf node is reached. The leaf node is labeled with the decision to return. An example decision tree is shown below: Many different algorithms (recursive partitioning, top-down induction, etc.) can be used to build a decision tree, but the evaluation procedure is always the same. To improve the accuracy of decision trees, they are often aggregated into random forests which use multiple trees to classify a datapoint and take the majority decision across the trees as a final classification. To demonstrate how decision trees work and how a decision tree can be represented in Redis, we will build a Titanic survival predictor using the scikit-learn Python package and Redis. Titanic Dataset On April 15, 1912, the Titanic sank in the North Atlantic Ocean after colliding with an iceberg. More than 1500 passengers died as a result of the collision, making it one of the most deadly commercial maritime disasters in modern history. While there was some element of luck in surviving the disaster, looking at the data shows biases that made some groups of passengers more likely to survive than others. The Titanic Dataset, a copy of which is available here, is a classic dataset used in machine learning. The copy of the dataset, from the Vanderbilt archives, which we used for this post contains records for 1309 of the passengers on the Titanic. The records consist of 14 different fields: passenger class, survived, name, sex, age, number of siblings/spouses, number of parents/children aboard, ticket number, fare, cabin, port of embarkation, life boat, body number and destination. A cursory scan of our data in Excel shows lots of missing data in our dataset. The missing fields will impact our results, so we need to do some cleanup on our data before building our decision tree. We will use the pandas library to preprocess our data. You can install the pandas library using pip, the Python package manager: pip install pandas or your prefered package manager. Using pandas, we can get a quick breakdown of the count of values for each of the record classes in our data: pclass 1309 survived 1309 name 1309 sex 1309 age 1046 sibsp 1309 parch 1309 ticket 1309 fare 1308 cabin 295 embarked 1307 boat 486 body 121 home.dest 745 Since the cabin, boat, body and home.dest records have a large number of missing records, we are simply going to drop them from our dataset. We’re also going to drop the ticket field, since it has little predictive value. For our predictor, we end up building a feature set with the passenger class (pclass), survival status (survived), sex, age, number of siblings/spouses (sibsp), number of parents/children aboard (parch), fare and port of embarkation (“embarked”) records. Even after removing the sparsely populated columns, there are still several rows missing data, so for simplicity, we will remove those passenger records from our dataset. The initial stage of cleaning the data is done using the following code: import pandas as pd # load data from excel orig_df = pd.read_excel('titanic3.xls', 'titanic3', index_col=None) # remove columns we aren't going to work with, drop rows with missing data df = orig_df.drop([“name”, "ticket", "body", "cabin", "boat", "home.dest"], axis=1) df = df.dropna() The final preprocessing we need to perform on our data is to encode categorical data using integer constants. The pclass and survived columns are already encoded as integer constants, but the sex column records the string values male or female and the embarked column uses letter codes to represent each port. The scikit package provides utilities in the preprocessing subpackage to perform the data encoding. The second stage of cleaning the data, transforming non-integer encoded categorical features, is accomplished with the following code: from sklearn import preprocessing # convert enumerated columns (sex,) encoder = preprocessing.LabelEncoder() df.sex = encoder.fit_transform(df.sex) df.embarked = encoder.fit_transform(df.embarked) Now that we have cleaned our data, we can compute the mean value for several of our feature columns grouped by passenger class (pclass) and sex. survived age sibsp parch fare pclass sex 1 female 0.961832 36.839695 0.564885 0.511450 112.485402 male 0.350993 41.029250 0.403974 0.331126 74.818213 2 female 0.893204 27.499191 0.514563 0.669903 23.267395 male 0.145570 30.815401 0.354430 0.208861 20.934335 3 female 0.473684 22.185307 0.736842 0.796053 14.655758 male 0.169540 25.863027 0.488506 0.287356 12.103374 Notice the significant differences in the survival rate between men and women based on passenger class. Our algorithm for building a decision tree will discover these statistical differences and use them to choose features to split on. Building a Decision Tree We will use scikit-learn to build a decision tree classifier over our data. We start by splitting our cleaned data into a training and a test set. Using the following code, we split out the label column of our data (survived) from the feature set and reserve the last 20 records of our data for a test set. X = df.drop(['survived'], axis=1).values Y = df['survived'].values X_train = X[:-20] X_test = X[-20:] Y_train = Y[:-20] Y_test = Y[-20:] Once we have our training and test sets, we can create a decision tree with a maximum depth of 10. # Create the real classifier depth=10 cl_tree = tree.DecisionTreeClassifier(max_depth=10, random_state=0) cl_tree.fit(X_train, Y_train) Our depth-10 decision tree is difficult to visualize in a blog post, so to visualize the structure of the decision tree, we created a second tree and limited the tree’s depth to 3. The image below shows the structure of the decision tree, learned by the classifier: The Redis-ML module provides two commands for working with random forests: ML.FOREST.ADD to create a decision tree within the context of a forest and ML.FOREST.RUN to evaluate a data point using a random forest. The ML.FOREST commands have the following syntax: ML.FOREST.ADD key tree path ((NUMERIC|CATEGORIC) attr val | LEAF val [STATS]) [...] ML.FOREST.RUN key sample (CLASSIFICATION|REGRESSION) Each decision tree in Redis-ML must be loaded using a single ML.FOREST.ADD command. The ML.FOREST.ADD command consists of a Redis key, followed by an integer tree id, followed by node specifications. Node specifications consist of a path, a sequence of . (root), l and r, representing the path to the node in a tree. Interior nodes are splitter or rule nodes and use either the NUMERIC or CATEGORIC keyword to specify the rule type, the attribute to test against and the value of threshold to split. For NUMERIC nodes, the attribute is tested against the threshold and if it is less than or equal to it, the left path is taken; otherwise the right path is taken. For CATEGORIC nodes, the test is equality. Equal values take the left path and unequal values take the right path. The decision tree algorithm in scikit-learn treats categoric attributes as numeric, so when we represent the tree in Redis, we will only use NUMERIC node types. To load the scikit tree into Redis, we will need to implement a routine that traverses the tree. The following code performs a pre-order traversal of the scikit decision tree to generate a ML.FOREST.ADD command (since we only have a single tree, we generate a simple forest with only a single tree). # scikit represents decision trees using a set of arrays, # create references to make the arrays easy to access the_tree = cl_tree t_nodes = the_tree.tree_.node_count t_left = the_tree.tree_.children_left t_right = the_tree.tree_.children_right t_feature = the_tree.tree_.feature t_threshold = the_tree.tree_.threshold t_value = the_tree.tree_.value feature_names = df.drop(['survived'], axis=1).columns.values # create a buffer to build up our command forrest_cmd = StringIO() forrest_cmd.write("ML.FOREST.ADD titanic:tree 0 ") # Traverse the tree starting with the root and a path of “.” stack = [ (0, ".") ] while len(stack) > 0: node_id, path = stack.pop() # splitter node -- must have 2 children (pre-order traversal) if (t_left[node_id] != t_right[node_id]): stack.append((t_right[node_id], path + "r")) stack.append((t_left[node_id], path + "l")) cmd = "{} NUMERIC {} {} ".format(path, feature_names[t_feature[node_id]], t_threshold[node_id]) forrest_cmd.write(cmd) else: cmd = "{} LEAF {} ".format(path, np.argmax(t_value[node_id])) forrest_cmd.write(cmd) # execute command in Redis r = redis.StrictRedis('localhost', 6379) r.execute_command(forrest_cmd.getvalue()) Comparing Results With the decision tree loaded into Redis, we can create two vectors to compare the predictions of Redis with the predictions from scikit-learn: # generate a vector of scikit-learn predictors s_pred = cl_tree.predict(X_test) # generate a vector of Redis predictions r_pred = np.full(len(X_test), -1, dtype=int) for i, x in enumerate(X_test): cmd = "ML.FOREST.RUN titanic:tree " # iterate over each feature in the test record to build up the # feature:value pairs for j, x_val in enumerate(x): cmd += "{}:{},".format(feature_names[j], x_val) cmd = cmd[:-1] r_pred[i] = int(r.execute_command(cmd)) To use the ML.FOREST.RUN command, we have to generate a feature vector consisting of a list of comma separated <feature>:<value> pairs. The <feature> portion of the vector is a string feature name that must correspond to the feature names used in the ML.FOREST.ADD command. Comparing the r_pred and s_pred prediction values against the actual label values: Y_test: [0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0] r_pred: [1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0] s_pred: [1 0 0 0 1 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0] Redis’ predictions are identical to those of the scikit-learn package, including the misclassification of test items 0 and 14. An passenger’s chance of survival was strongly correlated to class and gender, so there are several surprising cases of individuals with a high probability of survival who actually perished. Investigating some of these outliers leads to fascinating stories from that fateful voyage. There are many online resources that tell the stories of the Titanic passengers and crew, showing us the people behind the data. I’d encourage you to investigate some of the misclassified people and learn their stories. In the next and final post we’ll tie everything together and wrap up this introduction to Redis-ML. In the meantime, if you have any questions about this or previous posts, please connect with me (@tague) on Twitter. By continuing to use this site, you consent to our updated privacy agreement. You can change your cookie settings at any time but parts of our site will not function correctly without them.
https://redis.com/blog/introduction-redis-ml-part-five/
CC-MAIN-2022-21
refinedweb
2,005
54.63
With the rise in technological advancements in the field of artificial neural networks, there have been several libraries that are used to solve and compute modern deep learning tasks. In my previous articles, I have covered some other deep learning frameworks, such as TensorFlow and Keras, in detail. It is recommended that the viewers who are new to this topic to out the following link for TensorFlow and this particular link for Keras. In this article, we will cover another spectacular deep learning framework in PyTorch, which is also widely used for performing a variety of complex tasks. PyTorch, since its release in September 2016, has always offered stiff competition to TensorFlow due to its Pythonic style of coding archetypes and comparatively more simple coding methodologies in some cases. The table of contents for the concepts we will discuss in this article is provided on the right. For starters, we will get accustomed to PyTorch with a basic introduction. We will then proceed to install the PyTorch framework in a virtual environment for the construction of deep learning projects. We will understand the concepts of tensors and the numerous possible operations that a user can compute with the various functionalities offered in PyTorch. Once we have a basic understanding of tensor operations, we will discuss all the basic steps and necessities to construct a PyTorch model. Finally, we will briefly discuss the differences between TensorFlow and PyTorch. Introduction To PyTorch: PyTorch is one of the best options for deep learning, which is available as an open-source deep learning framework that was first introduced and developed by Facebook's AI Research lab (FAIR). The primary aim of the torch environment library developed was to construct highly effective models that could produce the best possible results and solutions for a particular task. The applications of the PyTorch library extend from machine learning applications to natural language processing and computer vision tasks. Apart from these use cases, they are also utilized in a number of software structures. Some of the examples include Uber's Pyro, Tesla Autopilot, Hugging Face Transformers, PyTorch Lightning, and Catalyst. The main ability of PyTorch is to support basic numpy operations by making use of tensors which they can utilize for computing complex operations with a Graphics Processing Unit (GPU). This ability of PyTorch to make use of tensors to perform complicated tasks and computations with ease, thanks to its access to GPU support, is one of the significant characteristics of PyTorch. We will discuss the topic of these tensor operations in further detail in a later section in this article. Apart from its ability to compute operations faster, it also utilizes a system of automatic differentiation that allows its users to directly compute the backpropagation values for their neural networks in an extremely simplified manner. Due to some of these popular PyTorch functions, users don't need to do any manual calculations in favor of using the given functions to calculate the desired derivatives accordingly. Although PyTorch is supported in languages like C++, the primary focus of PyTorch is to provide the users with a solid foundation and complete support with the Python programming language. Most of the code written in PyTorch can seamlessly be integrated into Python. PyTorch also bonds well with most of the Python libraries, such as numpy. Since most of the code is so Pythonic, it becomes easier for newer developers who have experience with Python to learn PyTorch. It also allows the user to debug more efficiently, hence leading to higher developer productivity. As discussed before, it also has graphical support as CUDA was one of the languages in which the PyTorch library was written. And finally, PyTorch is also supported by a multitude of Cloud platforms for model development and deployment similar to TensorFlow. With this basic introduction of PyTorch, we can proceed with the major steps involved in the installation of this deep learning framework. Installation procedure of PyTorch: My recommendation for the first step will be to get started with the download of the Anaconda development environment. Anaconda is a distribution that supports a multitude of libraries, programming languages, and tons of useful materials for beginners and experts. It is suitable across all platforms of operating systems such as Windows, macOS, and Linux. This software is widely considered as one of the best tools that a data science enthusiast must possess to achieve the best and most desired results on any particular task. You can download the latest suitable version for your desktop from the following link. Once you have downloaded and installed the Anaconda distribution, you can choose the editor accordingly. Your main choices are PyCharm, Microsoft Visual Studio Code, Sublime Text, Jupyter Notebooks, and many others. I would recommend sticking to the Jupyter Notebook for the purpose of this article. Ensure that you have all the setup for the Anaconda distribution completed. Once you have everything ready, please make sure that you create a virtual environment with a name of your preference. This virtual environment will contain all the future installations that we will install. The next step is to go to the official PyTorch website and set up the build according to your system. Please follow this link to reach the website. Once you reach the website, set up your custom settings according to your requirements. The above image is an example of the settings that are best suitable for my PC. The appropriate Anaconda command for the installation of PyTorch is also provided accordingly. Activate your virtual environment and copy-paste the given code in your command prompt. It is crucial to note that if you do not have a GPU on your system, then please select the CPU option to install PyTorch on your system. However, it is highly recommended that you do have a GPU for faster computations of PyTorch models. Don't have access to a GPU? Sign up for Gradient and get access to the compute you need now! Below is the command that I entered in my command prompt (or windows shell) in my virtual environment to install the PyTorch deep learning framework. conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch If you are having issues with the installation procedure, then visit Stack Overflow or GitHub. One of the alternate commands that worked for me is as follows: conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge Once you are able to successfully install PyTorch GPU or CPU version on your system, please check the availability of the same in your command prompt, windows shell, or whatever else you choose to use (This could include any Integrated Development Environment of your choice, like Visual Studio or Gradient). Activate the virtual environment you work in and enter the following commands. Firstly, we will enter the Python console and try to import torch to check if there were no errors in the installation process, and finally, check the downloaded version of PyTorch. (torchy) C:\Users\Admin>python Python 3.9.5 (default, May 18 2021, 14:42:02) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch .__version__ '1.8.1' Once you are able to run the following commands without any errors (sometimes warnings are fine, just look into it and try to fix them), we can start constructing numerous PyTorch projects. Understanding PyTorch Tensors in Detail: In this section, we will describe and examine all the details required for getting started with PyTorch. The performance of tensors in PyTorch is similar to the way they perform in TensorFlow. In this section of the article, we will learn how to initialize the tensors, convert them accordingly, perform mathematical computations, and look into some other basic operations. 1. Initialization of tensors The first significant step to learn for deep learning is the process of understanding tensors. A tensor is basically an n-dimensional array and is an object that describes a multilinear relationship between sets of algebraic objects related to a vector space. In this section, we will check out how to initialize the tensors in PyTorch to proceed with further operations. To get started with the initialization of tensors and any other type of operation, we will import the torch module and quickly verify the version. # Importing PyTorch and checking its version import torch torch.__version__ Output: '1.8.1' Let us look at a couple of ways in which we can initialize tensors. In the code block below, we can view a variable 'a', which stores a list of numbers, and 'a_' stores a numerical array containing the same numbers. In the first method, we can use the from_array() function to convert the numpy array into a PyTorch tensor. For converting a list into a PyTorch tensor, the process is quite simple as you can complete the following operation with the tensor() function. The code and results are as shown below. import numpy as np a = [1,2,3] a_ = np.array(a) print(a_) # Method-1 b = torch.from_numpy(a_) print(b) # Method-2 c = torch.tensor(a) print(c.dtype) print(c) Output: [1 2 3] tensor([1, 2, 3], dtype=torch.int32) torch.int64 tensor([1, 2, 3]) In the next few blocks of code, we will learn some essential aspects to note about PyTorch. Firstly, the device variable is often used to set the computational environment for PyTorch. If you have GPU support with CUDA, all the operations will be performed on the GPU, and otherwise it will default to a CPU. You can also assign a tensor some essential properties and proceed to check them accordingly, as shown in the code block below. # Some must know parameters for tensor() function device = torch.device('cuda' if torch.cuda.is_available() else cpu) d = torch.tensor([1,2,3], dtype = torch.float32, device = device, requires_grad = True) print(d.shape) print(d.dtype) print(d.device) print(d.requires_grad) Output: torch.Size([3]) torch.float32 cuda:0 True Three of the major assigning operations that you will mostly perform with the initialization of tensors are assigning them a particular shape of zeros, ones, or random numbers. An application of initializing tensors is useful for managing and declaring weights. torch.zeros(3, 4, dtype=torch.float64) torch.ones(4, 2, dtype=torch.float64) torch.rand(3, 3, dtype=torch.float64) Output: tensor([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]], dtype=torch.float64) tensor([[1., 1.], [1., 1.], [1., 1.], [1., 1.]], dtype=torch.float64) tensor([[0.3741, 0.7208, 0.4353], [0.7368, 0.9386, 0.9840], [0.2983, 0.7320, 0.6277]], dtype=torch.float64) There are multiple other methods to initialize tensors with functions such as eye, linspace, arrange, and many more. Feel free to explore these options on your own and find out how you can exactly utilize these initialization techniques in your PyTorch projects for achieving efficient results. Also, it becomes more crucial to understand when a particular function has more utility than its other counterparts. This topic will be covered in future articles as we start to work on more projects. 2. Tensor Conversions In this section, we will look at some tensor conversion operations that you can perform with the help of PyTorch tensors. Some of the basic tensor conversions include the various data types such as Boolean, short, long, half, and double. The codes and their respective outputs are as shown below. a = torch.tensor([0, 1, 2, 3]) # boolean values print(a.bool()) # Integer type values print(a.short()) # int16 print(a.long()) # int64 # float type values print(a.half()) # float16 print(a.double()) # float64 Output: tensor([False, True, True, True]) tensor([0, 1, 2, 3], dtype=torch.int16) tensor([0, 1, 2, 3]) tensor([0., 1., 2., 3.], dtype=torch.float16) tensor([0., 1., 2., 3.], dtype=torch.float64) Another simple conversion operation, as discussed in the previous initialization of the tensors section, is the ability to convert a list into a tensor, a numpy array into a tensor, and vice versa. The application for the following is as shown in the below code blocks. # Conversion from numpy array to tensor and vice-versa import numpy as np a = [1,2,3] a_ = np.array(a) print(a_) # Numpy to Tensor b = torch.from_numpy(a_) print(b) # Tensor to Numpy c = b.numpy() print(c) Output: [1 2 3] tensor([1, 2, 3], dtype=torch.int32) [1 2 3] 3. Tensor Math Operations Since tensors are basically n-dimensional arrays, there are numerous, useful computations that we can perform with them. These operations include mathematical calculations similar to those that you can perform on numpy arrays, such as the addition of tensors, subtraction of tensors, multiplications of tensors, and so much more. Let us explore each of these individual aspects accordingly and see how they can be performed with PyTorch. Addition of Tensors: You can perform the addition operation in three different ways with PyTorch. Firstly, initialize your tensor variables with the appropriate values and set the data type as a float. There are three ways in which you can add these tensors. In the first method, you can directly add them with the help of the plus '+' symbol. With the second method, you can use the add function in the torch library to perform the addition of the assigned tensors. You can extend this step by adding an empty variable with the same shape as the defined arrays and store the output in the following variable. Finally, you also have the operations for adding the sums and computing the sum of all the individual elements in the entire tensor matrix. a = torch.tensor([1, 2, 3], dtype=torch.float) b = torch.tensor([7, 8, 9], dtype=torch.float) # Method-1 print(a + b) # Method-2 print(torch.add(a, b)) # Method-3 c = torch.zeros(3) c = torch.add(a, b, out=c) print(c) # Cumulative Sum print(torch.add(a, b).sum()) Output: tensor([ 8., 10., 12.]) tensor([ 8., 10., 12.]) tensor([ 8., 10., 12.]) tensor(30.) Subtraction of Tensors: Similar to the addition operation, you can perform subtraction of tensors. You can either find their appropriate differences with the sequential or the intermediate order of the elements. The absolute function can come in handy if you only want the absolute value of the variables. The code and output are as shown below. a = torch.tensor([1, 2, 3], dtype=torch.float) b = torch.tensor([7, 8, 9], dtype=torch.float) # Method-1 print(a + b) # Method-2 print(torch.subtract(b, a)) # Method-3 (Variation) c = torch.zeros(3) c = torch.subtract(a, b, out=c) print(c) # Cumulative Sum of differences torch.subtract(a, b).sum() #Absolute cumulative Sum of differences torch.abs(torch.subtract(a, b).sum()) Output: tensor([ 8., 10., 12.]) tensor([6., 6., 6.]) tensor([-6., -6., -6.]) tensor(-18.) tensor(18) Multiplication of Tensors: Multiplication of tensors is one of the most important operations that you can perform. The operation is computable with either the '*' symbol between the declared variables or by making use of the mul() function. It is also possible to compute the dot multiplication with PyTorch tensors. This process can be done as follows. a = torch.tensor([1, 2, 3], dtype=torch.float) b = torch.tensor([7, 8, 9], dtype=torch.float) # Method-1 print(a * b) # Method-2 print(a.mul(b)) # Calculating the dot product print(a.dot(b)) Output: tensor([ 7., 16., 27.]) tensor([ 7., 16., 27.]) tensor(50.) Another key computation to keep in mind is the ability of PyTorch Tensors to perform matrix multiplication. They can be computed as follows. # Matrix multiplication # a shape of (m * n) and (n * p) will return a shape of (m * p) a = torch.tensor([[1, 4, 2],[1, 5, 5]], dtype=torch.float) b = torch.tensor([[5, 7],[8, 6],[9, 11]], dtype=torch.float) # 3 ways of performing matrix multiplication print("Method-1: \n", torch.matmul(a, b)) print("\nMethod-2: \n", torch.mm(a, b)) print("\nMethod-3: \n", a@b) Output: Method-1: tensor([[55., 53.], [90., 92.]]) Method-2: tensor([[55., 53.], [90., 92.]]) Method-3: tensor([[55., 53.], [90., 92.]]) Division of Tensors: You can perform the division operation as well with either the '/' symbol or using the true_divide function that is available in PyTorch. The code and output below show how you can compute them accordingly. a = torch.tensor([1, 2, 3], dtype=torch.float) b = torch.tensor([7, 8, 9], dtype=torch.float) # Method-1 print(a / b) # Method-2 c = torch.true_divide(a, b) print(c) # Variation c = torch.true_divide(b, a) print(c) Output: tensor([0.1429, 0.2500, 0.3333]) tensor([0.1429, 0.2500, 0.3333]) tensor([7., 4., 3.]) Other math operations that the users must consider are the in-place operations, exponentiation, simple comparisons between variables, and other similar mathematical operations that are useful for specific use cases. Feel free to explore the various possible options. 4. Other basic tensor operations Some of the other operations that you can perform on tensors include operations of indexing the respective label and slicing the array from the given starting point to the respective end point. These computations are computed as follows. a = torch.tensor(np.arange(0,10).reshape(2,5)) print(a) # Indexing of tensors print(a[0]) print(a[0][0]) # Tensor slicing print(a[:, 0:2]) Output: tensor([[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]], dtype=torch.int32) tensor([0, 1, 2, 3, 4], dtype=torch.int32) tensor(0, dtype=torch.int32) tensor([[0, 1], [5, 6]], dtype=torch.int32) There is so much more that you can accomplish with PyTorch tensors. If you are interested to learn more, I would highly recommend checking out more information by yourself from their official website. Steps to construct a PyTorch model: In this section of the article, we will discuss how to construct a typical model architecture with the help of the PyTorch deep learning framework. The basic steps involved in the process of building any model with PyTorch is to import the essential libraries, analyze the type of problem, construct the model accordingly to solve the particular task, train the model for a certain number of epochs to achieve high accuracy and low loss, and finally evaluate the saved model. Let us cover some code snippets for three of the most significant steps involved in the process of constructing these models. Importing the libraries: One of the more essential steps to create PyTorch models is to import the suitable libraries with the help of which we can successfully construct the desired model that we are trying to accomplish. We will understand the specific details of each of these imports when we look at future articles on how to construct high-level projects with the help of PyTorch. # Importing all the essential libraries import torch import torchvision import torch.nn as nn import torch.nn.functional as F from torch.utils.data import DataLoader import numpy as np import matplotlib.pyplot as plt Once you import the essential libraries required for completing your particular task, the next step is to define all the requirements for the specific problem accordingly. Once all the basic parameters for the required task are specified, the process of construction of the model and training the created model can be successfully completed. Construction of the model: PyTorch uses a Pythonic way of coding. This is incredibly helpful for learning their technology due to how simple it is to understand the inherent process of constructing our deep learning model with the help of this deep learning framework. For constructing the model, you can define your desired model in the form of a class, and then use functions inside your defined function to create all the elementary operations to perform. The code snippet provided below is one of the best examples of how you can construct a simple neural network architecture by making use of PyTorch programming. # Constructing the model class neural_network(nn.Module): def __init__(self, input_size, num_classes): super(neural_network, self).__init__() self.fc1 = nn.Linear(in_features=input_size, out_features=50) self.fc2 = nn.Linear(in_features=50, out_features=num_classes) def forward(self, x): x = self.fc1(x) x = F.relu(x) x = self.fc2(x) return x The init block in the defined class can have the super inheritance to allow all the significant elements of the parent class to be accessible. We can then proceed to define a couple of deep learning hidden layers that be built with the nn library of PyTorch. Once you define the essential requirements in the init section of the class, you can proceed to move on to the forward function to construct the primary model. By making use of the previously defined hidden layers (or convolutional or any other layer), you can proceed to construct the final structure of the model and return it accordingly. Training The Model: The final step to complete the construction of your model is to train the model. Below is a sample code that covers the process of running the model for a specific number of epochs. The main steps involved are to set the main parameters, run the forward propagation, and finally use the last three in-built commands for completing the backpropagation process. # Train the network for epoch in range(epochs): for batch, (data, target) in enumerate(train_loader): # Obtaining the cuda parameters data = data.to(device=device) target = target.to(device=device) # Reshaping to suit our model data = data.reshape(data.shape[0], -1) # Forward propogation score = model(data) loss = criterion(score, target) # Backward propagation optimizer.zero_grad() loss.backward() optimizer.step() Once the training process is complete, you can evaluate the performance of your models, and once you deem it suitable for the particular task, we can deploy these models to perform a specific task. In future articles related to PyTorch, we will look at more specific examples and cover a detailed approach to constructing our models to solve numerous tasks with this amazing deep learning framework. Let us now look at a brief comparison between TensorFlow and PyTorch. Differences Between TensorFlow and PyTorch: Before we conclude this article, it would be interesting to draw a quick comparison between TensorFlow and PyTorch to understand their differences as well as their similarities. Both these libraries are fantastic deep learning frameworks and have successfully amassed immense popularity through the years. They offer stiff competition to each other and have some similar working patterns as in both cases we deal with tensors to accomplish a wide array of tasks. However, there are still a lot of differences between these two popular frameworks. PyTorch uses a more Pythonic style of coding, which is more suitable for newer developers or someone who is looking to adapt to learning deep learning neural networks. TensorFlow can sometimes be quite complicated for a new programmer aiming to develop deep learning models because of some of the complex and unintuitive code structures. One of the primary benefits of PyTorch is the dynamic state of computation, while TensorFlow makes use of a state for computational graphs. PyTorch is highly compatible with research projects due to its dynamic state and fast training methodologies. However, it lacks majorly in the department in the area of performing visualizations. TensorBoard is the preferable method for most visualization processes. To learn more about the analysis of PyTorch and TensorFlow, I would recommend checking out the article from the following link. Conclusion: PyTorch is one of the best deep learning frameworks in modern deep learning. It is extensively used for the development of neural networks for solving various applications and is a great alternative to TensorFlow. It has many beneficial features, such as supporting dynamic computational graphs and allowing data parallelism, which means that it can distribute and divide the work among numerous CPUs or GPUs accordingly. It is also an extremely simplistic library granting the users access to better-debugging capabilities, more transparency, and the ability to code easily with more efficiency. In this article, we covered most of the topics related to the basics of PyTorch. We briefly discussed most of the essential aspects of why the PyTorch deep learning framework works so well for most deep learning computations and its rising popularity. Firstly we looked into the installation procedure of PyTorch. And then, we understood most of the basic operations of tensors with PyTorch, including tensor conversions, mathematical operations, and other basic tensor operations. We then discussed the steps to construct a PyTorch model. Finally, we concluded with the differences between TensorFlow and PyTorch. In future articles, we will work on more projects from scratch with PyTorch. Until then, keep learning and coding new stuff! Add speed and simplicity to your Machine Learning workflow today
https://blog.paperspace.com/ultimate-guide-to-pytorch/
CC-MAIN-2022-21
refinedweb
4,158
55.64
Huge state machine delay in post-synthesis timing simulation I have been doing some simulation about my project and I wanted to show you guys some pictures to get your experienced ideas. First one is post-synthesis functional simulation and it is what it is meant to be. However, the second one is wrong. Because there is a huge delay in states. As soon as padding_in_valid is active, state_reg should have been 10 but after 5-6 clk it gets correct states. Does anyone know why there is that huge delay? Post-functional simulation: (click to enlarge) Timing simulation: (click to enlarge) And another question is why there is UUUUU or ZZZZZ. I have assign zero in reset situation to all buffers. See also questions close to this topic - VHDL Comparison Operation Not Defined with Looping Counter I've been trying to make an SRAM chip in vhdl with arbitrary amount of registers and register size using generics and I've almost gotten it to work except for the addressing part. To make an arbitrary sized SRAM chip I started by making a unit SRAM Cell (which I tested to confirm that it works) with the following port map. component SRAM_Cell_vhdl port ( IN : in std_ulogic; Select_Chip : in std_ulogic; Write_Enable : in std_ulogic; Out1 : out std_ulogic ); The generic SRAM chip has the following port map: port ( Datain : in std_logic_vector(m-1 downto 0); address: in std_logic_vector(n-1 downto 0); Chip_Select: in std_logic; Output_Enable: in std_logic; Write_Enable: in std_logic; Out2: out std_logic_vector(m-1 downto 0) ); The way I'm trying to do the addressing is that when it generates the SRAM it checks if the loop counter is equal to the address. If it is it will write the bit to the SRAM cell, if not it will not. loop1: for I in 0 to n-1 generate loop2: for J in 0 to m-1 generate SRAM_Cell_vhdl1 : SRAM_Cell_vhdl port map (Datain(J), Chip_Select and (I = to_integer(unsigned(address))), Write_Enable and Chip_Select, intermediate_out(I, J)); end generate loop2; end generate loop1; However, I am getting an error at I = to_integer(unsigned(address)))telling me that it can't determine the definition of the operation "=". I thought that a loop counter is an integer and the way I'm converting the address to an integer it should be doing a comparison between two integers. The other way I thought of doing this is to use an if statement comparing I and the address, but then I fear that it will not generate all of the required SRAM cells. Is there a way to solve this problem? - Making a generic vhdl decoder I'm trying to make a generic decoder in VHDL, however I am having a problem with the Decoderout output when I am trying to write to it. I am getting the error Error (10028): Can't resolve multiple constant drivers for net "decoder_output[n]" at decoder_generic.vhd(20). How can I resolve this issue? LIBRARY ieee; USE ieee.std_logic_1164.all; use ieee.numeric_std.all; entity decoder_generic is generic( n : positive := 4 -- amount of registers ); port( Decoderin : in std_logic_vector(n-1 downto 0); Decoderout: out std_logic_vector(2**n-1 downto 0) ); end decoder_generic; architecture archdecoder_generic of decoder_generic is begin Decoderout <= (others => '0'); -- default Decoderout(to_integer(unsigned(Decoderin))) <= '1'; end archdecoder_generic; - VHDL design - creating if loop within second process not working I've written a VHDL design that halves the clock's frequency and outputs this 'data clock' onto the sclk pin. I also have a data pin called 'sda' that I'd like to send data out of. The following code works fine. I see the clock signal out of sclk and sda is permanently set to high. Enable is attached to a push button. library ieee; use ieee.std_logic_1164.all; use ieee.std_logic_unsigned.all; -- For Main Clock -- library machXO3l; use machXO3l.all; -------------------- entity top is -- entity's pin out. port( enable : in std_logic; sda : out std_logic := '0'; sclk : out std_logic := '0' ); end entity; architecture top_behav of top is signal temp_sclk : std_logic := '0'; signal clk : std_logic; signal temp_sda : std_logic := '1'; signal stdby : std_logic := '0'; component OSCH -- Component description that is being used within the entity. -- synthesis translate_off generic (NOM_FREQ: string := "24.18"); -- synthesis translate_on port( STDBY : in std_logic; OSC : out std_logic ); end component; attribute NOM_FREQ : string; attribute NOM_FREQ of OSCinst0 : label is "24.18"; begin OSCinst0: OSCH -- synthesis translate_off generic map( NOM_FREQ => "24.18" ) -- synthesis translate_on -- mapping the OSCH component to our entity pin out. port map( OSC => clk, STDBY => stdby ); -- DATA CLOCK GENERATION sclk_p : process(clk, enable) begin if (enable = '0') then temp_sclk <= '0'; elsif (clk'event and clk = '1') then temp_sclk <= NOT temp_sclk; end if; end process; sclk <= temp_sclk; sda <= temp_sda; end top_behav; The problem is when I create the following process within the architecture, both lines are permanently set to 0. I don't understand why. Simulation works fine. I'm able to synthesize my code and program it on to the FPGA. But when monitoring the pins using a scope, they're just set to low. sda_p : process(clk, enable) begin if (enable = '0') then temp_sda <= '0'; else temp_sda <= '1'; end if; end process; The following works fine too: sda_p : process(clk, enable) begin temp_sda <= '1'; end process; I'm using lattice diamond and machx03l evk - How do you manipulate the output from R's cuminc function For instance, in the console, > x=cuminc(ftime=c(1,2,3,4,5,6,5,4),c(1,0,0,1,1,0,0,0),0) > x Estimates and Variances: $`est` 1 2 3 4 5 0 1 0.125 0.125 0.125 0.3 0.5333333 $var 1 2 3 4 5 0 1 0.015625 0.015625 0.015625 0.040625 0.0725 How would I get the 'est' data? - How to create a simulation of a small data set in R I am very new to programming, therefore, I apologize in case my question may seem to fundamental. Basically I have now a data set of apprx. 300 rows. The idea was now to create an entire new data set with the size of 10k for instance, however, which still has the same characteristics as the smlla data set of 300. ID Category1 Category2 Amount1 Probability1 1 Class1 A 100 0.3 2 Class2 B 800 0.2 3 Class3 C 300 0.7 4 Class2 A 250 0.4 5 Class3 C 900 0.6 I already did exploratory analysis. I know that my numeric data has a beta distribution and I know the mean and sd (and the level of skewness in case it is relevant) For my categorical data I know the percent distribution so for instance category A take 25% of the data set. Category B takes 35% and category C takes 40%. My question now is: what are the best packages in order to simulate this data and to create a bigger data set? I found on the simstudy package which seemed very goodm however, I am still very new to programming and I'm having hard time to get my head around the code. Here is the link to the description (I also checked the R documentation but for a newbie like me it is very hard to follow and fully understand it) I still don't really get how I can define there my categorical values. (They set there the percent distribution of the single classes but they dont actually set what apply to which class. Maybe, someone here could help me explain me how I could apply it on my data set or is there another better package for that? Thank you very much in advance! - Why does my balls keep speeding up after bouncing off the wall? [Verlet Integration] I am trying out simulation with Verlet Integration, but after two or so collisions, the balls speed up. Need advice with how to implement this, and possibly explaining why this happens. Thanks! BouncingPane.java - handles the behaviour of balls. import java.util.ArrayList; import java.util.Iterator; import javafx.scene.layout.Pane; public class BouncingPane extends Pane{ private ArrayList<Ball> balls; public BouncingPane() { super(); balls = new ArrayList<Ball>(); this.setPrefSize(300, 300); } public void addBall(Ball b) { balls.add(b); } public ArrayList<Ball> getBallsCollection() { return balls; } public void render() { this.getChildren().addAll(balls); } public void updatePos() { Iterator<Ball> it = this.getBallsCollection().listIterator(); Ball b; double x_new, y_new; while(it.hasNext()) { b = it.next(); x_new = b.getCenterX(); x_new += b.getCenterX() - b.getPX(); if(x_new - b.getRadius() < 0) { x_new = b.getRadius(); b.setPX(b.getRadius() - (b.getPX() - b.getCenterX())); x_new = b.getRadius(); } else if(x_new + b.getRadius() >= getWidth()) { x_new = getWidth() - b.getRadius(); b.setPX(getWidth() - b.getRadius() + (b.getCenterX() - b.getPX())); x_new = getWidth() - b.getRadius(); } else { b.setPX(b.getCenterX()); b.setCenterX(x_new); } y_new = b.getCenterY(); y_new += b.getCenterY() - b.getPY(); if(y_new - b.getRadius() < 0) { y_new = b.getRadius(); b.setPY(b.getRadius() - (b.getPY() - b.getCenterY())); y_new = b.getRadius(); } else if(y_new + b.getRadius() >= getHeight()) { y_new = getHeight() - b.getRadius(); b.setPY(getHeight() - b.getRadius() + (b.getCenterY() - b.getPY())); y_new = getHeight() - b.getRadius(); } else { b.setPY(b.getCenterY()); b.setCenterY(y_new); } } } } Ball.java - Just a Circle class that tracks previous position import javafx.scene.shape.Circle; import javafx.scene.paint.Color; public class Ball extends Circle{ private double prevX; private double prevY; public Ball(double x, double y, double rad, Color col) { super(x, y, rad, col); prevX = 0; prevY = 0; } public void setPX(double x) {prevX = x;} public void setPY(double y) {prevY = y;} public double getPX() {return prevX;} public double getPY() {return prevY;} } Main.java - Where the UI is initialised. Some imports were not used (I just copied from my previous codes) import java.util.Iterator; import javafx.animation.AnimationTimer; import javafx.animation.KeyFrame; import javafx.animation.KeyValue; import javafx.animation.Timeline; import javafx.application.Application; import javafx.geometry.Bounds; import javafx.scene.Scene; import javafx.scene.layout.Pane; import javafx.scene.paint.Color; import javafx.scene.shape.Circle; import javafx.stage.Stage; import javafx.util.Duration; public class Main extends Application{ public static void main(String[] args) { launch(); } public void start(Stage stage) { BouncingPane canvas = new BouncingPane(); Pane main = new Pane(canvas); Scene scene = new Scene(main, 300, 300); Ball ball = new Ball(100, 100, 5, Color.RED); Ball ball1 = new Ball(50, 50, 5, Color.YELLOW); canvas.addBall(ball); canvas.addBall(ball1); ball.setPX(99.0); ball.setPY(98.0); ball1.setPX(49.0); ball1.setPY(51.0); canvas.render(); stage.setTitle("Moving Ball"); stage.setScene(scene); stage.show(); startAnimation(canvas); } public void startAnimation(final BouncingPane canvas) { final AnimationTimer timer = new AnimationTimer() { long lastTime = 0; public void handle(long now) { if(lastTime > 0) { canvas.updatePos(); } lastTime = now; } }; timer.start(); } } - Debugging Microblaze Soft Processor on FPGA leads to incorrect breakpoints being hit I am trying to debug set of source files with UART capability and PS/2 using Memory Mapped Input Output and when I use the loader file I set the instruction and data memory depth to 262144 while adjusting the onboard RAM's on my FPGA to the same value. The issue encountered is that my program keeps breaking initially at different lines as opposed to the main function where I set my break point. How do I ensure code is properly fitting and that my breakpoints will reflect the current and correct line of code I am debugging? - Pulse Shaping using FPGA I have input PWM signal with arbitrary frequency and duty cycle, and I have to limit the duty cycle of the output PWM to a set value. If the input PWM duty cycle is lower than that set duty cycle than it can pass, if it is higher, than the output PWM duty cycle gets limited to the set duty cycle. So far I have been successful in creating a duty cycle limiter module that works by measuring the input PWM high pulse length and low pulse length, and calculating the duty cycle based on that, but this module doesn't work if the input pwm signal duty cycle varies continuously. It will work if the input duty cycle changes but than stays constant after that. How can I make it work if the duty cycle keeps changing and I am unable to measure the pulse widths - How do I generate an ELF file given custom C/C++ code to override default provided .elf file in an existing project? I have been trying to determine if I can generate an ELF (Executable Linkable Format) using SDK tools in Xilinx Vivado 2018.3 for generation of instruction memory content. Nowhere do I see a simple procedure for generating such a file (updated to match cpp sources modified after acquisition of a project using Microblaze and HDL Code, along with CPP source files. Any help in generation of ELF files in the mentioned toolset would be appreciated.
http://quabr.com/51277912/huge-state-machine-delay-in-post-synthesis-timing-simulation
CC-MAIN-2019-13
refinedweb
2,144
56.15
Assuming that every person has a National ID number, I am trying to check if the Entered number is a valid ID number, under these conditions: 1. ID number must be of 10 digits length. 2. If ID length is greater than/equal to 8 and less than 10 digits, add one or two 0s to the left. 3. ID's digits must not be all the same(e.g. '1111111111', '222222222',... 4. Multiply the first 9 digits by numbers 10 to 2 respectively, add them up and devide the sum by 11: 4.1.if the reminder is less than 2: the reminder must be equal to the last ID digit 4.2.if the reminder is equal to/greater than 2, subtract it from 11:the reminder must be equal to the last ID digit if any condition is not met, the ID number is INVALID. this is my effort: def ID_check(ID_code): if (all(x==ID_code[0] for x in ID_code)) or len(ID_code)<8 or len(ID_code)>10 : return False if 8<=len(ID_code)<10: ID_code = (10- len(ID_code))*'0'+ID_code intlist = [int(i) for i in ID_code] control = (intlist[0]*10+intlist[1]*9+intlist[2]*8+intlist[3]*7+intlist[4]*6+intlist[5]*5+intlist[6]*4+intlist[7]*3+intlist[8]*2)%11 if control<2: return control == intlist[9] elif control >= 2: control = 11 - control return control == intlist[9] print ID_check(raw_input("Enter Your ID Code Number: ")) Any Suggestion/Correction is appreciated. P.S. Sorry for my English, Its not my First language. Edited by M.S.
https://www.daniweb.com/programming/software-development/threads/451737/validating-personal-id-number
CC-MAIN-2017-26
refinedweb
266
67.49
I really get confused when I use the function torch.multiprocessing.spawn. Consider the following code: import torch import torch.multiprocessing as mp x = [1, 2] def f(id, a): print(x) print(a) if __name__ == '__main__': x.append(3) mp.spawn(f, nprocs=2, args=(x, )) For any process the main function spwans, it outputs the following: [1, 2] [1, 2, 3] I have the following questions: (1) Why is the first line of output [1, 2]? I think x is a global variable, and fork new process will share the memory in linux, which follows this page: (2) Are the parameters in spawn deep copied to the new processes? Or just pass a reference? Thank you very much!
https://discuss.pytorch.org/t/some-confusion-about-torch-multiprocessing-spawn-in-pytorch/56008
CC-MAIN-2022-21
refinedweb
120
72.76
Hi I am trying to display time in a text box which updates itself every second. I am using visual C# and I am pretty new to it. (My first program in visual C#) (Actually its part of a little bigger problem in which I have to implement a timer class, which will show me how much time has been elapsed since I have started the program. Again this elapsed time will update it every second) I know this can be done using timer class but I don't know exactly how to use it. I found a similar thread at DaniWeb, but I am stuck at "how to display it" and where are all the ticks thing that they talk about. Also I tried to use object browser but seems like I am unable to interpret what each thing displayed in object browser does. So here is my code: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Threading; namespace Timer { public partial class Timer : Form { public Timer() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { String hours; String minutes; String seconds; hours = DateTime.Now.Hour.ToString(); minutes = DateTime.Now.Minute.ToString(); seconds = DateTime.Now.Second.ToString(); textBox1.Text = "System Time - " + hours + ":" + minutes + ":" + seconds; timer1.Enabled = false; } private void button1_Click(object sender, EventArgs e) { timer1.Enabled = true; } private void button2_Click(object sender, EventArgs e) { } private void timer1_Tick(object sender, EventArgs e) { textBox1.Text = DateTime.Now.ToString("hh:mm:ss tt"); } private void button3_Click(object sender, EventArgs e) { timer1.Enabled = false; } } } I have shown the time using datetime.now, but how to update it everysecond. Other way I see is timer, setting interval to 1000, and showing it in the textbox. how to get the time of timer into the text box. Design of the form attached as a reference.
https://www.daniweb.com/programming/software-development/threads/198471/displaying-time-in-a-text-box
CC-MAIN-2017-30
refinedweb
315
68.57
If you were to ask a veteran programmer for one piece of advice on good programming practices, after some thought, the most likely answer would be, “Avoid global variables!”. And with good reason: global variables are one of the most historically abused concepts in the language. Although they may seem harmless in small academic programs, they are often problematic in larger ones. New programmers are often tempted to use lots of global variables, because they are easy to work with, especially when many calls to different functions are involved (passing data through function parameters is a pain). However, this is generally a bad idea. Many developers believe non-const global variables should be avoided completely! But before we go into why, we should make a clarification. When developers tell you that global variables are evil, they’re usually variable). g_mode doSomething() main() In short, global variables make the program’s state unpredictable. Every function call becomes: After debugging, you determine that your program isn’t working correctly because g_mode has value 3, not 4. How do you fix it? Now you need to find all of the places g_mode could possibly be set to 3, and trace through how it got set in the first place. It’s possible this may be in a totally unrelated piece of code! 3 One of the key reasons to declare local variables as close to where they are used as possible is because doing so minimizes the amount of code you need to look through to understand what the variable does. Global variables are at the opposite end of the spectrum -- because they can be accessed anywhere, you might have to look through the entire program to understand their usage. In small programs, this might not be an issue. In large ones, it will be. actually functions. Best practice Use local variables instead of global variables whenever possible. So what are very good reasons to use non-const global variables? There aren’t many. In most cases, there are other ways to solve the problem that avoids the use of non-const global variables. But in some cases, judicious use of non-const global variables can actually reduce program complexity, and in these rare cases, their use may be better than the alternatives. A good example is a log file, where you can dump error or debug information. It probably makes sense to define this as a global, because you’re likely to only have one log in a program and it will likely be used everywhere in your program. For what it’s worth, the std::cout and std::cin objects are implemented as global variables (inside the std namespace). As a rule of thumb, any use of a global variable should meet at least the following two criteria: There should only ever be one of the thing the variable represents in your program, and its use should be ubiquitous throughout your program. Many new programmers make the mistake of thinking that something can be implemented as a global because only one is needed right now. For example, you might think that because you’re implementing a single player game, you only need one player. But what happens later when you want to add a multiplayer mode (versus or hotseat)? Protecting yourself from global destruction If you do find a good use for a non-const global variable, a few useful bits of advice will minimize the amount of trouble you can get into. This advice isn’t only for non-const global variables, but can help with all global variables. First, prefix all non-namespaced global variables with “g” or “g_”, or better yet, put them in a namespace (discussed in lesson 6.2 -- User-defined namespaces), to reduce the chance of naming collisions. For example, instead of: Do this: Second, instead of allowing direct access to the global variable, it’s a better practice to “encapsulate” the variable. First, make sure the variable can only be accessed from within the file it’s declared in, eg. by making the variable static or const.. A reminder const variables have internal linkage by default, gravity doesn’t need to be static. const gravity static Third, when writing an otherwise standalone function that uses the global variable, don’t use the variable directly in your function body. Pass it in as an argument instead. That way, if your function ever needs to use a different value for some circumstance, you can simply vary the argument. This helps maintain modularity. Instead of: A joke What’s the best naming prefix for a global variable? Answer: // C++ jokes are the best. If you were writing a simple game, with, let's say, an inventory, and the whole game is comprised of a bunch of functions that change said inventory in some set way, would it be bad practice to use said inventory as a global? An inventory generally implies some sort of ownership (whose inventory is it?). Therefore, I'd define the inventory as a part of whoever the owner is (the player, a merchant, a container object such as a chest, etc...). That way if you later want to have multiple players or chests or whatever, each one can have its own associated inventory. From an implementation standpoint, I might define an Inventory type, and then include that as a subtype of the owning object: This will make more sense once we've covered structs (and classes). -Anakin, Global Variables are evil! -From my point of view, Local Variables are evil! -Well then you are lost! Good joke lol The lessons in this chapter are great! They're a bit difficult (because of how much information they contain), but I can always refer back to them when I'm unsure about something. 1. Nitpick: Section "Protecting yourself from global destruction": > First, make sure the variable can only be accessed from within the file it’s declared in, eg. "eg." should be "e.g." 2. > const variables have internal linkage by default, gravity doesn’t need to be static. Are there any negative consequences to adding "static" anyway for consistency? 3. > A joke Thank you for bringing this masterpiece upon our eyes. I hope to see many more like this. You cannot denigrate global variables just because they exist. They have plenty of valid uses. And they can be used quite reasonably and never cause any problem. It depends on the scope of the program. Small modular programs are always better, and global variables within a limit scope are sometimes the best way to maintain code readability. We don't need Nazi's trying to totally eliminate something or pronouncing them as "evil" just because they don't match their own way of thinking. Look in the mirror some time. Rewrite the article without the use of the "evil" term. external bool hate{true}; Jesus Christ dude this is a beginner-level C++ tutorial so it's perfectly reasonable to call them evil, as that leaves a strong memory to people new to programming not to use them when they really don't need to. It's 1's and 0's, not Jews. If it's best to have the global be internal, why not use 'constexpr' rather than 'const'? Wouldn't 'constexpr' be better, in that: it CAN'T be forward declared (it's forced to be internal) and is faster/more efficient than 'const'? This page's "reach other pages" link at the bottom is busted, might want to fix that. Hello! This is a very weird question. Can hackers use non-constant global variables to their advantage? Since they can be modified anywhere by any function?? ps: I don't know how they'll have access to the source code in the first place I am only speaking hypothetically. Thanks in advance :) If a hacker has access to your source code, you've got bigger problems that global variables. Staying hypothetical, the constness of the global variable doesn't matter. If a compiler can inline the variable (Replace its uses with the variable's value and remove the variable), it will do so no matter if the variable is `const` or not. The `const` only helps the compiler to determine if it can be inlined. If the variable doesn't get inlined, it can be overridden no matter if it's `const` or not. If the variable is `const`, overriding it causes undefined behavior, but it will very likely work. "That way, if your function ever needs to use a different value for some circumstance, you can simply vary the argument. " Does it mean that if the function wants to use a different global variable or different value for the same global variable? because for the second one we could just change the literals used to initialized the global variable! How come we can "export" this function to other files? '// this function can be exported to other files to access the global outside of this file' Functions have external linkage by default, they can be used in other files via a forward declaration. Great! Appreciated! Hi, I am confused a bit. You mentioned that, "if you find a good use for a non-const global variable". But all of your examples are about CONST global variables!why? "Protecting yourself from global destruction If you do find a good use for a non-const global variable, a few useful bits of advice will minimize the amount of trouble you can get into" > all of your examples are about CONST global variables!why? Because there aren't many good uses of non-const global variables. The advice given isn't only for non-const global variables, I updated the lesson. Thank you. I got the "//" joke although it took a little while. Well done on all the other jokes though! Thank you for the lesson. Am I correct in assuming that non-const global variables mean global variables that do not use the keyword "const" or "constexpr"? If that is the case, could I substitute "non-const" global variables for "non-constant" global variables? Also for the last section, "Protecting yourself from global destruction" is it correct to assume that these pieces of advice apply to all global variables, i.e non-const (e.g variables that don't use the const and constexpr keyword) as well const global variables? I ask the last question because at first my understanding of the section was that is gives advice for non-const variables, but then I became confused when it used const variables in the example. Thank you! > Am I correct in assuming [...] Yes, you're right. Non-const is a common term in the C++ world. > is it correct to assume that these pieces of advice apply to all global variables these pieces of advice apply to all global variables Within So what are very good reasons to use non-const global variables? You could remove: "of the" from the sentence: There should only ever be one "of the" thing the variable represents in your program, and its use should be ubiquitous throughout your program. How can we use something like that?? static const double gravity { 9.8 }; as I know 'const' variables are internal linkage by default This is covered in the first example in lesson 6.8. I think the problem here is that the `static` is redundant, because `const` already causes internal linkage. Definitely redundant! The question is somewhat ambiguous -- I guessed that he was asking how we can use an internal linkage across multiple files, but perhaps I was incorrect. I was wondering about this as well. The line "static const double gravity { 9.8 }; // has internal linkage, is accessible only by this file". Is there a reason for redundancy here? Could you add a comment to the section explaining why (or remove 'static' altogether)? Also, why wouldn't you use constexpr here? > Is there a reason for redundancy here? Nope, I removed it and added a reminder about `const`'s linkage. > why wouldn't you use constexpr here? `constexpr` is definitely the way to go. But with `constexpr`, the example wouldn't work, because it's never used with `extern`. This example applies if the type isn't `constexpr`, eg. `std::vector` before C++20. This example will likely be updated when we get new `constexpr` rules in C++20. Now the previous paragraph says "First, make the variable static, so it can only be accessed directly in the file in which it is declared.". Maybe "First, make sure the variable has internal linkage (e.g. by making it static, const, or constexpr), so it can only be accessed directly in the file in which it is declared." Hey guys. Still going through the tutorials when I get the time to do so. Noticed a mistake - minor and irrelevant to any examples - but I thought you might want to fix it. Section "So what are very good reasons to use non-const global variables?", second paragraph: "A good example is an log file,..." should be "A good example is a log file,..." Lesson amended, thanks! Isn't this invalid because constexpr can't be forward declared (and it's not inline)? I think the example should be changed to "const" or clarify in the comment. It's valid, but it doesn't make sense. I changed this example to use `const` instead, thanks for your suggestion! Parenthesis used in variable initialization instead of the { } Lesson updated, thanks again :) You're welcome. Teamwork =) !" -- could something like this not be fixed with shadowing>? for example for the nuclear missile example reinitializing the variable g_mode within the dosomething function like so int g_mode{} causes "missile" not to launch due to the shadowed value being hidden and g_mode(2) to be destroyed once the dosomething function is exited. Presumably doSomething() has a legitimate reason to change the global g_mode state. If it doesn't, it would be better off using a local variable named "mode", not a shadowing variable named "g_mode" that is actually a local variable. Name (required) Website Save my name, email, and website in this browser for the next time I comment.
https://www.learncpp.com/cpp-tutorial/why-global-variables-are-evil/
CC-MAIN-2021-17
refinedweb
2,373
64
Hello Everyone, I am new in this community to ask my django query, I am learning django programming and I am facing some problems like the exact role for Django model? I know about basic like Django’s model makes use of a powerful ORM layer which simplifies dealing with the database and the data and accelerates the development process but I want to more explore about it like what the structure and templates for django? Can anyone suggest me There are two main ways to organize your template structure in Django: the default app-level way and a custom project-level approach. Option 1: App Level By default the Django template loader will look within each app for a templates folder. But to avoid namespace issues you also need to repeat the app name in a folder below that before adding your template file. For example, if we had an example_project with a pages app and a home.html template file, the proper structure would be like this: within the pages app we create a templates directory, then a pages directory, and finally our pages | ├── home.html └── manage.py This is demonstrated in the official Django polls tutorial and works just fine. Option 2: Project Level As a Django projects grow in size it’s often more convenient to have all the templates in one place rather than hunting for them within multiple apps.. # settings.py TEMPLATES = [ { ... 'DIRS': [os.path.join(BASE_DIR, 'templates')], ... }, ] Then create a templates directory at the same level as the project. Here’s an example of what it would look like with the home.html └── manage.py To know more about What is Django? Just check [here](
https://forum.djangoproject.com/t/exploring-django-model/1110
CC-MAIN-2022-21
refinedweb
282
59.03
Hi, I am currently trying to implement a simple game soundtrack in the Unity3D Angry Bots Demo using FMod. For now it’s just one single event containing a Multisound module in which I have loaded two different 16 bar loops. I’ve set the Module to looping and random. When I trigger the event in Unity3D using .start() it starts playing, however, after 32 bars the sound stops, and resumes after a brief amount of silence. There are no gaps in the loops and it loops correctly in FMod itself. I am polling the "paused" value of the event in the Update() function and it’s always False, so the event is playing. Any ideas what I might be missing? - michaelklier asked 3 years ago - You must login to post comments Hi Peter, sorry for the late response. I got around to check back on this just today. I am indeed using FMod_Listener script in Unity3D. I have it attached to the AudioListener. Right after that I have this piece of code in another script: [code:2ab205o3] using UnityEngine; using System.Collections; using FMOD.Studio; public class MusicManager : MonoBehaviour { FMOD.Studio.EventInstance loop; float volume; bool paused; // Use this for initialization void Start () { loop = FMOD_StudioSystem.instance.GetEvent ("event:/Music/Explore"); loop.start (); } // Update is called once per frame void Update () { loop.getVolume (out volume); loop.getPaused (out paused); UnityEngine.Debug.Log (volume.ToString()); UnityEngine.Debug.Log (paused.ToString()); } } [/code:2ab205o3] My FMod Session contains just one looped event with a Multisound Module containing 2 loops. Inside FMod everything works as expected. Maybe I am missing something or doing sth. wrong (I have some experience when it comes to programming but I am mainly an audio guy :)). Thanks again for the response and have nice weekend! Michael - michaelklier answered 3 years ago The most likely cause would be update not being called. Are you calling Studio.System.update regularly (i.e. every frame) ? If you’re using the FMODListener.cs script this will happen automatically but if you’re creating the system yourself, you will need to make sure to call update. Other than that haven’t encountered any issues which would cause that problem, if it isn’t resolved by calling update please send you project through to [email protected] and I will take a look. - Guest answered 3 years ago
https://www.fmod.org/questions/question/forum-40956/
CC-MAIN-2017-26
refinedweb
393
68.87
Pure Python interface to the Pluggable Authentication Modules system on Linux. This is the first highly rated package on Python Diary to date, due to all it's appealing attributes. If your a Linux shop, and need a solution to enable authentication throughout your organization, more than likely your using PAM somewhere in there. Since PAM has so many backends and connects to NIS services, using this to authenticate your corporate applications makes complete sense. It may not enable SSO, but I personally think SSO is a tad insecure, since it doesn't protect against idiot employees who leave their workstations unlocked. The convenience of SSO has it's security drawbacks. I admit it's nice to be automatically signed into my intranet services at work. If you know nothing about UNIX like systems, NIS is something like Active Directory, where it is a central server which stores users and their passwords. However, unlike Active Directory, NIS is not LDAP, and NIS only stores users and their specific information. However, it is possible to use NIS to distribute /etc/hosts files across an organization, or other files such as automounts for NFS. Remember, Linux unlike Windows prefers to be very modular, so unlike Windows, your not going to have a single application doing a dozen things. However, since PAM is very modular(plugable), it supports various methods of authentication, using LDAP and even authentication through Active Directory using the SMB/CIFS protocol. In the end, using PAM in Python on a Linux server which is connected to a Windows domain server, will provide your organization with an authentication system all users can relate to(transparency). The end-users can literally use their Windows domain username and password to authenticate against PAM, so using this Python package will bring utter transparency to your corporate userbase. Using this package is super easy and almost failproof, just put the variables in the right place and your all set to go. Here's a simple example to show how easy this is: import pam from getpass import getuser, getpass from random import choice print "Please supply your credentials to receive an awesome quote..." if pam.authenticate(getuser(), getpass()): print choice(open('/usr/share/games/fortunes/fortunes','r').read().split('%')) else: print "The /var/log/auth.log file will be seen by an administration soon and you will be in trouble... That is your fortune." The code above assumes you have the BSD Fortune program and it's associated fortune files installed on your machine. This example will provide the best fortunate regardless if the username/password combo is correct(also considering how lazy your local *NIX admin is too). Here's how you would create a Django authentication backend for PAM: from django.contrib.auth.models import User import pam class PAMBackend(object): supports_inactive_user = False def authenticate(self, username=None, password=None): if pam.authenticate(username, password): try: user = User.objects.get(username=username) except User.DoesNotExist: # Create a new user. Note that we can set password # to anything, because it won't be checked; the password # from PAM will. user = User(username=username, password='get from PAM') user.save() return user return None def get_user(self, user_id): try: return User.objects.get(pk=user_id) except User.DoesNotExist: return None Lovely, isn't it? It's super simply to use this package in any framework really, I just thought I'd add a Django example, since most of the readers here use Django. This is why this package is rated so high, it's literally so easy to use and I highly doubt anybody will make a mistake while using it's syntax. There is a third parameter to authenticate which I neglected to mention. It is for the service which you are authenticating against. The default is login which is a rather sane default, so unless you really need to customize which PAM backend you want to authenticate against, it's safe to exclude it entirely. Pros: - Quick and easy to use - Zero configuration required - Works right after install - Easy to learn and understand Cons: - Can conflict with the python_pam module used for writing PAM modules Hi Kevin, I just saw the page and I want to write a simple django project for user authentication. These users are all linux users, so what am I supposed to do? Hi Kevin, I dont understand how PAM works. In your example, you use: pam.authenticate(username, password), and it returns true or false, but, where does PAM query? inside /etc/passwd ? So, you have two methods of auth, because then you create user at Django, really? I'm lost. thanks in advance Kevin Hello Manu, that is a great question. PAM queries anywhere you tell it to, it's configured within PAM. Normally /etc/passwd is queries, but other sources can be added such as LDAP, Windows Domain Logon, etc... The reason Django also adds the user is for that "User Relationships" still work. Say if a user creates a new object in the database that needs to be assigned to them somehow, you can't connect MySQL directly to PAM, so a dummy user needs to be created for this purpose. This is how all Django authentication plugins work, such as facebook connect, OpenID, etc...
http://pythondiary.com/reviews/python-pamV0.1.4.html
CC-MAIN-2014-35
refinedweb
881
54.63
Comparing XML documents Within the Unix environment, one of my favorite utilities is the diff command, which allows me to get a glimpse of differences between two files. I often use it to compare text files containing data or program source files to see what exactly changed between two versions. Naturally, I used diff to compare XML files as well since these days it seems all data files are in XML. It soon became evident that I'm missing half of the story. The diff tool has its roots in text comparison (character by character). It obviously is not aware of the structure of an XML file and cannot take advantage of the inherent hierarchy to make a comparison that is meaningful in the XML context. I started searching for an XML-aware diff tool and found one at. There are two main products that are both Java-based. The first is DeltaXML-Markup, which can compare and combine well-formed XML files (without DTD). The other product is DeltaXML-DTD that does the same for valid XML files (with a DTD). This results in smaller change files because it understands more about the structure of the data. For example, it can ignore changes to element order that are not significant, and it can match elements in the two files based on some keys in the data. java jar dxml.jar status which should produce the following: Delta XML Tools for DeltaXML-Markup(version 1_7) License Details: Using built-in license key | Function | Mode | Expiration | +--------------+--------+--------------+ | Compare |DEMO | PERMANENT | | Combine |DEMO | PERMANENT | Usage: java -jar dxml.jar compare [-v] [-q] file1.xml file2.xml delta.xml java -jar dxml.jar combine [-v] [-q] file.xml delta.xml result.xml java -jar dxml.jar combine-forward [-v] [-q] file1.xml delta.xml file2.xml java -jar dxml.jar combine-reverse [-v] [-q] file2.xml delta.xml file1.xml java -jar dxml.jar relicense license-key java -jar dxml.jar status With the evaluation license, you are limited to small files. I used the following simple XML file (stored in a file called a.xml): <?xml version="1.0"?> <classroom> <student grade="8"> <name> John Doe </name> <score> 88 </score> </student> <student grade = "10"> <name> Jane Doe </name> <score> 98 </score> </student> <student grade="9"> <name> Bill Jones </name> <score> 91 </score> </student> </classroom> To create b.xml, I changed the grade of the second student to 11 (instead of 10) and I changed the last student's score element to scored. I then ran the comparator as follows: java -jar dxml.jar compare -v a.xml b.xml delta.xml The resulting file defines its own namespace (deltaxml) and then proceeds to mark the various elements as either unchanged or specify the nature of the change. For example, the student whose grade was changed to 11 produced the following: <student deltaxml: The product ships with an XSL stylesheet that formats the result file into HTML tables for a concise and clear picture of the changes. The HTML table uses color coding to show what elements/attributes have changed and what the old/new values are. I used the Apache Xalan processor as follows: C:\DeltaXML-Markup-1_7>java org.apache.xalan.xslt.Process -xsl deltaxml-tables.xsl -in delta.xml -out visualdelta.html You can find additional documentation and technical details about DeltaXML from the web site. The version that understands DTD should be helpful when dealing with various meta data repositories. I can also see some usage in comparing/combining XSLT stylesheets with this tool. - Piroz Mohseni Piroz Mohseni is a freelance writer for Developer.com
http://www.developer.com/xml/article.php/641591/Comparing-XML-documents.htm
CC-MAIN-2014-52
refinedweb
605
66.03
If! This same process will work for you too (skip to thesection).: var express = require('express'); var router = express.Router(); /* GET users listing. */ router.get('/', function(req, res, next) { // Comment out this line: //res.send('respond with a resource'); // And insert something like this instead: res.json([{ id: 1, username: "samsepi0l" }, { id: 2, username: "D0loresH4ze" }]); }); module.exports = router;: import React, { Component } from 'react'; import './App.css'; class App extends Component { state = {users: []} componentDidMount() { fetch('/users') .then(res => res.json()) .then(users => this.setState({ users })); } render() { return ( <div className="App"> <h1>Users</h1> {this.state.users.map(user => <div key={user.id}>{user.username}</div> )} </div> ); } } export default App;. If your browser doesn’t have fetch support yet, you’ll need to install the polyfill . See here for which browsers currently support fetch..
http://126kr.com/article/8t9le1x2q0n
CC-MAIN-2017-17
refinedweb
132
63.36
I created phandom.org a few months ago, but yesterday finally found the time to make some needed changes to it. So, now is a good time to explain how I'm using Phandom in some of my unit tests. Before I get started, though, I should say a few words about phantomjs, which is a JavaScript interface for WebKit. WebKit, on the other hand, is a web browser without a user interface. WebKit is a C++ library that enables manipulation of HTML content, through DOM calls. For example, this is a simple JavaScript located code in example.js: var page = require('webpage').create(); page.open( '', function() { console.log('loaded!'); phantom.exit(0); } ); We run phantomjs from the command line with the following code: $ phantomjs example.js PhantomJS creates a page object (provided by webpage module inside phantomjs), and then asks it to open() a Web page. The object communicates with WebKit and converts this call into DOM instructions. After which, the page loads. The PhantomJS engine then terminates on line 6. WebKit renders a web page with all necessary components such as CSS, JavaScript, ActionScript, etc, just as any standard Web browser would. So far so good, and this is the traditional way of using PhantomJS. Now, on to giving you an idea of how Phandom (which stands for "PhantomJS DOM") works inside Java unit tests: To test this, let's give phantomjs an HTML page and ask him to render it. When the page is ready, we'll ask phantomjs to show us how this HTML looks in WebKit. If we see the elements we need and desire,—we're good to go. Let's use the following example: import com.rexsl.test.XhtmlMatchers; import org.hamcrest.MatcherAssert; import org.phandom.Phandom; public class DocumentTest { @Test public void rendersValidHtml() { Document doc = new Document(); // This is the method we're testing. It is supposed // to return a valid HTML without broken JavaScript // and with all required HTML elements. String html = doc.html(); MatcherAssert.assertThat( XhtmlMatchers.xhtml(new Phandom(html).dom()), XhtmlMatchers.hasXPath("//p[.='Hello, world!']") ); } } When we use the above code, here is what happens. First, we get HTML html as a String from doc object, and then pass it to Phandom as an argument. Then, on line 13, we call the Phandom.dom() method to get an instance of the class org.w3c.dom.Document. If our HTML contains any broken JavaScript code, method dom() produces a runtime exception and the unit test fail. If HTML is clean and WebKit is able to render it without problems, the test passes. I'm using this mechanism in a few different projects,and it works quite well. Therefore, I highly recommend it. Of course, you shouldn't forget that you must have phantomjs installed on your build machine. In order to avoid unit test failures when phantomjs is not available or present, I've created the following supplementary method: public class DocumentTest { @Test public void rendersValidHtml() { Assume.assumeTrue(Phandom.installed()); // the rest of the unit test method body... } } Enjoy and feel free to report any bugs or problems you encounter to: GitHub issues :)
http://www.yegor256.com/2014/04/06/phandom.html
CC-MAIN-2017-26
refinedweb
523
66.84
You may think the motivation for good style is earning that ✔+ from your section leader, but the most important beneficiary of your efforts is you yourself. Committing yourself to writing tidy, well-structured code from the start sets you up for good times to come. Your code will be easier to test, will have fewer bugs, and what bugs there are will be more isolated and easier to track down. You finish faster, your results are better, and your life is more pleasant. What's not to like? The guidelines below identify some of style qualities we will be looking for when grading your programs. As with any complex activity, there is no one "best" style, nor a definitive checklist that covers every situation. That said, there are better and worse choices and we want to guide you toward the better choices. In grading, we will expect that you make a concerted effort to follow these practices. While it is possible to write code that violates these guidelines and yet still exhibits good style, we recommend that you adopt our habits for practical purposes of grading. If you have theoretical points of disagreement, come hash that out with us in person. In most professional work environments you are expected to follow that company's style standards. Learning to carefully obey a style guide, and writing code with a group of other developers where the style is consistent among them, are valuable job skills. This guide gives our general philosophy and priorities, but even more valuable will be the guidance on your own particular style choices. Interactive grading with your section leader is your chance to receive one-on-one feedback, ask questions, and learn about areas for improvement. Don't miss out on this opportunity! Layout, Indentation, and Whitespace Indentation: Consistent use of whitespace/indentation always! Proper whitespace/indentation illuminates the structure of your program and makes it easier to follow the code and find errors. - Increment indent level on each opening brace {, and decrement on each closing brace }. - Chose an increment of 2-4 spaces per indent level. Be consistent. - Do not place more than one statement on the same line. // confusing, hard to follow while (x < y) { if (x != 0) { binky(x); } else { winky(y); y--; }} return x; // indentation follows structure while (x < y) { if (x != 0) { binky(x); } else { winky(y); y--; } } return x; - Long lines: When any line is longer than 100 characters, break it into two lines. Indent the overflow text to align with text above. result = longFunctionName(argument, 106 * expression * variable) + variable - longerFunctionName() + otherFunction(variable); result = longFunctionName(argument, 106 * expression * variable) + variable - longerFunctionName() + otherFunction(variable); Blank lines: Use blank lines to separate functions and logical groups of statements within a function. Whitespace: Add space between operators and their operands. Add parentheses to show grouping where precedence might be unclear to reader. int root = (-b+sqrt(b*b-4*a*c))/2*a; int root = (-b + sqrt((b * b) - (4 * a * c))) / (2 * a); Names Choose meaningful identifiers. This reduce the cognitive load for reader and self-documents the purpose of each variable and function. - Nouns for variable names: For variables, the question is "What is it?" Use a noun ( name, scores) add modifier to clarify ( courseName, maxScore). Do not repeat the variable type in its name (not titleString, just title). Avoid one-letter names like aor p(exceptions for loop counters i, jor, coordinates xand y). Never name a variable l, much too easily confused with the number one. Verbs for function names: For functions, the question is "What does it do?" Functions which perform actions are best identified by verbs ( findSmallest, stripPunctuation, drawTriangle). Functions used primarily for their return value are named according to property being returned ( isPrime, getAge). Use named constants: Avoid sprinkling magic numbers throughout your code. Instead declare a named constvalue and use where that value is needed. This aids readability and gives one place to edit value when needed. const int VOTING_AGE = 18; - Capitalization: Use camel-case for names of functions and variables ( countPixels), capitalize names of classes/types ( GridLocation), and uppercase names of constants ( MAX_WIDTH). Conventions allows reader to quickly determine which category a given identifier belongs to. Variable scope - Scope: Declare variables in the narrowest possible scope. For example, if a variable is used only inside a loop, declare it inside the scope for the loop body rather than at the top of the function or at the top of the file. - Don't reuse same name in inner scope: Declaring a variable in inner scope with same name as a variable in outer scope will cause inner use of name to "shadow" the outer definition. Not only is this confusing, it often leads to difficult bugs. - No global variables: Do not declare variables at global scope. When there is information to be shared across function calls, it should flow into and out via parameters and return values, not reach out and access global state. Use of C++ language features Prefer C++ idioms over C idioms: Since C++ is based on C, there is often a "C++ way" to do a given task and also a "C way". For example, the "C++ way" to print output is via the output stream cout, while the "C way" is using printf. C++ strings use the stringclass, older code uses the C-style char*. Prefer the modern C++ way. // old school char* str = "My message"; printf("%s\n", str); // modern and hip string str = "My message"; cout << str << endl; for vs while: Use a forloop when the number of repetitions is known (definite); use a whileloop when the number of repetitions is unknown (indefinite). // loop exactly n times for (int i = 0; i < n; i++) { ... } // loop until there are no more lines string str; while (input >> str) { ... } break and continue in loops: Wherever possible, a loop should be structured in the ordinary way with clear loop start, stop, advance and no disruptive loop control. That said, there are limited uses of breakthat are okay, such as loop-and-a-half ( while(true)with break) or need to exit loop mid-iteration. Use of continueis quite rare and often confusing to reader, better to avoid. Use of fallthrough in switch cases: A switch case should almost always end with a breakor returnthat prevents continuing into the subsequent case. In the very rare case that you intend to fallthrough, add a comment to make that clear. Accidental fallthrough is the source of many a difficult bug. switch (val) { case 1: handleOne(); break; case 2: handleTwo(); // NOTE: fallthrough *** case 3: handleTwoOrThree(); return statements Although it is allowed for a function to have multiple returnstatements, in most situations it is preferable to funnel through a single returnstatement at the end of the function. An early returncan be a clean option for a recursive base case or error handled at the beginning of a function. returncan also serve as a loop exit. However, scattering other returnthroughout the function is not a good idea– experience shows they are responsible for a disproportionate number of bugs. It is easy to overlook the early-return case and mistakenly assume the function runs all the way to its end. Always include {}on control statements: The body an if/else, for, while, etc., should always be wrapped in {}and have proper line breaks, even if the body is only a single line. Using braces prevents accidents like the one shown below on left. // ugh if (count == 0) error("not found"); for (int i = 0; i < n; i++) draw(i); if (condition) doFirst(); doSecond(); // inside? Indent looks so, but no braces! // better if (count == 0) { error("not found"); } for (int i = 0; i < n; i++) { draw(i); } if (condition) { doFirst(); doSecond(); } - Booleans: Boolean expressions are prone to redundant/awkward constructions. Prefer concise and direct alternatives. A boolean value is true or false, you do not need to further compare to true/false or convert a boolean expression to true/false. if (isWall == true) { ... } if (matches > 0) { return true; } else { return false; } // better if (isWall) { ... } return (matches > 0); - Favor &&, ||, and !over and, or, and not: For various reasons mostly related to international compatibility, C++ has two ways of representing the logical connectives AND, OR, and NOT. Traditionally, the operators &&, ||, and !are used for AND, OR, and NOT, respectively, and the operators are the preferred ways of expressing compound booleans. The words and, or, and notcan be used instead, but it would be highly unusual to do so and a bit jarring for C++ programmers used to the traditional operators. // non-standard if ((even and positive) or not zero) { ... } // preferred if ((even && positive) || !zero) { ... } - Use error to report fatal conditions: The errorfunction from the Stanford library can be used to report a fatal error with your custom message. The use of erroris preferred over throwing a raw C++ exception because it plays nicely with the debugger and our SimpleTest framework. // raw exception if (arg < 0) { throw arg; } // preferred if (arg < 0) { error("arg must be positive!"); } Efficiency In CS106B, we value efficient choices in data structure and algorithms especially where there is significant payoff, but are not keen on micro-optimizations that serve to clutter the code for little gain. Better BigO class: Given a choice of options for implementing an algorithm, the preference is generally for the one with better Big O, i.e. an O(NlogN) algorithm is preferable to quadratic O(N^2), constant O(1) or logarithmic O(logN) beats out linear O(N). Choose best performing ADT for situation: For example, if you need to do many lookup operations on collection, Setwould be preferable to Vectorbecause of efficient containsoperation. All Stack/Queue operations are O(1) making Stackan ideal choice if you only add/remove at top or Queueperfect if you remove from head and add at tail. There is a small win for choosing HashSet/ HashMapover Set/ Mapwhen you do not require access to elements in sorted order. Save expensive call result and re-use: If you are calling an expensive function and using its result multiple times, save that result in a variable rather than having to call the function multiple times. This optimization is especially valuable inside a loop body. // computes search twice if (reallySlowSearch(term) >= 0) { remove(reallySlowSearch(term)); } // avoid recompute int index = reallySlowSearch(term); if (index >= 0) { remove(index); } - Avoid copying large objects: When passing an object as a parameter or returning an object from a function, the entire object must be copied. Copying large objects, such as collection ADTs, can be expensive. Pass the object by reference avoid this expense. The client and the function then share access to the single instance. // slow because of copying void process(Set<string> data) { ... } Vector<int> fillVector() { Vector<int> v; // add data to v ... return v; // makes copy } // improved efficiency void process(Set<string>& data) { ... } // shares vector without making copy void fillVector(Vector<int>& v) { // add data to v ... } Unify common code, avoid redundancy When drafting code, you may find that you repeat yourself or copy/paste blocks of code when you need to repeately perform the same/similar tasks. Unifying that repeated code into one passage simplifies your design and means only one piece of code to write, test, debug, update, and comment. - Decompose to helper function: Extract common code and move to helper function. // repeated code if (g.inBounds(left) && g[left] && left != g[0][0] ) { return true; } else if g.inBounds(right) && g[right] && right != g[0][0] ) { return true; } // unify common into helper bool isViable(GridLocation loc, Grid<bool>& g) { return g.inBounds(loc) && g[loc] && loc != g[0][0]); } ... return isViable(left, g) || isViable(right, g); - Factoring out common code: Factor out common code from different cases of a chained if-else or switch. // repeated code if (tool == CIRCLE) { setColor("black"); drawCircle(); waitForClick(); } else if (tool == SQUARE) { setColor("black"); drawSquare(); waitForClick(); } else if (tool == LINE) { setColor("black"); drawLine(); waitForClick(); } // factor out common setColor("black"); if (tool == CIRCLE) { drawCircle(); } else if (tool == SQUARE) { drawSquare(); } else if (tool == LINE) { drawLine(); } waitForClick(); Function design A well-designed function exhibits properties such as the following: - Performs a single independent, coherent task. - Does not do too large a share of the work. - Is not unnecessarily entangled with other functions. - Uses parameters for flexibility/re-use (rather that one-task tool). - Clear relationship between information in (parameters) and out (return value) Function structure: An overly long function (say more than 20-30 lines) is unwieldy and should be decomposed into smaller sub-functions. If you try to describe the function's purpose and find yourself using the word "and" a lot, that probably means the function does too many things and should be subdivided. Value vs. reference parameters: Use reference parameters when need to modify value of parameter passed in, or to send information out from a function. discr = sqrt((b * b) -(4 * a * c); root1 = (-b + discr) / (2 * a); root2 = (-b - discr) / (2 * a); } Prefer return value over reference 'out' parameter for single value return: If a single value needs to be sent back from a function, it is cleaner to do with return value than a reference out parameter. // harder to follow void max(int a, int b, int& result) { if (a > b) { result = a; } else { result = b; } } // better as int max(int a, int b) { if (a > b) { return a; } else { return b; } } Avoid "chaining" calls, where many functions call each other in a chain without ever returning to main. Here is a diagram of call flow with (left) and without (right) chaining: // chained control flow main | +-- doGame | +-- initAndPlay | +-- configureAndPlay | +-- readCubes | +-- playGame | +-- doOneTurn // better structured as main | +-- welcome | +-- initializeGame | | | +-- configureBoard | | | +-- readCubes | +-- playGame | | | +-- doOneTurn Commenting Some of the best documentation comes from giving types, variables, functions, etc. meaningful names to begin and using straightforward and clear algorithms so the code speaks for itself. Certainly you will need comments where things get complex but don't bother writing a large number of low-content comments to explain self-evident code. The audience for all commenting is a C++-literate programmer. Therefore you should not explain the workings of C++ or basic programming techniques. Some programmers like to comment before writing any code, as it helps them establish what the program is going to do or how each function will be used. Others choose to comment at the end, now that all has been revealed. Some choose a combination of the two, commenting some at the beginning, some along the way, some at the end. You can decide what works best for you. But do watch that your final comments do match your final result. It's particularly unhelpful if the comment says one thing but the code does another thing. It's easy for such inconsistencies to creep in the course of developing and changing a function. Be careful to give your comments a once-over at the end to make sure they are still accurate to the final version of the program. File/class header: Each file should have an overview comment describing that file's purpose. For an assignment, this header should include your name, course/section, and a brief description of this file's relationship to the assignment. Citing sources: If your code was materially influenced by consulting an external resource (web page, book, another person, etc.), the source must be cited. Add citations in a comment at the top of the file. Be explicit about what assistance was received and how/where it influenced your code. - Function header: Each function should have a header comment that describes the function's behavior at a high level, as well as information about: - Parameters/return: Give type and purpose of each parameter going into function and type and purpose of return value. - Preconditions/assumptions: Constraints/expectations that client should be aware of.("this function expects the file to be open for reading"). - Errors: List any special cases or error conditions the function handles (e.g. "…raises error if divisor is 0", or "…returns the constant NOT_FOUND if the word doesn't exist"). Inline comments: Inline comments should be used sparingly where code complex or unusual enough to warrant such explanation. A good rule of thumb is: explain what the code accomplishes rather than repeat what the code says. If what the code accomplishes is obvious, then don't bother. // inline babbling just repeats what code already says, don't! int counter; // declare a counter variable counter++; // increment counter while (index < length) // while index less than length TODOs: Remove any // TODO:comments from a program before turning it in. Commented-out code: It is considered bad style to submit a program with large.
https://web.stanford.edu/class/cs106b/resources/style_guide.html
CC-MAIN-2021-21
refinedweb
2,778
62.27
The V rand module provides two main ways in which users can generate pseudorandom numbers: randmodule. import rand- Import the randmodule. rand.seed(seed_data)to seed (optional). rand.int(), rand.u32n(max), etc. import rand.pcg32- Import the module of the PRNG required. mut rng := pcg32.PCG32RNG{}- Initialize the struct. Note that the mutis important. rng.seed(seed_data)- optionally seed it with an array of u32values. rng.int(), rng.u32n(max), etc. You can change the default generator to a different one. The only requirement is that the generator must implement the PRNG interface. See get_current_rng() and set_rng(). For non-uniform distributions, refer to the rand.dist module which defined functions for sampling from non-uniform distributions. These functions make use of the global RNG. Note: The global PRNG is not thread safe. It is recommended to use separate generators for separate threads in multi-threaded applications. If you need to use non-uniform sampling functions, it is recommended to generate them before use in a multi-threaded context. For sampling functions and generating random strings, see string_from_set() and other related functions defined in this top-level module. For arrays, see rand.util. A PRNG is a Pseudo Random Number Generator. Computers cannot generate truly random numbers without an external source of noise or entropy. We can use algorithms to generate sequences of seemingly random numbers, but their outputs will always be deterministic. This is often useful for simulations that need the same starting seed. If you need truly random numbers that are going to be used for cryptography, use the crypto.rand module. The following 21 functions are guaranteed to be supported by rand as well as the individual PRNGs. seed(seed_data)where seed_datais an array of u32values. Different generators require different number of bits as the initial seed. The smallest is 32-bits, required by sys.SysRNG. Most others require 64-bits or 2 u32values. u32(), u64(), int(), i64(), f32(), f64() u32n(max), u64n(max), intn(max), i64n(max), f32n(max), f64n(max) u32_in_range(min, max), u64_in_range(min, max), int_in_range(min, max), i64_in_range(min, max), f32_in_range(min, max), f64_in_range(min, max) int31(), int63() There are several additional functions defined in the top-level module that rely on the global RNG. If you want to make use of those functions with a different PRNG, you can can change the global RNG to do so. All the generators are time-seeded. The helper functions publicly available in rand.seed module are: time_seed_array()- returns a []u32that can be directly plugged into the seed()functions. time_seed_32()and time_seed_64()- 32-bit and 64-bit values respectively that are generated from the current time. Note that the sys.SysRNG struct (in the C backend) uses C.srand() which sets the seed globally. Consequently, all instances of the RNG will be affected. This problem does not arise for the other RNGs. A workaround (if you must use the libc RNG) is to: Please note that math interval notation is used throughout the function documentation to denote what numbers ranges include. An example of [0, max) thus denotes a range with all posible values between 0 and max including 0 but excluding max. fn ascii(len int) string ascii returns a random string of the printable ASCII characters with length len. fn byte() byte byte returns a uniformly distributed pseudorandom 8-bit unsigned positive byte. fn f32() f32 f32 returns a uniformly distributed 32-bit floating point in range [0, 1). fn f32_in_range(min f32, max f32) f32 f32_in_range returns a uniformly distributed 32-bit floating point in range [min, max). fn f32n(max f32) f32 f32n returns a uniformly distributed 32-bit floating point in range [0, max). fn f64() f64 f64 returns a uniformly distributed 64-bit floating point in range [0, 1). fn f64_in_range(min f64, max f64) f64 f64_in_range returns a uniformly distributed 64-bit floating point in range [min, max). fn f64n(max f64) f64 f64n returns a uniformly distributed 64-bit floating point in range [0, max). fn get_current_rng() &PRNG get_current_rng returns the PRNG instance currently in use. If it is not changed, it will be an instance of wyrand.WyRandRNG. fn hex(len int) string hex returns a hexadecimal number of length len containing random characters in range [a-f0-9]. fn i64() i64 i64 returns a uniformly distributed pseudorandom 64-bit signed (possibly negative) i64. fn i64_in_range(min i64, max i64) i64 i64_in_range returns a uniformly distributed pseudorandom 64-bit signed i64 in range [min, max). fn i64n(max i64) i64 i64n returns a uniformly distributed pseudorandom 64-bit signed positive i64 in range [0, max). fn int() int int returns a uniformly distributed pseudorandom 32-bit signed (possibly negative) int. fn int31() int int31 returns a uniformly distributed pseudorandom 31-bit signed positive int. fn int63() i64 int63 returns a uniformly distributed pseudorandom 63-bit signed positive i64. fn int_in_range(min int, max int) int int_in_range returns a uniformly distributed pseudorandom 32-bit signed int in range [min, max). Both min and max can be negative, but we must have min < max. fn intn(max int) int intn returns a uniformly distributed pseudorandom 32-bit signed positive int in range [0, max). fn new_default(config PRNGConfigStruct) &PRNG new_default returns a new instance of the default RNG. If the seed is not provided, the current time will be used to seed the instance. fn seed(seed []u32) seed sets the given array of u32 values as the seed for the default_rng. The default_rng is an instance of WyRandRNG which takes 2 u32 values. When using a custom RNG, make sure to use the correct number of u32s. fn set_rng(rng &PRNG) set_rng changes the default RNG from wyrand.WyRandRNG (or whatever the last RNG was) to the one provided by the user. Note that this new RNG must be seeded manually with a constant seed or the seed.time_seed_array() method. Also, it is recommended to store the old RNG in a variable and should be restored if work with the custom RNG is complete. It is not necessary to restore if the program terminates soon afterwards. fn string(len int) string string returns a string of length len containing random characters in range [a-zA-Z]. fn string_from_set(charset string, len int) string string_from_set returns a string of length len containing random characters sampled from the given charset fn u32() u32 u32 returns a uniformly distributed u32 in range [0, 2³²). fn u32_in_range(min u32, max u32) u32 u32_in_range returns a uniformly distributed pseudorandom 32-bit unsigned u32 in range [min, max). fn u32n(max u32) u32 u32n returns a uniformly distributed pseudorandom 32-bit signed positive u32 in range [0, max). fn u64() u64 u64 returns a uniformly distributed u64 in range [0, 2⁶⁴). fn u64_in_range(min u64, max u64) u64 u64_in_range returns a uniformly distributed pseudorandom 64-bit unsigned u64 in range [min, max). fn u64n(max u64) u64 u64n returns a uniformly distributed pseudorandom 64-bit signed positive u64 in range [0, max). fn ulid() string ulid generates an Unique Lexicographically sortable IDentifier. See . NB: ULIDs can leak timing information, if you make them public, because you can infer the rate at which some resource is being created, like users or business transactions. () fn ulid_at_millisecond(unix_time_milli u64) string ulid_at_millisecond does the same as ulid but takes a custom Unix millisecond timestamp via unix_time_milli. fn uuid_v4() string uuid_v4 generates a random (v4) UUID See interface PRNG { seed(seed_data []u32) u32() u32 u64() u64 u32n(max u32) u32 u64n(max u64) u64 u32_in_range(min u32, max u32) u32 u64_in_range(min u64, max u64) u64 int() int i64() i64 int31() int int63() i64 intn(max int) int i64n(max i64) i64 int_in_range(min int, max int) int i64_in_range(min i64, max i64) i64 f32() f32 f64() f64 f32n(max f32) f32 f64n(max f64) f64 f32_in_range(min f32, max f32) f32 f64_in_range(min f64, max f64) f64 } PRNG is a common interface for all PRNGs that can be used seamlessly with the rand modules's API. It defines all the methods that a PRNG (in the vlib or custom made) must implement in order to ensure that all functions can be used with the generator. struct PRNGConfigStruct { seed []u32 = seed.time_seed_array(2) } PRNGConfigStruct is a configuration struct for creating a new instance of the default RNG. Note that the RNGs may have a different number of u32s required for seeding. The default generator WyRand used 64 bits, ie. 2 u32s so that is the default. In case your desired generator uses a different number of u32s, use the seed.time_seed_array() method with the correct number of u32s.
https://modules.vlang.io/rand.html
CC-MAIN-2021-31
refinedweb
1,439
65.12
Okay, so you just finished your coding bootcamp. You bought this book, Cracking the Coding Interview. You open it up, try to do some problems. When you go to look at the solution, you see some strange things. For example, to check to see if two strings are equal, the solution suggests string1.equals(string2). Why can't we just use string1 == string2? The book's solutions, detailed as they are, might as well be written in Russian if the only languages you know are JavaScript and Python, and you just learned them a couple months ago. After struggling to make sense of the book, you give up and set it aside, letting it become a big green paperweight on the end of your desk. System.out.println("Am I having a stroke?"); As it turns out, the solutions are in Java, the object oriented language mainly taught at a number of universities. At least, they taught it at the university I went to, where the CS program is so competitive that a 3.9 grade in the intro course might be egregious enough for them to reject your major application. Perhaps kicking off the first day of class with public static void main(String[] args) is their first attempt to scare off potential CS students. But in bootcamp, we ain't like that. Everyone is welcome. Anyone can learn to code if they're willing to put in the effort. But also, the program is only 12 weeks, so instead of Java, we'll start with something easier, like Python. Why do bootcamps teach Python instead of Java? A number of factors make Python simpler to learn than Java. As stated before, Java is an object oriented language. This means everything must be defined in a class, i.e. everything must be an object. For Java to run your code, you have to put it in the main method (that gibberish I said was used to scare away potential CS majors). Java is also a compiled language, meaning that before it runs, it has to compile for syntax errors. Remember in JavaScript when you missed a semicolon, and everything ran fine anyway? Not so with Java. One missing semicolon means your program won't compile, which means it won't run, either. And finally, Java is typed, meaning every variable has to be declared with its type. Remember the let keyword in JavaScript? Like "let this variable be like, whatever type idk"? Java wants to know that you want it to be an integer, so instead, you would use the keyword int-- and that variable would have to stay an integer as long as you use it. Taking all this into consideration, Python syntax looks much cleaner. For example, let's look at what's needed to print "Hello World" to the console with Java versus Python 3. Main.java public class Main { public static void main(String[] args) { System.out.println("Hello World"); } } hello_world.py print("Hello World") So it's no wonder that they taught you Python instead. Powerful built in methods make for great data analysis, so there are plenty of places to use it in industry. But learning data structures and algorithms isn't about the language you use; it's about the theory behind it. I know, not something they spent a lot of time on in your bootcamp. That's why I'm here. Every week, I'll be posting simple, easy-to-understand explanations of sample whiteboarding problems, approaching them how you would as a Python developer. Solutions will be posted to a github repo here. Of course, you won't be an expert over night, but I hope some weekly coffee reading will help put you on track to cracking those coding interviews in Python. See you next time! Sheamus Heikkila is formerly a Teaching Assistant at General Assembly Seattle. This blog is not associated with GA. Top comments (0)
https://dev.to/pythonwb/how-to-learn-data-structures-and-algorithms-when-you-re-fresh-out-of-your-bootcamp-2bob
CC-MAIN-2022-40
refinedweb
659
74.9
At AlgoTech Solutions, we are always interested in modern technologies and what makes them unique. Previously we’ve covered why converting to another web framework is not such a difficult task. However, the previous article only scraped the tip of the iceberg, covering topics such as installing and bootstrapping a project in Symfony and Django and some basics of route generation, handling and templating. But it’s not a web application yet. For a proper web application, we need to delve into the data models and how they are used to manipulate the underlying databases. So let’s see how that goes. Connecting to a database Modern web frameworks use a single connection to the database, which is injected wherever the developer needs to use it. The configuration is usually straightforward, by defining the database driver, name, password (and sometimes host and port, in case you’re not using the defaults). As covered before, web frameworks usually have one or more settings files where you can specify these details. In our example, we will hook both the Symfony app and the Django app to mysql databases running with default parameters (host 127.0.0.1, port 3306, user “root” without a password). In Symfony, we edit the parameters file in app/config/parameters.yml and add the database parameters as following: By running the comand php app/console doctrine:database:create , you can directly create an empty database called jobeet from your Symfony project. In Django, use the settings file corresponding to your app and add the connection details there. In our case, the app name is jobeet_py , so the settings file is in jobeet_py/settings.py. We can define more database connections, but for our app a single one will do. We call it the default connection. Besides the Python syntax as opposed to Symfony’s YAML, there is also a semantic difference, with “drivers” being called “engines”. Another difference between the two is that, in Django, the database can not be created through a Django command, but only from MySQL separately. Run the following: and exit the MySQL console using Ctrl+D. Now our empty database schemas were created and our frameworks are properly connected to an underlying database. But surely we will not run raw SQL queries on these databases, since we can use the powerful modelling tools that web frameworks provide, which hydrate our database rows into objects. This is called an Object Relational Mapper (ORM). ORMs are found in almost every Object-oriented programming language and web framework, which means if you have any experience with Hibernate (Java), ActiveRecord (Ruby) or any other ORM, you will find it easy to understand data modelling in any framework. For the following part of this article I will assume you are familiar with Object Modelling and all its intricacies such as types, abstraction, encapsulation, database relationship, keys etc.. Models Now that we have a running connection to a database, we can start designing our models. The correspondence between objects in our application and rows in our database tables will be based on specific rules. Each class corresponds to a table in our database, while fields usually correspond to the properties (even if the string representing the column name and the one representing the property do sometimes suffer some transformations to make them compliant to coding standards in each programming language). Models in Symfony In the Symfony web framework, we can define our models in various ways, the most common being: - annotated PHP models - separate PHP and YAML models While previous versions of Symfony favoured the former, the recommended approach is now to have a YAML file to define the model, which will later be transferred to the database. In addition to that, a PHP model represents the object itself. You might think that there is a lot of mindless code to be written, with getters, setters and property names, but don’t worry. You will see that there are automatic generators for parts of this code. So, let’s start by creating our YML models. In /src/Ens/JobeetBundle/Resources/config/doctrine/ , define yml files related to each of your models. For example, in Category.orm.yml we will write: The first line represents the namespace of the PHP model we will eventually generate. The table key defines the name of the database table where the objects will be stored. We create an auto-generating id, specific fields related to our object and details about relationships with other objects. In Symfony, we need to define both ends of a relationship between two models, so that means in Job.orm.yml we will have a portion related to categories: For the sake of space in this article, I will not include all the models defined, but you can find them on my github page and the Jobeet Day 3 tutorial. After adding all your YML entities, the next command will create the PHP models automatically, but you can add custom handling which will not be overwritten next time the command runs. To reflect the changes in your app in the database as well, run: Models in Django It is somewhat simpler to create the models in Django, since Python’s lack of getters and setters makes for more concise models, which can be located in a single file. In models.py, add the model classes separated by two blank lines. For example, the Category class will look like this: In Django, we need not define the relationships at both ends. This means that the one-to-many relationship between Categories and Jobs can be defined simply by adding a ForeignKey in the Job class: You can find the complete models.py file for this article on my github page. To reflect the changes in your database, you need to run an initial migration, by running the following commands: If these commands get you into any trouble, it might be that you don’t have the mysql driver installed, so run pip install pymysql if you get any errors. Remember models are highly customisable and you can enhance them by adding signal-based logging, finite state machines for complex transitions and many more useful functionalities to achieve your project’s goal. Now, you might ask why Symfony updates the models directly (by considering the differences between the existing database and the current models) and Django forces you to use migrations (by checking in the database which migration was run last and running only the subsequent migrations). The answer is complicated. In fact, you should never use the schema update functionality form Symfony in production, but it’s more suitable for a tutorial. In production, Symfony also has migration utilities which can be integrated as libraries. Django forces you to take the moral high ground from the start, even if it is more difficult for beginners to understand migrations. We have previously covered all about using migrations, so make sure you read our article in case your feelings are “clouded” in this debate. Fixtures Fixtures are ways to add test data to development environments. We will add some data to our database using Symfony’s Doctrine Fixtures package and Django’s inbuilt fixtures loading functionality. In Symfony, we first install the package that handles fixtures: And we create the fixtures as explained in the same Jobeet Day 3 tutorial. You can also find the complete php fixtures in this folder on my github. I will not go into more depth about fixtures since the Jobeet tutorial does a great job at explaining them and the concept is straightforward over other frameworks, such as Django, as well. In Django we will create a JSON or YML file with the fixtures and load them using a special command. In jobeet_py/jobeet/fixtures/load_data.json : (full fixtures file on my github page). After adding and editing your fixtures file containing all the objects and their properties, run:python manage.py loaddata load_data.json Congratulations! You now have some test data in your database. Keep in mind never to use these commands in a production environment, since it may delete all the “real” data in the system. CRUD Now, for the last part of this article, let’s do something interesting with our models. Up until now, we just defined some models, echoed our progress to a database, and added some test data, but now let’s try to add, edit and remove some objects from the apps themselves. Symfony is a godsent this time. It features a command for generating the entire CRUD process for an entity. Run the command: This will automatically generate a JobController which handles CRUD, routes in the src/Ens/JobeetBundle/Recources/config/routing/job.yml file, and the corresponding templates. This means that just by importing the newly-created routes to our main routing file, clearing the cache, and running the application, we will be able to see a full CRUD at . For filling the Category drop-downs, we also need to define the __toString() method on the Category object to return the name property. So, in src/Ens/JobeetBundle/Resources/config/routing.yml: Then run: And in your browser you can admire your work. Try adding, deleting and editing jobs, and all your changes will hold. In Django, there is no CRUD autogeneration command, but it does provide some cool generic functions which help a lot. As you have sen, url generation is custom in the PHP version as well, so we will write in our jobeet_py/jobeet_urls.py file: Next we make use of generic views which handle CRUD without any extra logic. In views.py : By inheriting the List, Create, Update and DeleteView, each of our CRUD classes only needs to define the model on which the operation is performed (and sometimes, the route name for the success of the operation), and the classes themselves will know what to do internally. Next we create the similar HTML files for each of these actions. You may think that, compared to the Symfony method, this is where you spend most time coding. However, it is an unfair comparison at this point, since usually such views are customised anyway, so they will neeed more work in both frameworks. For now, you can copy-paste them from my github page. Run the Django server ( python manage.py runserver ) and admire your work on . It looks just like the Symfony experiment we did before. Remember that the examples presented illustrate local environments. For details on deployment and server choices, you can check out our other articles: “How powerful are AWS t2 instances?”, “How to keep PHP Background Jobs alive” and Oana’s article on why Django migrations are a must on production servers and in the codebase. Conclusion While these two frameworks may see more different from the data point of view, the basic concepts remain the same. To recap, we first defined our database connection parameters (in a fairly similar manner), we modelled our objects using modelling best practices and we imprinted the design on the database itself. Here, we investigated two methods for updating the database structure: with direct differential update and migrations. We then added some test data to our databases using fixtures, and generated the CRUD routes, actions and views for the Job objects. What do you think? Do you have any experience with database and/or object modeling? Which database structure update method do you prefer in your development environments? If you have any improvements, suggestions or ideas for follow-up articles, let us know in the comments section.
https://www.algotech.solutions/blog/php/working-with-data-in-web-frameworks/
CC-MAIN-2020-34
refinedweb
1,933
60.65
From: Gennadiy Rozental (gennadiy.rozental_at_[hidden]) Date: 2007-06-09 23:45:05 "Rene Rivera" <grafikrobot_at_[hidden]> wrote in message news:466B3A16.9050308_at_gmail.com... > Gennadiy Rozental wrote: >> It there any problems >> with unnamed namespace usage within Boost.Test? > > I have no clue. The point is to try and push /you/ into looking at them > ;-) The only thing this "point" achieves is me now ignoring any reports. You can't report "N problems detected" if you just suspect something might be misused. Gennadiy Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2007/06/123152.php
CC-MAIN-2020-29
refinedweb
104
71.21
User talk:Famine From Uncyclopedia, the content-free encyclopedia To Do I plan on fixing up the articles here. Don't delete them, if you can help it. Sir Famine, Gun ♣ Petition » 01:54, 19 June 2006 (UTC) Gratuitous bans Have you read this about 4chan: "For 1 April 2005 (a Friday), the moderators created a fake furry board as an April Fools joke and left it up until April 3rd. Every person who posted to the joke board was then banned from 4chan for an extended period of time. This incident is referred to as April Furs Day." A ban trap, has this been tried here? I thought you were the man to ask. Worth a thought - if you could think of a decent honey-pot, you could ban all the users who... well, ban all the users. FreeMorpheme 20:26, 11 August 2006 (UTC) - Indeed, there are numerous honeypots. One of my favorites is Uncyclopedia:Complaints Department. There are others, but I'd have to infini-ban you if I told you where they were. Sir Famine, Gun ♣ Petition » 00:26, 14 August 2006 (UTC) Random Instanity/name It was me who put it on QVFD, sorry. I thought it was just a one liner, until I went to look back at QVFD and saw that you would ban the person who listed it. I'm not a noob anymore, but I still have tendencies I guess. I looked at the page again and it kept changing, hence the Randomness. I'll be more careful. -- Sir Severian (Sprich mit mir!) 16:26, 14 August 2006 (UTC) - More importantly, check "what links here" - that might make its importance a little more clear. Sir Famine, Gun ♣ Petition » 17:24, 14 August 2006 (UTC) - Will:38, 14 August 2006 (UTC) Forgive me father, for I have sinned Just letting you know that Rc unblocked me. I apologize for not tagging that article, I slipped, as last night I put maintainence tags on an unprecedented amount of crap. Actually, Hinoa promoted me to 2nd LT last night for my work in getting rid of crap last night (See my talk page). Anyway, just letting you know that I'm sorry, and I'll do my best to make sure I don't do that again. Also, good luck on winning UOTM. --:17, 15 August 2006 (UTC) - Wow, that was nice of RC. Now let me dig up the weird ass double-redirect pile of crap you made the other day and determine a ban length for that. It's not that you're on my bad list - I just have lots of ban reasons listed under your name already. Don't worry - you're just replacing some other users who either shaped up or left. Someone's got to fill in the "OMG Famine hates me" hole. Sir Famine, Gun ♣ Petition » 14:28, 16 August 2006 (UTC) - Aaaah...I see...it was you redirecting both How to be a russian and How to be a Russian to HowTo: Be a Russian. Not a double redirect - just somewhat unecessary ones. But since I deleted all the articles in question, I guess I'll have to scratch that off my "Ban Sir Cornbread For..." list. Don't worry - I'll find some other reasons. Sir Famine, Gun ♣ Petition » 14:35, 16 August 2006 (UTC) - Yeah, actually when I moved that page, I listed those redirects on QVFD to no avail...I tried though. OMG YOU HATE:31, 16 August 2006 (UTC) Oh noes plz Sorry about not tagging that VFD article...and my first offence was due to technical difficulties (AKA Wireless Internet), just letting you know Codeine unblocked me. -- 13:49, 16 August 2006 (UTC) - Thanks for being so viligant on VFD - but be warned about it. We've already had one admin burnout due to it, and it tends to make whoever is babysitting it pretty grouchy. The smoother it runs, the happier I am. You've made up a pretty good excuse for ban #1, so I'll let that one go....now tag articles so people watching them know to come vote. Nothing as sad as to lose an article and find out three months later that it got deleted and you never knew to vote on it. Sir Famine, Gun ♣ Petition » 14:28, 16 August 2006 (UTC) - Alright I will, *goes to put that article back up the right 18:33, 16 August 2006 (UTC) Poor Edits I notice that you RV'd the crappy edits on Jebus. I'd messaged Algorithm and Chronarion earlier, but perhaps you could take a look at a similar set of, in my opinion, poor edits on Original Jesus that were/are by the same user. Algorithm thinks that the edits clean up a messed up page and Chronarion hasn't replied yet. Al thinks I shouldn't let it bother me (tho' it does) so, as of yet, I've left the edits alone (as it would result just in a revert war, which wouldn't help the page). I find the "His Birth" section is particularly vile, as jokes about pendulous tits and abortion are difficult to make funny (as the text proves). I would like your opinion, as so far it would appear that I'm alone in despising the recent butchery (although it decrufts somewhat, it recrufts in other area, incidently defunnying wherever it crufts), and as such would simply result in a revert war which, again, doesn't help the page. Obviously I prefered the page before, although it was unfocused and varied wildly in quality. The unfocussedness meant it had something for everyone (like that Jesus guy) and the wildly varying quality encouraged edits. Eerily, it appears that the editor just pushed the good stuff out of Original Jesus and pushed the "old" text from Jebus to Original Jesus. Cheesy. Anyway, I'm on a self-imposed Uncyc-limiting vacation at the moment, so take your time to consider whatever it was that I said. I, meanwhile, will grab a beer and return to the book that's waiting for me on my patio. Also, as you may have surmised, I like commas. I believe that they, the humble bastion of the pause, are by far the most underrated pi character. Of course I'm biased somewhat, as I type as I speak. If I don't pause every five words or so, my cursed childhood stutter makes an unwelcome appearance. But enough about me... Carry on.--Sir Modusoperandi Boinc! 06:24, 22 August 2006 (UTC) - I'm trying to care, but the more I try, the more I'm tempted to just delete both articles. In fact, There's a good chance of that happening to at least one of them the longer I look at the two. Because they both suck ass now, and have sucked for quite some time. Sir Famine, Gun ♣ Petition » 13:07, 22 August 2006 (UTC) - Ah. Just make sure to commit the "Bend it Like Bethlehem" part to memory first. I'm sure that'll make it into that book that the guys on Sunday morning TV someday. --Sir Modusoperandi Boinc! 14:46, 22 August 2006 (UTC) OK, I'm officially not caring any more. Because I may end up deleting half this website. And banning a metric fuckload of users. Including Sir Cornbread, for no apparent reason. Well, after reverting a bunch of pages, protecting them, and then deleting them. Just to be sure that the funny versions were the ones which got deleted. But really - these two pages, I feel, are beyond saving. So here's my offer: - If someone rewrites them, and they are actually of modest length, funny, illustrated, and not filled with stupid shit, I'll protect them with my life. If not, I'll not care. Case closed. Sir Famine, Gun ♣ Petition » 19:44, 22 August 2006 (UTC) - I think the reason it got my bp up was the editor was a jerk about it (I've put messages on his userpage requesting sanity. He hasn't replied, instead leaving snarky comments in the comments section of OJ, ala "It looks like all of the blood is going to Dick's head). - As for your comments; Deal. If suitably inspired (doubtful) I'll try my hand at a rewrite. If not, then it'll be time to move on. Aw, fuck it...there are way better articles to write; it's off my watchlist.--Sir Modusoperandi Boinc! 20:09, 22 August 2006 (UTC) - Excuse me? Butcher?! I'll have you know I rather object to that label! Personally, I didn't really like the "Bend It Like Bethlehem" section, just like I didn't like the "Chibi Jesus" section; they seemed rather childish. I'm actually a little pissed that you reverted the "His Birth" section, and completely removed "His Name". The only reason I started hacking away at those parts was because there was an obscure "L.A. Law" reference that irked me. As for the bit about Mary, I slipped that in basically to see if anyone would catch and get it. I mean, really, we're all adults here, aren't we? I had typed the Mary bit out, then actually went to the "Mary" article to see if it compared. Not even close. Hence, the Virgin Rewrite. I'm surprised you found it vile, as this isn't a religious site. It's for jokes, jokes of all kinds; no seriousness allowed. I'm glad you kept "Jebus" the way it was, as I enjoyed working on it. I spent many a night attempting to fix pictures, sentence tense, and finally adding the "Death" section. But, really, Famine, could you put the bit about the shepherds, angels, and "MacArthur Park" back in the "Original Jesus" article? I don't think I can replicate it as well as could be. With all due respect... - Master Betty applauds you! Nyah! Master Pain (also known as Betty) 01:46, 24 August 2006 (UTC) Re: Vacation? I just went a while without being on here. I don't know if I'll be very active, though. --Evice 20:12, 22 August 2006 (UTC) The Anti-Vandal Tool Greetings, I see you added the anti-vandal tool to your namespace. Judging by your comment when you added it, I surmise you are not fully aware of what it is. If you do know, ignore my rambling and eat some fudge. :P When you add the anti-vandal code, new options will appear in your left menu. These are live monitoring tools that monitor the recent changes. "Filter Recent Changes" filters the recent changes for a list of 'bad words' that's listed on a separate page. You can do an easy rollback using this tool. All Recent Changes is self explanatory, along with Recent IP edits. It's not that hard to use and it's extremely popular on Wikipedia (*insert scream*). Hope this helps! - Enzo Aquarius 22:12, 23 August 2006 (UTC) - You forgot to mention the "makes firefox use 98% of my CPU requiring me to kill -9 it" feature. I've been debating trying to code up something like this for awhile, but I am far too lazy to do it. I will steal from other people without hesistation tho. Thanks for the info. -- Sir Famine, Gun ♣ Petition » 22:25, 23 August 2006 (UTC) Heh, seems to work fine for me, but to each his own. It's not the most perfect code :-P - Enzo Aquarius 22:33, 23 August 2006 (UTC) You sure went about adding this thing in quite a complicated way. You do only need those 3 lines at the top to make it work... And maybe the link box function for adding your own to the side too. That'd give several hundred less lines of code in the script. • Spang • ☃ • talk • 19:04, 27 August 2006 (UTC) I have some grand plan to mess around with it, customize it, and make it do amazing things...if I was a betting man, I'd put 20 pounds on me not ever getting around to it. The problem is that I'm moving back into my "busy as hell" time of year, and will probably never get around to actually hacking on this. Perhaps next june or something...but for now, I might as well trim it down, as I have an easy-to-find copy in the history. The question is, do functions in my uncyclopedia.js override the ones in recent2.js? If so, then I can just keep the toolbox function. I'd wait for an answer, but I can try it myself in less time, I'd guess. -- Sir Famine, Gun ♣ Petition » 19:18, 27 August 2006 (UTC) - If they do it'd either give an error or just use the first one. Or both. I'm not sure. Why not just use a different toolbox function? • Spang • ☃ • talk • 19:27, 27 August 2006 (UTC) - Or rename your function and have it run after the original. Both do exactly the same thing, and no conflicts. Easy peasy. • Spang • ☃ • talk • 19:38, 27 August 2006 (UTC) Thanks for the mammaries, erm, vote Thanks for supporting me as a potential admin. Coming from you, it's a real:13, 27 August 2006 (UTC) - I think I'm sick. I don't remember the last time I supported someone for admin. But I do retain the right to revoke my support if you suddenly decide that admin tasks are more important than what you've been doing around here. - Let me be very clear: For the longest time, I never bothered with UnNews, because it sucked. Now I check in often, because it's been funner than hell for the last few months. I'd hate to see that go downhill, as I feel that it is at least, if not more important than getting general admin duties done a little more quickly. We can always hire another admin to delete expired NRV articles - we can't necessarily get someone to jump into UnNews and do the sort of job you've been doing. That sort of shit takes talent. Sir Famine, Gun ♣ Petition » 20:27, 27 August 2006 (UTC) Ligature articles Why did you delete all of the ligature articles?!? Was there some sort of vote on it I didn't notice? I thought they were one of the most hilarious series of articles on Uncyclopedia. --LOLMOLE 22:41, 27 August 2006 (UTC) - As I noted at Talk:Ligature, many of the articles barely counted as stubs. A good number were 1-3 lines and a template. They were in no way strong enough to stand on their own. Feel free to recreate them in paragraphs in the main Ligature article, as it's a far more fitting place than on individual pages. There was a vote on VFD regarding the fate of the Ligature series around the end of June. In the two months they have been gone, you are the second person to notice and complain. That will give you an idea of the appeal of the series. Sir Famine, Gun ♣ Petition » 22:59, 27 August 2006 (UTC) Ninjastar? I... don't get it • Spang • ☃ • talk • 06:02, 28 August 2006 (UTC) - Try here: User_talk:Rcmurphy#Can..... Sir Famine, Gun ♣ Petition » 21:57, 28 August 2006 (UTC) Block query Hello, I am speaking on kenmcfa's behalf on this ban: Your user name or IP address has been blocked by The Right Honourable Famine. You were banninated due to the following reasons: If you are offended, self-blinding is the answer, not blanking. You may contact Famine or one (1) of the other administrators to discuss the block. It will give them a good laugh.* Your IP address is 62.252.128.25. If all you have is an IP address, you're not coming back any time This user has taken no part in blanking and/or vandalism, as somehow someone managed to attain the same IP and use it as a vector for damage. The user asks if you may reinstate his IP as he means no harm, he will take better care of his network options. Yours Neo Zidane)}" > Headbutt extraordinaire 20:32, 28 August 2006 (UTC) - User:kenmcfa must share an IP with the anon individual who blanked a couple of pages. Let me see....how long was that ban? Two weeks? Well, Template:SPL-08 is definitely worth a short ban....say three days? We really try to discourage excessive use of templates, and templates filled with red links are the worst. - So the sentence has been reduced, not commuted. When kenmcfa shows back up, he can remove his crappy template from the articles it's in and submit it to QVFD. By all means, keep a link to the main SPL article in the related articles - it just doesn't need a massive template filled with red links to get people there. A short, useful paragraph is much prefered. - And User:ununbilium - for gods sake fix your sig. It looks like you have the stuff you need at User:ununbilium/sig, but you need to place {{SUBST:nosubst|User:ununbilium/sig}} in your preferences under "nickname" and check the "Raw Signatures" box. TY. Sir Famine, Gun ♣ Petition » 22:15, 28 August 2006 (UTC) Regarding SU182 At this point, it's at your discretion. Special rules say what they will, but there's no mention of "whining." However, th fact that he's been so disruptive despite his short leash leaves me little recourse but to agree. If you feel the need to be merciful, so be it, but I heretofore wash my hands of this matter.-- 01:26, 29 August 2006 (UTC) - Damn, I guess he's fucked if we both wash our hands of the matter. Oh, well. I'm sure we won't want for disruptive users if he stays banned forever. Sir Famine, Gun ♣ Petition » 01:41, 29 August 2006 (UTC) - I guess that's what happens when you run afoul of the 02:29, 3 September 2006 (UTC) Thanks for your vote - Eh, I was just hoping you could keep UnNews spiffy, and stop bothering us about those who mess with it. I'm pretty damn lazy. That or it was an accident. Not sure which. Sir Famine, Gun ♣ Petition » 19:17, 4 September 2006 (UTC) Darth Destruction The vote for both of these was rewrite. I rewrote them both, I posted in AFD that they had been rewritten and you still huffed them? I would have deleted the votes up to that point, but I did not know it was allowed, perhaps you should have as an administrator? Did you even read the AFDs before you huffed? --Darkfred 04:01, 9 September 2006 (UTC) Err...Famine, you huffed D'arthangnan without any VFD or notice...Now I know it wasn't great but still, I think it deserved to live. So why the huffing without the VFD? --:16, 9 September 2006 (UTC) - I went Darth hunting. We had lots and lots of them. In fact, we even had Template:Darth, which listed more than a dozen Darths. About half the articles in that template got dropped into VFD. Most of them got solid delete votes. Riding the wave of public distaste, I went through and deleted anything Darth related that wasn't haliarous and long. - D'arthangnan got my axe due to a 2 to 1 ratio of red links to blue links, along with references to Chuck Norris, a left testicle, Death By Stinky Cheese, and a tatoo of "I love Darth" on some buttocks. Overall, juvinile randomness at it's worst. Although an article could be written around the phrase "Battle of the bulge (also known as: "the battle of are you happy to see me or is there an armored division in your pocket?")". That was good. Very good. - Darth Cow and Darth Hitler were both rewritten. And they probably should have had their votes blanked and been revoted on. I thought that they hadn't substantially improved. Spang agreed on one of the two. So that's like +10 votes for deletion. If you really want, I'll revive them and repost them on VFD. My bet is that the Darth purge will continue, but if you can stand watching some potentially soul-crushing votes, I'll let them happen. Sir Famine, Gun ♣ Petition » 15:08, 10 September 2006 (UTC) - You are lying. Neither Darth Cow or Darth Hitler even met the 6 vote limit for deletion. Darth Cow was a tie and Darth Hitler had an overwhelming keep vote 6:3 with only 1 rewrite vote. +10 delete votes? What are you smoking? Since you admit to being on a Darth Hunt" my guess is that you simply didn't care what the VFD was, you didn't bother to count. The stupidest thing is that your hunt only affected mediocre articles. The vast majority of the site remains simply bad and unfunny. On the whole your hunt netted a step backwards in funniness. Yay progress. - What? Huh? Darth Hitler had 3 Keep votes, 4 delete votes (including mine via deltion) and 3 "Rewrite" votes. I disregard all "Rewrite" votes everywhere, everytime because nobody (except you) ever rewrites anything. So using my modified counting strategy I count 4 to 3 delete. - And there is a 6 vote limit for deletion? When did the king of VFD make that ruling? I must have missed it. Sorry. - Do me a favour and revive D'arthangnan into my namespace so I can rewrite. And I'd say that next time you might consider putting a rewrite tag or some kind of notice before killing...Thanks. --:41, 10 September 2006 (UTC) - I don't think I've ever put a rewrite tag on any article. Either it's good enough to stay, or I kill it. Or I kill it, lots of people bitch about me killing it, and then I restore it to userspace and it sits there and rots till the end of time. Aaah, such is the life of a cold-hearted admin. Sir Famine, Gun ♣ Petition » 19:45, 10 September 2006 (UTC) Let's agree that if I ever recreate it as a reasonable article you buy me a beer. If not, I get lashing in the town's:51, 10 September 2006 (UTC) - That's a deal. Makes you like the sixth person here I either owe or potentially owe drinks to. Good thing I live in a cave. If I ever come out, it's going to be really expensive. Sir Famine, Gun ♣ Petition » 20:04, 10 September 2006 (UTC) I am Starting again down here since the ident is crazy. As for the 6 vote minimum please see this note by Hinoa [1]. Hinoa also reads the comments in VFD boxes and writes the resolution when he has made a decision. Perhaps you should ask for admin mentoring. The problem now with Darth Hitler is this. The article is fine, with a few edits it could be perfect and even as it is it won the vote. If I put it back up later, some asshole, meaning you, is gonna delete it without any thought. I doubt you even read the article, so what good would doing anything now do? The only way this article or any similar to it will ever live again is if you admit you were wrong. --Darkfred 22:25, 10 September 2006 (UTC) - I'm terribly sorry...I did not know that you had been here for TWO WEEKS and therefore know everything. I will strive to do everything the way you tell me to, oh great one. Sir Famine, Gun ♣ Petition » 23:04, 10 September 2006 (UTC) On a (slightly) different subject, both Darth Hogan and Darth Dickens didn't even receive six votes in total, much less six votes to delete, and yet you deleted them in your darth holocaust! What are you, some kind of horseman of the apocalypse? Darth H8er. Mr. Briggs Inc. 22:37, 10 September 2006 (UTC) Eh?
http://uncyclopedia.wikia.com/index.php?title=User_talk:Famine&oldid=1063945
CC-MAIN-2014-41
refinedweb
4,053
81.63
The Wipe-Eyeglasses is a new invention that wipes automatically your eyeglasses or your sunglasses. Today, eyeglasses and sunglasses are very important in our life. Seeing well is a priority, so having clean eyeglasses is necessary. Then, when we wipe our glasses, we mechanically use what we have at hand : a piece of t-shirt or a tissue and that's not really convenient. When your home, there is the solution to not forget and to keep your eyeglasses cleaned: the Wipe-Eyeglasses : a machine that can wipe your eyeglasses or sunglasses automatically. Moreover, it can be a new place for your eyeglasses so they will always be immaculate. The Wipe-Eyeglasses wipes both sides of your eyeglasses or sunglasses with a smart system composed by an Arduino Uno, servo motors and others electrical components, and optical tissue. Here is the steps to realise it with wood boards, 3D printing or cardboard. Step 1: What You Need to Build It Components: • Wood boards, cardboard or filaments to 3D print the structure • Cotton • Optical tissue • An Arduino Uno • A breadboard • 2 servo motors • Wires of different sizes • 2 capacitors of 100 microF • A switch • A USB-Arduino wire to power the circuit Tools: • A cutter, a wood saw, or a 3D printing machine • Eventually a welder Note: Prepare all your components and tools before starting to build an object, it will help you to stay organised and efficient! Step 2: How to Build the Structure First, you need to create the structure of the Wipe-Eyeglasses : you can either do it by cutting and gluing wood or cardboard, or by 3D print the parts that compose it. Here are the 3D designs of the structure which you need to recreate with the material of your choice and the picture of the result with wood boards. However, these 3D models are designed for 3D printing so If you choose to build the Wipe-Eyeglasses with wood boards or cardboard (as I did it), you will need to adapt them. Click on "Edit 3D" and use the ruler to have the mesurements (The structure is designet at real scale) : it will help you to cut your wood boards or cardboard in order to create the different parts that you will assemble after with glue. (For exemple, the "support_servo" piece can be a simple 16x2 cm sized wood board). Note: To realize my invention, I have built the box with wood and I have only 3D printed the "rotor" design. It is easier and cheaper. You can also find here the .stl files of this designs if you want to 3D print it: Step 3: How to Build the Circuit The electronic circuit of the Wipe-Eyeglasses is based on the Arduino. Here is what you need to reproduce it with the components. Note : You can obviously adapt it depending how you realized the structure! Step 4: How to Program It I used the Arduino Software to program the Wipe-Eyeglasses. Here is the code: #include <Servo.h> Servo Droite; Servo Gauche; const int switchPin = 2; const int greenLed = 3; const int redLed = 4; int switchVal; void setup() { Droite.attach(9); Gauche.attach(10); pinMode(greenLed, OUTPUT); pinMode(redLed, OUTPUT); pinMode(switchPin, INPUT); } void loop() { switchVal = digitalRead(switchPin);(90); Gauche.write(90); delay(100000000); } Note: I used servo motor with continue rotation and I needed to set the speed and the time, so you will need if you use basic servos to make some changes to the program. Step 5: How to Assemble Everything 1. Put the Arduino and the circuit inside the structure (with all its parts) 2. Glue or fix with another way the "rotor" pieces (printed in 3D or designed by yourself) (2) to the servos 3. Put a cotton around the ball of the "rotor" piece 4. Put an optical tissue around the cotton 5. Glue the servos to the structure and verify everything is solid and functionnal, then plug the cable to the Arduino and to a computer or a USB charger 6. Here it is! You have built the Wipe-Eyeglasses! Step 6: How It Works 1. (Wet your eyeglasses or sunglasses with water or with a special optical spray) 2. Put your eyeglasses on the Wipe-Eyeglasses. 3. Turn on the switch, it starts to work : cotton balls covered with optical tissue go 30° to the left and then 30° to the right. 4. The Wipe-Eyeglasses stops to run; it is finished. 5. You eyeglasses are clean :) Step 7: Watch It Working in a Video! Recently, I was on a french TV channel, M6 for this invention because it won the "Innovez" contest! In the video, I explain how it works! (In french but I'm sure you can undersatand easily the concept) Note: "Innovez" is an invention contest opened to young people and organizes by a scientific periodic made for teens : Sciences et Vie Junior! Here you can find the Video! Thank you for reading! Check out my new Instructables: "The Wipe-Eyeglasses"!— Victor Badoual (@VictorBadoual) March 4, 2016
http://www.instructables.com/id/The-Wipe-Eyeglasses-1/
CC-MAIN-2017-22
refinedweb
844
70.43
Question: What is a platform? Answer: A platform is the hardware or software environment in which a program runs. Most platforms can be described as a combination of the operating system and hardware, like Windows 2000/XP, Linux, Solaris, and MacOS. Question: What is the main difference between Java platform and other platforms? Answer: The Java platform differs from most other platforms in that it's a software-only platform that runs on top of other hardware-based platforms. The Java platform has two components: 1. The Java Virtual Machine (Java VM) 2. The Java Application Programming Interface (Java API) Question: What is the Java Virtual Machine? Answer: The Java Virtual Machine is a software that can be ported onto various hardware-based platforms. Question: What is the Java API? Answer: The Java API is a large collection of ready-made software components that provide many useful capabilities, such as graphical user interface (GUI) widgets. Question: What is the package? Answer: The package is a Java namespace or part of Java libraries. The Java API is grouped into libraries of related classes and interfaces; these libraries are known as packages. Question: What is native code? Answer: The native code is code that after you compile it, the compiled code runs on a specific hardware platform. Question: Is Java code slower than native code? Answer: Not really. As a platform-independent environment, the Java platform can be a bit slower than native code. However, smart compilers, well-tuned interpreters, and just-in-time bytecode compilers can bring performance close to that of native code without threatening portability. Question: What is the serialization? Answer: The serialization is a kind of mechanism that makes a class or a bean persistence by having its properties or fields and state information saved and restored to and from storage. Question: How to make a class or a bean serializable? Answer: By implementing either the java.io.Serializable interface, or the java.io.Externalizable interface. As long as one class in a class's inheritance hierarchy implements Serializable or Externalizable, that class is serializable. Advertisements Posted on: January Interview Questions - Page 2 View All Comments Post your Comment
http://www.roseindia.net/interviewquestions/java-interview-questions-2.shtml
CC-MAIN-2016-30
refinedweb
361
50.02
This tutorial shows one how to use the CORBA Application Wizard by creating a simple application that will simulate a Stock Quoter. I will assume that you know how to create a new project using Application Wizards and have already provided an appropriate name for your project. I will also assume that you have some knowledge about CORBA. This tutorial is based on the online tutorial that is included with the TAO source code. A big THANK YOU has to go to Doug Schmidt for giving me permission to use the code found in this tutorial. Initially this tutorial was to consist of just one part but with the increasing length of the tutorial it was decided to divide the tutorial into two parts. This is part one and takes one through all the steps of the application wizard. Part two is found here and deals with completing the Stock Quoter application. For this tutorial we will create a simple console application. So we will select the Console Application option and then press the Next button to move onto Step 2. The other two options MFC Application and Win32 Application aren't yet fully functional. Console Application MFC Application Win32 Application Actually we need to create two applications. One application will act as the server while the other application will act as the client. We will first create the server by selecting the Server option. Selecting the Client option will create a client application, while selecting the Server and Client will result in your application acting as both a server and a client. After we have created the server application we will then need to run the CORBA Application Wizard again and create a client application. Server Client Server and Client Selecting the debug version will cause the application to be linked with the debug versions of the TAO-associated libraries. (Coming to think of it, I am wondering if I should give the user the option of having the individual server and client applications being created at the same time to save the user of having to run the AppWizard again.) I have entered a module name Quoter that will result in all the interfaces being included in a single idl file, namely Quoter.idl. Leaving out a module name will result in each interface being placed in its own .idl file. I have also named the interfaces to be created: Stock and StockFactory. The Quoter.idl file will contain the following code immediately after creation: Quoter Stock StockFactory #ifndef Quoter_module #define Quoter_module module Quoter { interface Stock { }; interface StockFactory { }; }; #endif //Quoter_module The Stock interface will be used to query the prices of stock whereas the StockFactory interface will be used to "gain access to the Stock object references from their symbols". For a more detailed description about the Quoter Application see the "Building a Stock Quoter with TAO" tutorial by Carlos O'Ryan. We don't have to worry about anything here, so just click on the Next button. To keep this tutorial simple, use will be made of an Interoperable Object Reference (IOR) string. The transfer of the IOR string between the server and the client will be done using a file. I'll show you how this can be done at the end of the second part of this tutorial. Here we will make use of the Static Invocation Interface. For this tutorial, this is the easiest. Now we have reached the end of the steps and are ready to create the application. So click on the Finish button. The application framework will now be generated and displayed on the Visual Studio IDE. Now repeat the above steps but in step 2 select the client option instead of the server option. To complete the application go to the second part.
http://www.codeproject.com/Articles/5373/CORBA-Application-Wizard-A-Tutorial-on-usage-Part?msg=862658
CC-MAIN-2014-52
refinedweb
633
52.8
Hi Nathan, Thanks for sending this along. I think that 2019 is pretty reasonable. The only hard holdup for me on Py3 is hg, and it's looking more and more likely it'll be Py3 ready sooner than 2019. We should be thinking about signing such a statement, and personally 2019 would be fine with me. Python3 started out as a net loss in a lot of ways; speed, for instance, and the annoyances of bytes/strings. This has gotten better, to the point that it's become an attractive target. Async functiosn are neat, but not something we can probably use very much. But, type annotations are pretty awesome, and the improvement to the stdlib is also getting compelling. What does 2019 sound like to everyone else? I also don't think we should be maintaining a backport branch, since we have reasonably limited resources. -Matt On Thu, May 19, 2016 at 11:10 AM, Nathan Goldbaum [email protected] wrote: Since we directly depend on sympy at a low level, this is something we need to think about dealing with. ---------- Forwarded message ---------- From: Aaron Meurer [email protected] Date: Thu, May 19, 2016 at 10:51 AM Subject: [sympy] Feedback requested: The future of Python 2.7 support in SymPy To: "[email protected]" [email protected] Hi all. Some of us in the broader scientific Python community have been discussing the future of Python 2 support for various libraries. As you may know, Python 2.7 will cease to be supported by the core Python development team in 2020, meaning all updates to it will cease, including security updates. However, even though we are six major versions into Python 3, the larger community as a whole is still slow on uptake for supporting it. The proposal is for libraries to let the community know now when they plan to drop Python 2.7 support, so that they will better prepared for it, and hopefully so as an encouragement to start transitioning now, if they haven't already. I propose that we put it on our roadmap to drop Python 2.7 support in 2019. That is, the first release we do in 2019 will be the last to support Python 2.7. This is consistent with what we've done so far, which is to drop support for Python versions once they cease being supported by core Python. Other libraries, such as IPython and likely matplotlib, are also joining together to sign a formal statement about this, which is drafted at Some libraries, such as IPython and matplotlib, are proposing to support a patchfix branch for an older version that supports Python 2.7, but I am opposed to any plan for SymPy that means supporting more than one version at a time, as I don't think we have the development effort for it. I would like to hear feedback on this, both positive and negative. It isn't an official decision yet, until the community agrees on it. Here is my rationale for doing this. I also plan to publish a blog post about this soon, which goes into more detail: As you also probably know, SymPy, like other Python libraries, has done extra work to support Python 2 and 3 in the same codebase. While this work is easier than it used to be, it does put a maintainence burden on SymPy, and it prevents us from using language features which are Python 3-only. One language feature in particular that I would love to be able to use in SymPy is keyword-only arguments. This lets you write, for instance def function(x, y, *, flag=False): ... and then function(x, y, True) is a TypeError. Only function(x, y, flag=True) will work. This future-proofs the API, e.g., you can easily change it to function(x, y, z, *, flag=False) without any API breaks, and it forces explicitness in keyword arguments. That's one example. There are other Python 3-only features that we may or may not be able to make use of as well (like function annotations). And even without that, the maintenance burden of supporting both versions is nontrivial. It means all developers have to know about the quirks of Python 2 and 3, regardless of which one they use primarily. It means that we always have to remember to add all the right compatibility imports at the top of files, and avoid things which are one version-only. And it means extra builds in the test matrix. Aaron Meurer -- You received this message because you are subscribed to the Google Groups "sympy" group. To unsubscribe from this group and stop receiving emails from it, send an email to [email protected]. To post to this group, send email to [email protected]. Visit this group at To view this discussion on the web visit For more options, visit yt-dev mailing list [email protected]
https://mail.python.org/archives/list/[email protected]/message/Q4XV2IJ4WXPBG2BSLVFDITR3JMPZXL2W/
CC-MAIN-2022-21
refinedweb
832
72.16
The function definition of "correct()" goes after and outside of "main()". [edit] Rather than retype it, see post #4 here: Printable View The function definition of "correct()" goes after and outside of "main()". [edit] Rather than retype it, see post #4 here: I mean I know since a is an int, then it just gives us 7...but I changed the int to float (or double), and it gave me the result of the fraction(78/10) 7.00000!!!!!! which is obviously not!!! (if someone going to to the favor and explain that to me, please explain what's the difference between double and float!!!!!! I just seem to understand you guys language better than my book( C by Discovery!!!, it sucks!) ) Code: int main() { float a,b; a=76; b= 76/10; printf("%f",b); return 0; } Yes, though it does have the weakness of introducing a slight bias in most cases.Yes, though it does have the weakness of introducing a slight bias in most cases.Quote: Originally Posted by Ashl7 Yes, though you probably wanted (rand() % 90) + 10 unless you want to exclude 99.Yes, though you probably wanted (rand() % 90) + 10 unless you want to exclude 99.Quote: Originally Posted by Ashl7 You probably did not print the result correctly.You probably did not print the result correctly.Quote: Originally Posted by Ashl7 Yes, because contrary to your claim that you "changed the int to float (or double)", you didn't: 76/10Yes, because contrary to your claim that you "changed the int to float (or double)", you didn't: 76/10Quote: Originally Posted by Ashl7 You probably intended to write: 76.0f / 10.0f ok, with the help of you guys and this website...I finally managed to write this damn code...all the answers I got lead to this...so thank you so much this was the question: Suppose you want to develop a program to play lottery. The program randomly generates a Lottery of a two-digit number, prompts the user to enter a two-digit number, and determines whether the user wins according to the following rule: 1. If the user matches the lottery in exact order , the awards is $10,000. 2. If the user input matches the lottery, the awards is $3,000. 3. If one digit in the user input matches a digit in the lottery, the awards is $1,000. this is the code I wrote: thanks thanks thanks...more questions to come :)thanks thanks thanks...more questions to come :)Code: #include <stdio.h> #include <stdlib.h> int main() { srand(time(NULL)); int lott,guess,ldiglot,rdiglot,ldigguess,rdigguess; lott=(rand() % 90) + 10; printf("what's your guess for the lottary number?! It's a TWO digit number...\n"); scanf("%d",&guess); rdiglot=lott % 10; ldiglot= lott/10; rdigguess= guess %10; ldigguess= guess/10; if(guess==lott) printf("WoW, you've won $10,000\n"); else if( rdiglot==ldigguess && ldiglot==rdigguess) printf("hmmmm, close...you won $3000"); else if(rdiglot==rdigguess || rdiglot==ldigguess || ldiglot==ldigguess || ldiglot==rdigguess ) printf("not the right answer,but you won $1000"); else printf("wrong answer"); return 0; } however I'm still struggling with the question I asked at the beginning of the thread... You mean like Code: switch ( rand() % 4 ) { case 0: // is it obvious yet? } "Very good!", "Excellent!", "Nice Work!", "Keep up the good work!" one of the above, randomly...we have to use rand() function here with the help of switch!!!!!! :/ the one u gave wasn't really obvious( to me, a retarded guy ;) ) to be honest Print something. Then case 1 and print something else. alright...with the help of u guys, I wrote this program for the question at the beginning of the thread: now a few questions:now a few questions:Code: #include <stdio.h> #include <stdlib.h> #include <time.h> void func(void) { srand(time(NULL)); int a,b,c; a= rand() % 10; b= (rand() % 10); printf("How much is the product of %d and %d? \n",a,b); scanf("%d",&c); } void wrong ( void ) { const char *responses[] = { "No. Please try again.", "Wrong. Try once more.", "Nope,Don’t give up!", "No...Keep trying." }; printf("%s\n", responses[rand()%4] ); } void correct( void ) { const char *responses[] = { "Very good!", "Excellent!", "Nice Work!", "Keep up the good work!" }; printf("%s\n", responses[rand()%4] ); } int main() { int a,b,c,d; printf("Hello, this is computer assited instruction for multiplication...\n"); func(); d=a*b; if(c==d) correct(); else wrong(); return 0; } my program fails the part of the exercise that says "A separate function should be used to generate each new question. This function should be called once when the application begins execution and each time the user answers the question correctly"...I'm trying to make the func() in my codes the function we want, but the answer is going to be always wrong!!! even when I enter the write answer it says you put the wrong answer....what am I doing wrong here?! the other part I have problem with is "and let the student try the same question repeatedly until the student finally gets it right." how am suppose to make a piece of code to enable this in my program?! thanks in advance The 'c' in func and 'c' in main are two different variables. Try something in main like c = func(); And make func() return a result. I can't quite understand that, could u pls explain more?! I guess that way the function will not be the one the assignment needs it to be!!
https://cboard.cprogramming.com/c-programming/151313-developing-easy-cai-programm-c-2-print.html
CC-MAIN-2018-05
refinedweb
933
73.98
The following are a set of functions for creating simple properties - like Ruby's attr_reader, attr_writer, and attr_accessor. If, inside a class definition, you write:         attribute(foo=1, bar=2) simple properties named 'foo' and 'bar' are created for this class. Also, private instance variables '__foo' and '__bar' will be added to instances of this class. By "simple properties", I mean something like the following:         ''' assume we're inside a class definition and         self.__foo and self.__bar have been instantiated.         '''         def get_foo(self):                 return self.__foo         def set_foo(self, value):                 self.__foo = value         def del_foo(self):                 del self.__foo         def get_bar(self):                 return self.__bar         def set_bar(self, value):                 self.__bar = value         def del_bar(self):                 del self.__bar         foo = property(fget=get_foo, fset=set_foo, fdel=del_foo, doc="foo")         bar = property(fget=get_bar, fset=set_bar, fdel=del_bar, doc="bar") Discussion For simple properties, defining the get, set, and/or del methods you'll pass to property() can be repetitive. attribute.py provides functions which can be used in any new-style class to simplify (and reduce the code required for) the creation of these properties. You may also be interested in the following metaclass implementation: correction. should have been: def set_baz(self, value): self.__baz = value baz = property(fset=set_baz) not "bar" sorry for any confusion, Sean correction. should have been: def set_baz(self, value): self.__baz = value baz = property(fset=set_baz) not "bar" sorry for any confusion, Sean re: correction. The corrections noted above have been made. Breaks on inheritance. mangle's reliance on obj.__class__ means that it'll be unable to find the private variable if someone attempts to fetch a property via a subclass. To fix it, we need to get the class name somehow. Fortunately, that's in the stack, too. :) Posted updated module as source distribution. I've posted a source distribution for this [1], with an explanation on my blog [2]. To avoid treading on Sean's code, I've used distutils to give appropriate credit: him as the author, with me in as a maintainer. Sean, if you want to take back over and maintain at your site, let me know. 1: 2: Small Problem. Hi. First, thank you for pointing out this algorithms limitation with regard to inheritance. I've taken your suggestions and incorporated them into the code(as you'll notice). I've made some changes; mostly for personal taste. For instance, I changed __classAndLeftOperands__() to __outside__() (yours is more descriptive, but it is quite long) Anyway, there was one change I made that was not purely cosmetic: className = stack[1][2] became cls = stack[-3:][0][-2] This is important. On my system, for instance, to gain access to the appropriate stack frame from the front(?) of the stack, I need to call: className = stack[6][2] to get the intended results. My system has more overhead, seemingly. Anyway, accessing the data from the back of the stack seems more reliable, as is evidenced by the original __leftoperands__() having worked on your system. Other than that, this is a fine improvement. Thanks. Now, if you can figure out how to remove the requirement for having to provide the private instance variables for each property, that'd be great. Aha! I'd noticed that my code was extremely brittle -- readonly() would fail if I compiled my application with py2exe, and intermittently if I tried to reload a module. I'll try it with your modifications! No go. Py2exe strips source lines... ... so __outside__ can't get the source line. I'm exploring aloud on my blog, and will report back if I make any progress. Or, not. :) Got it! Try something along these lines: ... and then... ... to create the read-only attributes. Not depending on the source, it works properly under py2exe. That said, the syntax leaves a lot to be desired. Better yet! This works under python, python -OO, and py2exe: When you update the code, set __version__ to "1.7"; using a float also breaks py2exe. :) Sorry. Unfortunately, the new __outside__() doesn't work on my system. Using the test code I provide in the comments, but with all code in __main__ below test = Test() commented out and the new __outside__() with print statements, i.e., def __outside__():  ...  cls, names = code.co_name, list(code.co_names)  print "class: ", cls  print "names: ", names  ...  args = names[:names.index(caller)]  print "args: ", args  ... I get the following output: class: Test names:['__name__', '__module__', 'readonly', 'foo', 'bar', 'writeonly', 'fro', 'boz', '__init__'] caller: readonly args: ['__init__', 'boz', 'fro', 'writeonly', 'bar', 'foo'] class: Test names: ['__name__', '__module__', 'readonly', 'foo', 'bar', 'writeonly', 'fro', 'boz', '__init__'] caller: writeonly args: ['__init__', 'boz', 'fro'] You'll notice that the args for readonly is _not_ ['foo', 'bar']. So, unfortunately, __outside__() remains brittle, so long as your intent is to use py2exe. Version 1.6 seems ...ok..., otherwise. deprecate __outside__(). For the moment __outside__() remains brittle. I propose that it be deprecated and that readonly2() replace readonly(). readonly2() appears to be stable. The syntax is ok, for now. I do have one minor change: def __readonly(name):  name = mangle(cls, name)  def get(obj):  return property(fget=get) I realized that mangle() was being called every time a property was used, not just when the property is created, which is, of course, unnecessary. Until __outside__() can be made stable, I recommend that it not be used. Let me know what you think. If you agree, I will update the current version to reflect these decisions. Just one more idea: What I'd like to see is def readonly(**kwds): ... so that the usage would be readonly(foo=1, bar=2) More importantly, I'd like to remove the requirement for providing the private instance variables '__foo', '__bar', i.e., I'd prefer not to have to do this: readonly('foo', 'bar') ... def __init__(self):  self.__foo = 1  self.__bar = 2  ... In the meantime, readonly2() will suffice. Let me know your thoughts. Sean This works. Try this: def readonly(**kwds):  stack = traceback.extract_stack()  cls = stack[-2:][0][-2]  frame = sys._getframe(1)  def __readonly(name, default):  for name,value in kwds.items(): usage: readonly(foo=1, bar=2) Note: does not require you to provide private instance variables __foo and __bar (!) It's a bit of a bodge, because you have to set the private instance variable every time the property is called. It's nicer in writeonly, where this bodge is not required. I like this. Let me know what you think before I update the recipe. fixed bodge. def __readonly(name, default):  name = mangle(cls, name)  def get(obj):  return property(fget=get) This avoids calling setattr() every time property is called. It is only called if getattr() fails, i.e., on the first call. All subsequent calls to getattr() will succeed, and setattr() will not be called. Version 1.8. I've decided to go ahead and post version 1.8. I've changed __outside__() so that now it simply takes care of the repetitive code that was in both readonly() and writeonly(). Let me know if this latest version gives you any problems. For now, I'm very happy with this version. This is what I was hoping to make all along. Thanks for your help. Sean Aah. I think I get it, now. The problem with my __outside__ is that it assumes that readonly() or writeonly() are the LAST items called. In your test code, __init__ is defined lastish, hence the problem. I guess we could search through the frame above's locals to check to see whether the names are defined. The new keyword argument style certainly saves us having to grope around for the names, but can we stick with the use of sys._getframe()? Its results seem a little more deterministic than wading through the trace. co_stacksize? code.co_stacksize might have the number of lvars we need. ... but it's still breaking if I use readonly() twice, so I'm not quite there yet. Note also that I missed that my earlier code was accidentally reversing the arguments. Whups. I'm not so allergic to having to define the instance variables myself in __init__, especially as I'd be using self.__varname inside my class for performance reasons. I'm also worried about readability; with the first syntax, it's immediately obvious to the reader that variables are being defined, even if they're not sure exactly how. I've removed the traceback code, and have replaced it with sys._getframe() code. You can still initialize the corresponding private instance variables(CPIVs) yourself, and you can still use them as self.__foo inside your class. But, the thing I like about how readonly() and writeonly() work now, is that you don't have to initialize those variables. What does this accomplish? Well, if you look at the test code I include in the discussion, you'll see a reduction from 18 lines to 5 lines of code(from defining read/writeonly props. the old way to defining them our way). In fact, you save 3 lines of code for each read/writeonly prop. If you init your CPIVs, you still save 2 lines/prop. It's up to you whether you want to declare them yourself or not. Regarding readability, I agree that foo, bar = readonly() more obviously appears to be assigning readonly properties to the class. Still, readonly(foo=1, bar=2) has the benefit of creating self.__foo and self.__bar, "automagically", with default values. The previous syntax would continue to require that CPIVs be hand-coded. That is not what I want. Explicit may be better than implicit, but in this case, I would say: Is it more obvious from foo, bar = readonly() that you must make CPIVs by hand, or is it more obvious from readonly(foo=1, bar=2) that CPIVs have been created for you? When I think about it, neither is overly obvious, but I think the former is less so than the latter. You could combine them, I suppose. foo, bar = readonly(foo=1, bar=2) This does appear to be more explicit, if not entirely more obvious. And it allows you to supply default or initial values for your readonly properties. Note: foo, bar, fro, boz = readonly(1,2,3,4), is just wrong foo, bar, fro, boz = readonly(foo=1, bar=2, fro=3, boz=4), I think, is better. While readonly(foo=1, bar=2, fro=3, boz=4) alone, without assignment, is acceptable to me. Or, perhaps __readonly__(foo=1, bar=2, fro=3, boz=4) to let people know there's "automagic" happening... I wish ASPN had stuck with linear, non-heirachial discussion threads. I think we're vectoring in on a choice between foo, bar = readonly(foo=2)( foodefaulted, barrequiring manual attribute creation) and something like __readonly__('bar', foo=2). I like the double-underscorage making it obvious that we're doing something a little different. Aaaah. Of course. If we make initialisation mandatory, then we know how many items to kick out, and we don't need to grope through frames trying to figure out the names of our lvars... It seems to me the ideas in this recipe could be accomplished more easily using a metaclass. First, suppose that __readonly__ creates one or more temporary instances of some class and stores them in its caller's locals dict. Assuming this dict is the locals of the class-defining function, it is the same dict which is passed to the metaclass's __new__ function, along with the new class's name. So in the __new__ function, you simply hunt through the dict to look for instances of your temporary class, and then replace them with properly formed property instances. E.g.: Then real classes can use ExtProp as a mixin base class to get this special property support. I like this solution. It's clever, and reading it was an education. The only advantage I see to Garth's and my solution over yours is the fact that our solution does not require users to make their classes subclass ExtProp to get the desired behaviour. If they put rowo.py in their path, they can just import the behaviour, e.g.  from rowo import * But, that's not much of a difference. It's just a matter of taste, I suppose. I happen to prefer:  from rowo import *  class MyClass(object): to  from ExtProp import *  class MyClass(ExtProp): Probably because the latter gives me the impression that my class is dependent upon another class. The former does not leave me with this impression. Other than that, I really did enjoy reading your code. Thank you very much for your contribution. Sean
http://code.activestate.com/recipes/157768/
crawl-002
refinedweb
2,123
66.74
pvandoorn About - Username - pvandoorn - Joined - Visits - 150 - Last Active - Roles - Member - Ok i understand, i will look into this myself. Although I want to let you know that the multiaccelerometer function does start measurering even when only one metawear is connected. - It is working pretty well now thanks! There are just 2 (small) things that i did not get from the dotnet tutorial. When i use the multiaccelerometer function, sometimes the program starts with measurements even when one of the sensors is not connect… - Thanks Eric! It was indeed a typo. On the tutorial website there is this code in the program.cs: await (Task) type.GetMethod("Run", BindingFlags.NonPublic | BindingFlags.Static) .Invoke(null, new object[] { args.TakeLast(args.Leng… - Am i missing something here? Add code blocks, what does this mean? Could you give me an example of how the program.cs should look like to be able to run ScanConnect? - Sorry i still don't get it. I ran the project from cmd but I keep getting a similar error. - thanks! I will look into this.in two metawear data streaming Comment by pvandoorn January 2019 - The gyroscope value does not change. The Accerometer and Magnometer do change depending on my calibration. Here is my full code taken from the C# tutorial: namespace StarterApp { public sealed partial class DeviceSetup : Page { private IMe…in sensorfusion calibration Comment by pvandoorn December 2018 - I have set up a TCP client/server so i can stream the data to Unity. Can someone tell me how to access the data in the following method (used from C# tutorials): protected async override void OnNavigatedTo(NavigationEventArgs e) { base.O…in Use metawear in Unity Comment by pvandoorn December 2018
https://mbientlab.com/community/profile/comments/3573/pvandoorn
CC-MAIN-2021-43
refinedweb
284
58.58
The goal of this article is to show how to share User controls between the ASP.NET Web Application and Visual Web Part for SharePoint. The basic steps to make this happen are as follows: We will look at those steps in more detail in the rest of the article. 1. Start by creating a new solution and adding a new ASP.NET Web Application, call it 'CustomUIControl'. Make sure you selected the .NET Framework 3.5 when creating a project. Since we are exposing this control to SharePoint and SharePoint project can only be created under 3.5, we have to use a lower framework for the controls project. CustomUIControl Delete Default.aspx page and add Web User Control by right clicking on the project and selecting Add/New Item. Name the Control CustomerControl.ascx. Code your ascx page as you normally would. 2. When you are done with your control, start by signing the project with the key and building the project using Visual Studio. To sign the project, go to properties of your project, Signing tab, check sign the assembly and point to existing or issue a new key. Building the project from Visual Studio will build your CustomUIControl.dll. You will need to compile the project with special settings using the Framework 2.0 compiler to get a separate ascx assembly. In order to get this assembly, open command window and change your directory to the 2.0 Framework. Make sure to run as an Administrator. Compile your project using the following line: aspnet_compiler -p c:\source\article\customuicontrol\customuicontrol -v customuicontrol -fixednames c:\source\articlebuild -keyfile c:\source\article\customuicontrol\ customuicontrol\contstest.snk Let me explain what each path means: customuicontrol CustomUIControl If you look at the compiled directory, you will see several files that were created by the compilation process. We will need both the files, CustomUIControl.dll and the App_Web_customercontrol.ascx.cdcab7d2.dll. The latter file contains the Web Control portion we are going to be using for the UI. Put both the assemblies in the GAC. Run command window from your VS2010 as an Administrator. Run the following commands to register your assemblies in the GAC: gacutil /i c:\source\articlebuild\bin\customuicontrol.dll gacutil /i c:\source\articlebuild\bin\app_web_customercontrol.ascx.cdcab7d2.dll Verify you added both of the assemblies to the GAC. You can locate registered assemblies in C:\Windows\assembly. 3. Add a new ASP.NET Web Application to test out your control. Add both assembly references to the new project; make sure to change copy local to True in the Properties window. Your web application does not have to be looking in the GAC, but you can modify your web.config to add those assemblies from GAC if you have a need for it. True In your Web Application, select the aspx page you want to drop the control into and add a tag to register the control. You can look up what the values should be in your tag by double clicking your App_Web_customercontrol.asxc.cdcab7d2.dll in the References folder. As you can see, for your register tag: <%@ Register TagPrefix="CC" Namespace="ASP" Assembly="App_Web_customercontrol.ascx.cdcab7d2" %> The assembly name is App_Web_customercontrol.ascx.cdcab7d2, namespace is ASP and the name of the control to use on the page is customercontrol_ascx. App_Web_customercontrol.ascx.cdcab7d2 customercontrol_ascx Add the new tag to your aspx page: <cc:customercontrol_ascx and run the project to verify control behaves as expected. 4. The next step is to add a new SharePoint Feature project to your solution. Add new Visual Web Part under SharePoint 2010 project and name it SPCustomerPart. SPCustomerPart Rename your web part to CustomerWebPart and your ascx control to CustomerWebPartUserControl.ascx. CustomerWebPart Your solution explorer should look similar to the screen shot below: Add CustomUIControl.dll and App_Web_customercontrol.ascx.cdcab7d2.dll to your SharePoint project, copy local should be set to False in the Properties Window. False We need to make sure our assemblies are referenced in the web.config for the SharePoint project; otherwise it would not know where to find it. Locate your web.config, under default configuration, ours is located under C:\inetpub\wwwroot\wss\VirtualDirectories\80. Open web.config file and add assembly registration to the file: You can look up version and PublicKeyToken in the GAC c:\windows\assembly for the assemblies you have registered earlier. 5. Back in the SharePoint project, open your CustomerWebPartUserControl.ascx and add the register tag and your user control on the page: This should look exactly like Web Application tags, or if you wish you could add a tag to the web.config of your SharePoint in which case you would not need Register tag on the page. At this point, you should be able to run your Feature and add it to your SharePoint page. When you validated your Feature works as expected, you can deploy your feature. In the command window, change your directory to c:\program files\common files\microsoft shared\web server extensions\14\bin or browse to where your stsadm.exe is located. Run the following command to deploy: stsadm -o addsolution -filename [full path your.wsp] stsadm -o deploysolution -name [no path just name.wsp] -allowgacdeployment -immediate -url [your url] You can run your SharePoint site and verify your web part works.
https://www.codeproject.com/Articles/295324/User-control-reuse-for-SharePoint-web-parts
CC-MAIN-2017-22
refinedweb
888
58.89
On Sun, Sep 22, 2019 at 9:28 AM Randy Dunlap <[email protected]> wrote:>> On 9/20/19 5:18 PM, Brendan Higgins wrote:> > Add a test for string stream along with a simpler example.> >> >>> > ---> > lib/kunit/Kconfig | 25 ++++++++++> > lib/kunit/Makefile | 4 ++> > lib/kunit/example-test.c | 88 ++++++++++++++++++++++++++++++++++> > lib/kunit/string-stream-test.c | 52 ++++++++++++++++++++> > 4 files changed, 169 insertions(+)> > create mode 100644 lib/kunit/example-test.c> > create mode 100644 lib/kunit/string-stream-test.c> >> > diff --git a/lib/kunit/Kconfig b/lib/kunit/Kconfig> > index 666b9cb67a74..3868c226cf31 100644> > --- a/lib/kunit/Kconfig> > +++ b/lib/kunit/Kconfig> > @@ -11,3 +11,28 @@ menuconfig KUNIT> > special hardware when using UML. Can also be used on most other> > architectures. For more information, please see> > Documentation/dev-tools/kunit/.> > +> > +if KUNIT>> The 'if' above provides the dependency clause, so the 2 'depends on KUNIT'> below are not needed. They are redundant.Thanks for catching that. I fixed it in the new revision I just sent out.> > +> > +config KUNIT_TEST> > + bool "KUnit test for KUnit"> > + depends on KUNIT> > + help> > + Enables the unit tests for the KUnit test framework. These tests test> > + the KUnit test framework itself; the tests are both written using> > + KUnit and test KUnit. This option should only be enabled for testing> > + purposes by developers interested in testing that KUnit works as> > + expected.> > +> > +config KUNIT_EXAMPLE_TEST> > + bool "Example test for KUnit"> > + depends on KUNIT> > + help> > + Enables an example unit test that illustrates some of the basic> > + features of KUnit. This test only exists to help new users understand> > + what KUnit is and how it is used. Please refer to the example test> > + itself, lib/kunit/example-test.c, for more information. This option> > + is intended for curious hackers who would like to understand how to> > + use KUnit for kernel development.> > +> > +endif # KUNITCheers
https://lkml.org/lkml/2019/9/23/173
CC-MAIN-2020-40
refinedweb
301
51.44
Hetu Script Warning Hetu is early WIP! We are focusing on making Hetu stable and feature complete right now. Discussion group: Discord: QQ 群: 812529118 Introduction Hetu is a lightweight script language purely written in Dart for embedding in Flutter apps. The main goal is to enable Flutter app have hotfix and scripting ability. We did not choose to use another existing language to achieve the goal. Because we want to keep the language simple, and keep it away from interference of other language's complex implementation and their irrelative-to-Flutter eco-system, and make the debug process pain-free and remain in Dart realms. It takes very little time to bind almost anything in Dart/Flutter into Hetu and use similar grammar to write your app. And to communicate with classes & functions in Dart is very easy. Quick start Hetu's grammar is close to typescript/kotlin/swift and other modern languages, need very little time to get familar with. - Optional semicolon. - Function is declared with 'fun, get, set, construct'. - Optional type annotation. Variable declared with 'let, const' will infer its type from its initializer expression. In your Dart code, you can interpret a script file: import 'package:hetu_script/hetu_script.dart'; void main() async { var hetu = Hetu(); await hetu.init(); await hetu.import('hello.ht', invokeFunc: 'main'); } While hello.ht is the script file written in Hetu, here is an example: // Define a class. class Person { var name: str construct (name: str) { this.name = name } fun greeting { print('Hi! I\'m', name) } } // This is where the script starts executing. fun main { var ht = Person('Hetu') ht.greeting() } Binding Hetu script is purely written in Dart, so passing object to and from script is extremely easy. Check this page for more information about how to bind external classes, functions, enums and how to passing object and functions between Dart and script. import 'package:hetu_script/hetu_script.dart'; void main() async { var hetu = Hetu(); await hetu.init(externalFunctions: { 'hello': () => {'greeting': 'hello'}, }); await hetu.eval(r''' external fun hello fun main { var dartValue = hello() print('dart value:', dartValue) dartValue['foo'] = 'bar' return dartValue }'''); var hetuValue = hetu.invoke('main'); print('hetu value: $hetuValue'); } Command line tool Hetu has a command line REPL tool for testing. You can activate by the following command: dart pub global activate hetu_script Then you can use the following command in any directory on your computer. (If you are facing any problems, please check this official document about pub global activate) hetu [file_name] [invoke_func] If file_name is provided, evaluate the file in function mode. If invoke_func is provided, evaluate the file in module mode and call a certain function with given name. If no option is provided, enter REPL mode. In REPL mode, every exrepssion you entered will be evaluated and print out immediately. If you want to write multiple line in REPL mode, use '\' to end a line. >>>var a = 42 >>>a 42 >>>fun hello {\ return a } >>>hello function hello() -> any // repl print >>>hello() 42 // repl print >>> Referrences: Apps that embedded Hetu: Libraries - hetu_script - HETU SCRIPT 0.0.1 [...]
https://pub.dev/documentation/hetu_script/latest/
CC-MAIN-2021-21
refinedweb
511
57.27
Hello World The first example is the well-known Hello World application. The application will show a window displaying 'Hello World' in its statusbar. HelloWorldApp.h - The HelloWorldApp definition Each wxWindow application needs an object derived from wxApp. Each application overrides the OnInit() method for initializing the application. You can, for example, create your main window here. #ifndef INCLUDED_HELLOWORLDAPP_H #define INCLUDED_HELLOWORLDAPP_H // The HelloWorldApp class. This class shows a window // containing a statusbar with the text "Hello World" class HelloWorldApp : public wxApp { public: virtual bool OnInit(); }; DECLARE_APP(HelloWorldApp) #endif // INCLUDED_HELLOWORLDAPP_H HelloWorldApp.cpp - The implementation of HelloWorldApp For the main window, you use the wxFrame class. This class provides a window whose size and position can be changed by the user. It has thick borders and a title bar. In addition, you can provide it a menu bar, a statusbar and a toolbar. Example 1.5 shows the implementation of HelloWorldApp. // For compilers that don't support precompilation, include "wx/wx.h" #include "wx/wxprec.h" #ifndef WX_PRECOMP # include "wx/wx.h" #endif #include "HelloWorldApp.h"; } When your compiler supports precompiled headers, you can use the wxprec header file. When it doesn't, you should include wx.h, which includes all necessary header files for wxWindow. You can also include each header file separately for each control. The macros DECLARE_APP and IMPLEMENT_APP do the following for us: - When the platform needs one, it creates a main() or WinMain() method. - It creates the global method wxGetApp(). You can use this function to retrieve a reference to the one and only application object: HelloWorldApp &app = ::wxGetApp(); You could be wondering why the frame variable isn't deleted anywhere. By setting the frame as the top window of the application, the application will delete the frame for us (for a more in-depth explanation, see Avoiding Memory Leaks). Some broken compilers don't allow NULL to be casted to wxFrame* implicitly, so that's why we do it explicitly, just to be on the safe side. - Really? Is this true even now? The C++ standard (which is 5 years old now) requires that NULL, which is 0 (or 0L, or 0s -- but not (void *)0) can be cast to any pointer type. Even MSVC 6 gets this right -- anyone using such a broken compiler should probably upgrade. After the frame is constructed, a statusbar is created with the CreateStatusBar method. The text of the statusbar is set to "Hello World". Calling Show() shows the frame. Show() is a method of the wxWindow class which wxFrame derives from. When OnInit() returns false, the application immediately stops. This way you can stop the application when something went wrong during the initialization phase. Thanks to Franky Braem for the initial content for this page Compiling To compile under linux using g++, use the following command: g++ HelloWorldApp.cpp `wx-config --libs` `wx-config --cxxflags` -o HelloWorldApp
https://wiki.wxwidgets.org/Hello_World
CC-MAIN-2018-47
refinedweb
478
58.58
The ‘if’ preprocessor directive for the compiler in C# .NET February 20, 2015 2 Comments You can decorate your C# source code with “messages” to the compiler. There are a couple of predefined preprocessors that the compiler understands. A common scenario is when you’d like to run some part of your code in Debug mode but not in Release mode or any other Build type. The following method shows the ‘if’ and ‘elif’ preprocessors: private static void TryPreprocessors() { # if DEBUG Console.WriteLine("You are running the Debug build"); # elif RELEASE Console.WriteLine("You are running the Release build"); #else Console.WriteLine("This is some other build."); # endif } If we’re running this code with the Debug build type then the Debug section will be executed. Note the gray colour code in Visual Studio for the currently unavailable paths: Now switch the Build type to Release. We’re expecting the RELEASE block to be highlighted. However, it’s the else-if block that is now the valid execution path. Open the Properties window of the project and click the Build tab: You’ll notice that DEBUG and TRACE can be selected in their respective boxes but RELEASE is missing. Enter RELEASE in the symbols textbox: If you then go back to our code then the RELEASE section should be highlighted: You can now switch between Debug and Release builds and the various execution paths will be automatically updated. View all various C# language feature related posts here. Can I have more then one confitional symbols defined? For example, RELEASE and VERSION1? I suppose it should be comma or semi-colon separated. Comma separated: //Andras
https://dotnetcodr.com/2015/02/20/the-if-preprocessor-directive-for-the-compiler-in-c-net/
CC-MAIN-2021-17
refinedweb
274
66.03
anyone has an example to call stored proc from ArcSDESQLExecute (arcpy)? any help will be greatly appreciated anyone has an example to call stored proc from ArcSDESQLExecute (arcpy)? any help will be greatly appreciated This thread is old, but maybe this will help someone. I couldn't get this function to work. It kept returning True instead of rows. From the help doc we know what a Boolean return means. This isn't particularly helpful if you know a stored procedure returns rows. But at least it is something. ....for statements that do not return rows, it will return an indication of the success or failure of the statement (True for success; None for failure). When i turned to pandas as an alternative I received a similar error message from sqlalchemy: File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\lib\site-packages\sqlalchemy\engine\result.py", line 1077, in _non_result "This result object does not return rows. " sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically. This led me to a stackoverflow comment that suggested using SET NO COUNT when working with stored procedures and sqlalchemy. I wondered if this might work with ArcSDESQLExecute. It didn't. But, it solved my problem with pandas. Here is an example if you want the result of a stored procedure and have hit a wall with this function. # Dependencies: ArcGIS PRO 1.4.1 (Python 3.5.4) # Currently using python instance: C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\python.exe # SQLAlchemy and pyodbc were added manually to arcgis pro interface import urllib import sqlalchemy as sa import pandas as pd #utilizing SET NOCOUNT sqlStatement = "SET NOCOUNT ON EXEC dbo.spPropCharaSelectByNBHD 312524" #connection string pointing to DSN created with ODBC Data Source Admin params = urllib.parse.quote_plus("DSN={yourDSNnameHERE};Trusted_Connection=yes") engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params) #create pandas dataframe current_source = pd.read_sql_query(sqlStatement,engine) Here is an example from the help docs: ArcSDESQLExecute—Help | ArcGIS Desktop Geodatabase Python
https://community.esri.com/thread/190552-arcsdesqlexecute-arcpy-for-stored-proc
CC-MAIN-2019-22
refinedweb
340
51.24
This page contains an archived post to the Java Answers Forum made prior to February 25, 2002. If you wish to participate in discussions, please visit the new Artima Forums. System.gc() Posted by Matt Gerrans on July 01, 2000 at 2:54 AM In the section "Garbage Collection and Finalization of Objects," Venners says "Implentations [of JVMs] can decide when to garbage collect unreferenced objects -- or even decide whether to garbage collect them at all." This implies that gc() is only a suggestion that it is okay by you (performance-wise, I would imagine) for the garbage collection to do its thing. Eckel's book also implies this under "Order of garbage collection," though he doesn't specifically mention gc(), either. Several books have it outright wrong and state that gc() forces garbage collection (maybe you saw one of these). The specification is pretty clear on the subject: "Calling this method suggests that the Java Virtual Machine expend effort toward recycling discarded objects in order to make the memory they currently occupy available for quick reuse. When control returns from the method call, the Java Virtual Machine has made a best effort to recycle all discarded objects." The key word being "suggests," of course. Like the old "register" keyword in C/C++. So the answers to your questions are:- When it deems it necessary.- The implementation of the JVM you are using didn't think garbage collection was warranted at the point you suggested (which is not too surprizing, as you had only worked with a few bytes -- try an array of a few million books and see if it kicks in!). - Matt > class Book{> boolean checkedOut = false;> Book(boolean checkOut){> checkedOut = checkOut;> }> > void checkIn(){> checkedOut = false;> }> > public void finalize(){> if(checkedOut)> System.out.println("Error : checked out");> }> }> > public class DeathCondition{> public static void main(String[] args){> Book novel = new Book(true);> novel.checkIn();> new Book(true);> System.out.println("ABC"); //22> System.gc();> }> }> When garbage collecter does its job? > and why does not the above program print "Error : checked out" without Line 22 ?
https://www.artima.com/legacy/answers/Jun2000/messages/95.html
CC-MAIN-2019-13
refinedweb
345
61.56
Re: Wanted: a little class around here - From: "Keith Miller MVP" <k.miller79@xxxxxxxxxxx> - Date: Mon, 20 Feb 2006 13:46:03 -0600 I do understand what you want, but it's not trivial. If it could be done it would require writing an icon handler in C#. Don't know if explorer would like/allow an icon handler for folders since it seems to already have some hard-coded icon handling of its own. The one thing I could suggest is creating some dummy file types. These could add entries to the 'New' right-click menu. Instead of creating a new file, they could run a script that creates a new folder with your custom icon already in place. If you're interested in that, I can post a detailed example. -- Good Luck, Keith Microsoft MVP [Windows XP Shell/User] "Rev. Bob 'Bob' Crispen" <revbob@xxxxxxxxxxx> wrote in message news:Xns976F7A56FC66Erevbob@xxxxxxxxxxxxxxxx A little class for a specific type of folder, that is. Let me explain. I generally organize my Start menu pretty anally, which means an extra step after nearly every install, but at least I can find stuff without seeing a jungle of (mostly) corporate names and product names. So anyhow, in my folder (program group) that holds 2D graphics applications I've got a "Doc" subfolder. Ditto for my folder for 3D graphics, and so on. And to make things a little nicer, I've put a special icon on all program group folders. And then, one at a time, I've gone through the folders and put a special icon on my "Doc" folders so I can find them easily. Clear so far? Wouldn't it be nice, I said to myself, if I could create a special class of folders that I could use for "Doc" folders? Then I could customize the icon for the folder class once and all the folders of that class would get their icon automatically. There are things that are darn close to it: first there's the whole idea of program groups, which are just a class of folders. Then there's the way you can add things to namespaces. It looks like somebody who actually knows something about Windows should be able to create a new folder type without having to go through Active Directory (which is the only reference I found on MS Techweb for "folder class" and looks like something different anyhow). Anybody tried this? -- Rev. Bob "Bob" Crispen bob at crispen dot org Ex Cathedra weblog: The Golden Age of the Internet was last year. The Golden Age of the Internet was *always* last year. . - References: - Wanted: a little class around here - From: Rev. Bob 'Bob' Crispen - Prev by Date: Re: sorting files/folders order - Next by Date: Re: Stealth Folder - Previous by thread: Re: Wanted: a little class around here - Next by thread: Volume on Startup - Index(es):
http://www.tech-archive.net/Archive/WinXP/microsoft.public.windowsxp.customize/2006-02/msg00407.html
crawl-002
refinedweb
485
79.5
Update 12/1/11: Several changes have been made to this blog post.’. (Dec 2011 update) After listening to the feedback we have decided to allow hosters to use the On-Premises configuration of Exchange to host, we announced this here:). - Our intended audiences for this feature are organizations that: - Require some form of sub-divided address book or who wish to create several ‘virtual’ (Dec 2011 update) review the following blog post:.. General Manager, Exchange Customer Experience @Kevin Allison: We wish this would’ve come sooner. Thanks for planning to make this a feature customers can deploy easily, rather than the manual permissions hacks required earlier. Exchange team scores again for communicating transparently with customers. Must say you guys are probably the best product team inside Microsoft! Good to hear this is coming – look forward to further posts fleshing out the details of the new policy module and configuration options! Long wait on this feature! When the SP2 beta is available publicly? Great news and better late than never. Of course we all want to know what the migration path from the 2007 whitepaper environment is and how painful it might/will be. Thanks…. This is great news for us. We try to seperate out teaching staff from our students using multiple ALs for our 18 000 users. Too bad it took so long but better late than never! Premise and premises are two different words. I would hope that we will get documentation that explains how to deploy this in a variety of typologies. (E2k3) E2k7 ,ETC. Excellent! Good job listening to customers! @Charlie – We will be providing documentation to help people who are using 2007 and ACL based segmentation move to ABP’s. This is a 2010 feature though, and mailboxes will need to be on 2010 mailbox servers to benefit from ABP’s. Hi Greg, how about those of us Exchange evangelists who enthusiastically migrated our 2007 ALS supported environment to 2010 once RTM was available, months before the announcement that 2010 ALS was not supported? Hopefully there will be a workable transition (I assume this wouldn’t be much different than the changes necessary to migrate a 2007 ALS environment)… Thanks! Thank you for listening to our customer feedback … would be really helpful to let customer know in preparation the pre-requisites to implement SP2 :) Will SP2 support hosting mode for UM role? This is a limitation that prevent some customer projects to be implemented. The segregation is a dificult option since it may not have support of MS Will there be specific client side requirements for the addessbook exposure @pesos – you need to get your environment back into a supported state, that would be my advice. We’re not going to be testing scenarios we don’t currently support. I think it’s a bit early to get into specifics on how we will be helping 2007 customers move to ABP’s, but you should be prepared for some additional work to get you back to a known state. Sorry. @Fung and Luis – Watch out for the SP2 information we will be publishing soon. As we said in the post though, using ABP’s to try and host Exchange is not the goal of the feature, it will not compare to the true tenanting capabilities available when Exchange is used in /hosting mode. @Liam – no, there will not be any client side requirements. Excellent news, you made my day! If the GAL segregation could be achieved using security groups then it would make my week!!!! Thank you for listening to our customer feedback please clarify , is there any workaround for exchange 2010 on premise with gal segregration? No Tayo Dada, there is no supported workaround for Exchange 2010. The only supported solution will be that delivered with SP2. Regarding that MS decide not to provide a whitepaper "Configuring Virtual Organizations and Address List " for Exchange 2010 how is the way (or recommendation) for customers who installed a Exchange 2003 resource forest with a manually created addresslist separation (like different virtual org. in one Exchange Org) to upgrade to Exchange 2010 in Mixed Mode. @Sasha2011 – if you have a resource forest right now, and want to move those mailboxes back in to the the account forest, which would be running 2010 SP2, with the new ABP feature, the path would likely be (and we will be providing guidance for migrating to the feature) to create ABP’s that mirror what you have in 2003, then move mailboxes. Once the mailbox is on 2010 it will be subject to the ABP’s you have created, and as long as the views you have on both sides of thye system are consistent, the user experience should be consistent. That’s the expected behavior at this time. Of course, hosted mode is missing some key features that make it very unattractive in all but the most rare scenarios. So I’m surprised to see it mentioned here as a viable option, but then again, I’m continually surprised by the number of options Ex 2010 is missing has compared to earlier versions of Exchange. For the record, here are some crucial options not supported by Hosting Mode The Exchange Management Console Public Folders Unified Messaging GAL synchronization Calendar sharing Child domains Discontiguous namespaces Disjointed namespaces @JC – I believe that hosting mode really is intended for large-scale hosters that typically have the resources to build custom provisioning tools which make the EMC obsolete. It is not really usable (nor intended) for smaller-scale needs which will hopefully be well met by this new option in SP2. I still feel that the timing of the announcements re:2010 were poorly handled, as many of us had already migrated our supported 2007 ALS environments to 2010 LONG before any mention of incompatibility was brought out on Dave’s blog. These environments will hopefully be considered when documentation is provided on "getting back to a supported state" in order to properly migrate to SP2 ABP. @pesos – we will not be testing with any starting point of unsupported configurations, I’m sorry. We are sorry the announcement took a long time to come out, it was a complex issue to work through from our side. You will need to be prepared to thoroughly test your own configuration when the feature comes out and work through your own migration plan – your configuration is unique most likely. As highlighted in this post, ABP’s are not going to allow you to host Exchange in the sense most of us agree on as ‘hosting’. Enterprise, or on-prem Exchange, is not the right product for that. If you really want to host Exchange and sell hosted Exchange mailboxes, then you need to use the /hosting version and build a platform with that as the base. @pesos – your point is well taken, though I think it also reaffirms the fact that hosted mode is not a solid alternative to this missing feature. Looking forward to trying SP2 with the feature comes out. Cheers. @Greg Taylor I wan’t to move any mailbox to the account forest or wan’t install any exchange 2010 in the res.forest. I tried already install exchange 2010 into the existing exchange 2003 resource forest nearly all seems to work but from a Mapi Client connected to the old ex2k3 mailboxserver i get no GAL and therefore no resolution for any user (OWA no problem! adresslist pointed directly via msexchquerybasedn to the addressbook.The Rights are not changed into 2003 therfore it seems that exchange 2010 is changing something in the GAL Feature… Correction to my blog above: I wan’t to move any mailbox to the account forest or wan’t install any exchange 2010 into the account forest. @Sasha2011 There is a way to "Configuring Virtual Organizations and Address List " for Exchange 2010, its a bit tricky but very much possible. Get in touch with a comapny called "outlook247" and they would give a solution. I have a setup done by them and all work well on exchange 2010 and now they have upgraded the same to exchange 2010 sp1 also. @Sasha2011 – Sorry Sasha, I don’t quite understand what you are trying to do. @Winexch – if a customer were to take this approach, we wouldn’t be able to support them, so I hope that is clearly understood. If an issue were hit and this was found to be the cause, we would be stuck. Not a good place to be with your email system. @ Greg Taylor Yes I am aware of the situation but had no choice other than taking the support of Outlook247, as we have 8 companies and each company GAL should not be visible to other company users. MS did not give us a solution for 3 Months. The solution was good enough and our exchange 2010 holds more than 2300 mailboxes with heavy useage. I think what Sasha2011 wants is, he currently has a setup with resource forest with exchange 2003, and has implemented GAL segrigation. Now he wants to transition to exchange 2010, and keep the GAL segrigation the way it was on exchange 2003. He would have no issues with GAL in Webmail/OWA as it is from "msexchquerybasedn" attribute for the user to populate but when it comes to GAL in MAPI it is from "showInAddressBook" Thanks Very good news! I run a very small hosting company with a service desk that needs its EMC and customers using Public folders so the hosting version is not really an option at this point. Good News for all, I s there any new Roll ups for Exchange server 2010 Sp 1 except roll up 2 How long should Rollup 2 for Exchange 2010 SP1 take? I have a 2GHz AMD Opteron 246 with 6GB of memory running Win2008R2 Std. I’ve disabled all Exchange services, turned off antivirus, turned off DEP except for core services, set a HOSTS file with crl.microsoft.com set to 127.0.0.1, and turned over CRL revocation checking. The Rollup has been running for hours with mscorsvw.exe eating 95-100% of the CPU. Every so often (15 – 20 minutes), mscorsvw.exe errors out and I close it and then it starts back up again. How long should I expect this to run? Whenever mscorsvw.exe errors, is it restarting or just continuing on the next .NET assembly? Any comments would be welcome. I can be reached at [email protected]. Hello Exchange Team, you wrote: "This approach is not intended to provide complete tenant isolation as is provided in the Hosting mode available in Exchange 2010 SP1…" Can you explain what exactly the differences are/will be? Thx! Hi Marcus, we’ll have more details to come, but off the top of my head, /hosting provides admins for tenant orgs with scoping to objects in their own tenant org automatically, they can use remote powershell and only see/change their own objects. That kind of thing isn’t anywhere near as easy with on premise Exchange. That’s just one example. There are lots of things like this that are much easier with /hosting. Very good article! thankyou! I am currently investigating on how to incorporate Exchange 2010 in our soon-to-be multi-tenant hosting environment. We are a relatively small company focusing on the SMB market, and we want to be able to have an answer for people asking for a cloud-based solution. Exchange 2010 in hosting mode seems to be the most logical option to achieve this, since its truly "multi-tenancy" and allows you to split everything up. The only problem I have with this is the fact that we have limited R&D time to dedicate to this and that there is almost no documentation for Exchange 2010 hosting mode. It’s not possible for me to "reverse-engineer" the whole thing… I need something I can refer to. I’ve asked this question to an Exchange 2010 MvP and he/she (don’t want to name anybody) said to me that even the people at Redmond don’t know all that much about Exchange 2010 hosting mode and that it was developed outside the "normal" Exchange team. She also said for me to wait until SP2 so I could use ALS. As an Exchange developer, I would like to hear your point of view on this. And if you truly are in favor of using Exchange 2010 in hosting mode, could you please point me in the direction of some usable documentation to get started with this? Thanks a bunch! Kind regards, Bart I would start with this page Bart – And I will certainly clarify that hosting mode Exchange was most certainly not developed outside the regular team, I’m not sure why anyone would think that was the case. Hi everyone. We are currently running 2010 in non hosted mode. We currently host 5 different companies in this environment with plans to scale to 30-100. As far as we can tell the only limitation is with the gal… Which will be addressed in sp2. The alternative, hosted mode, contains way more gotchas. Why on earth, for our requirements, would I introduce more complexity with less features as suggested by those who say only hosted mode supports true multi-tenant. The companies using our solution will not be larger than 100 users, otherwise we would dedicate an exchange server for them. Am I missing something and there is more risk than I am understanding. I remember when vendors use to say running applications virtual was not supported. Is this a similar situation with multi-tenant "only" being supported in hosted mode. This will be a great feature. Will you be adjusting the capability of OOF in conjunction with this? So users seperated via Address Book Policies don't receive the "Inside" OOF but the "Outside" one. Thanks… It is amazing that Microsoft looks at its customers like the Japanese goverment looks at their residents. Give them only what you want them to hear and hide what is really going on. In the case of Exchange, it is only fathomable that Microsoft would not include GAL separation capabilites that have been available in all modern Exchange deployments prior to 2010 because they want those types of clients moved to their cloud services. No other reason would explain the unbridled complexity of doing this along with the loss of so much functionality. This constitutres abuse by Microsoft towards their customer base. It is inexcusable behavior but typical of an organization which has lost its way. As an Exchange 2003 Enterprise user hosting e-mail for many small customers, the move to 2010 is basically impossible and apparently designed this way by Microsoft. To even produce a new version of Exchange which does not support this (2010 pre SP1) is treachery and then to add it in SP1 with such cripling loss of functionality (no Public Folders, no GUI, no UM, etc.) is disgusting behavior. Users should be keenly aware that Microsoft is deceiving you in order to hold on to their dwindling market share by forcing small Exchange hosters into cloud services enriching Microsoft while desimating revenues for the small hoster. To me, it is unconscienable behavior. I know I cannot be alone in my dissatisfaction with Microsoft on these important issues. @ Adam Kessler "As an Exchange 2003 Enterprise user hosting e-mail for many small customers, the move to 2010 is basically impossible " Hi Adam, its not impossible at all. I am currently running: 1 2003,2007,2010 mixed exchange org 1 2007,2010 mixed exchange org 1 2010 exchange org All with segregation! We have a manual, multi-tenet environment and can not see any benefit in migrating to /hosting. G/AL Update does make out lives a little easier when creating/deleting users.
https://blogs.technet.microsoft.com/exchange/2011/01/27/gal-segmentation-exchange-server-2010-and-address-book-policies/
CC-MAIN-2016-30
refinedweb
2,658
59.94
#include <Servo.h>Servo servo;int test;//motor 1 int en1=11;//speed of motor 1int M11=7;int M12=8;// motor 2int en2=3; //speed of motor 2int M21=2;int M22=4;int spee=100;char g;void setup() { servo.attach(10); pinMode(en1,OUTPUT);pinMode(M11,OUTPUT);pinMode(M12,OUTPUT);pinMode(en2,OUTPUT);pinMode(M21,OUTPUT);pinMode(M22,OUTPUT);Serial.begin(9600); // put your setup code here, to run once:}void loop() {();}} so it looks like this i use serial only to test the code and I will cancel it laterthe problem is :when i upload the full code to arduino the only function that work is distance();at first i think the problem was with the connection so i upload only half the code that contain the motor functions and i try to control my car with serial .......it work?that is goodand when i upload the part that have distance() function with small modify so that i could see the distance at serial monitor and it work too? I don't really understand what you are trying to say here.It seems like the motor control works OK from Serial if you don't have the newPing library includedBut when newPing library is included you can't control the motors, but you do get correct distance readings.Is that correct?The PWM pins 3 and 11 use Timer2. Try using pins 5 and 6 (Timer 0) for your motor speed control and change newPing to pins 3 and 11. However I have no experience of the newPing library....R();}// else if(g=='a'){// Step2();} test=distance(0);Serial.println(test); forward(); delay(2000); test=distance(90);Serial.println(test); back(); delay(2000); Stop(); delay(5000); and really thanks for Your patience with me //motor 1 int en1=6;//speed of motor 1int M11=7;int M12=8;// motor 2int en2=5;int M21=2;int M22=4;int spee=100;char g;void setup() { servo.attach(9); pinMode(en1,OUTPUT);pinMode(M11,OUTPUT);pinMode(M12,OUTPUT);pinMode(en2,OUTPUT);pinMode(M21,OUTPUT);pinMode(M22,OUTPUT);Serial.begin(9600); }void loop() { forward(); delay(2000); back(); delay(2000); Stop(); delay(5000); } In the second program loop() seems to expect a value from Serial which it then uses to choose a particular movement. Does any of that work? Comment out all the distance() related code in that version and add a Serial.print() statement so you can see what value is being received with Serial.read()Serial Input Basics - simple reliable ways to receive data....R but any other character has no effect when i send 1 or 2 0r any other number it never work Without code to let you see what data is actually received you are trying to find a black cat in a dark room....R can i power servo s3003 from arduino uno only ? or i need 4xA battery 1.5v?
http://forum.arduino.cc/index.php?topic=510652.msg3482470
CC-MAIN-2018-43
refinedweb
488
52.19
import java.*: Reflection Chuck Allison Chuck does a little reflecting as he brings this column to a close but only a little. You shouldnt do much either. Ive been doing a little reflecting lately. This column began in January 1999 with these words: Hello again. import java.* invites you, the C/C++ programmer, to learn Java... . I will be examining all aspects of the language, library, and culture, but from a C/C++ perspective. To explore the entire Java library would take more years than I care to think about, but I believe Ive covered most of the language. Since I have recently accepted the position of Senior Editor for CUJ, this will have to be the last installment of import java.*, so I would like to take this opportunity to cover the one language feature I havent yet mentioned: reflection. Meta Programming The first thing to say about reflection in Java is that you should almost never use it. Reflective programming, also known as meta programming, gives you access to information you usually dont need. It allows you, for instance, to inspect an object at run time to determine its dynamic type. Whether a particular object reference actually points to an object of the indicated type or a subclass is irrelevant, however, in most situations, because you want to let polymorphism do its work. So when do you need reflection? If youre in the business of writing debuggers or class browsers or some such software, then you need it, and nothing else will do. Under the assumption that you will find yourself there someday, this article shows you the basics. Class Objects For every class in Java there is a corresponding singleton object of type Class, instantiated when the class is loaded, and which holds information about that class. You can obtain that object in four different ways. If you have the name of the class available when youre coding, you can use a class literal, as in Class c = MyClass.class; For completeness, the primitive types also have a class object, and you can get your hands on them with the class literal technique, or from the TYPE field of the corresponding wrapper class: Class c = int.class; Class c2 = Integer.TYPE; // same // object as c The forName method always requires a fully-qualified class name. A second and more flexible avenue to a class object is to give the fully qualified class name as a parameter to Class.forName, as the following statements illustrate. Class c = Class.forName("MyClass"); Class c = Class.forName("java.lang.String"); A third method for getting class objects is to call Object.getClass on behalf of an object instance: // "o" can be any object: Class c = o.getClass(); The fourth way of getting class objects is to call the reflection methods defined in Object.Class itself, which I use throughout this article starting in the next section. The program in Listing 1 illustrates the first three techniques. When you use ObjectgetClass, you always get the class corresponding to the dynamic type of the object. In Listing 1, for example, it doesnt matter that I store a reference to a Sub object in a Super variable since the actual object is an instance of Sub, the class object returned is Sub.class. The Class.forName method will throw a ClassNotFoundException if the class cant be loaded. As you can see, Class.toString prints the word class or interface as needed. You can also see that arrays are instances of classes with interesting names, formed with left brackets (as many as there are array dimensions) followed by one of the following letters, depending on the underlying component class: B byte C char D double F float I int J long L<classname>; S short Z boolean This explains why you see "class [F" and "class [LSuper;" in the output of Listing 1. Okay, now that you have a class object, what can you do with it? Lots. The only interesting information in a primitive types class object is its name, but for real classes you can find out just about everything. In Listing 2, I trace a classs ancestry all the way back to the Object root class with the Class.getSuperClass method. Since we typically depict superclasses above subclasses, I use a stack [1] to store all the class names as I walk up the inheritance tree, so I can eventually print them out in that top-down order. The class java.lang.Object is the only class that returns null for the Class.getSuperClass method; I use this fact to halt the process. The program in Listing 3 goes a little further by also detecting any interfaces a class implements by calling Class.getInterfaces, or, if the class represents an array, the program discovers the type of its components by calling Class.getComponentType. If a class object returns false for calls to isInterface, isArray, and isPrimitive, then it must be a simple class. The output reveals that arrays are implicitly cloneable and serializable, which is a Good Thing, since there is no syntax that allows you to declare them so. On Further Reflection If youre writing a browser or debugger, you will surely want more than just the name of a class. There are methods to get all the fields, constructors, methods, and modifiers of a class, which are represented by instances of the classes Field, Constructor, Method, and Modifier, respectively. They are all defined in the package java.lang.reflect. The modifiers include the access specifiers public, private, and protected, as well as static, final, and abstract. If a class represents an interface, the keyword interface appears in the list of modifiers as well. These keywords are encoded in an integer returned by Class.getModifiers. You can query the integer code with suitable Modifier class constants, or you can just get the string representation for all modifiers in a single string by passing the integer to the static method Modifier.toString, as I do in Listing 4. This program produces a listing similar in syntax to the original class definition minus the method bodies (similar to a class declaration with prototypes in C++). The first thing ClassInspector2.inspect does is determine the package, if any, that the outer class it received as a parameter resides in. Then it gets the classs modifiers and prints them along with the keyword class, if applicable, and finally the class or interface name. It then displays the superclass extended and any interfaces implemented by the class, like Listing 3 did. Now the fun begins. My doFields method calls Class.getDeclaredFields, which returns an array of Field objects representing all of the fields declared in the class, irrespective of access specification. There is another method, Class.getFields, not used in this example, which returns all of the public fields in a class as well as all of the public fields inherited from all superclasses. From each Field instance, I extract its modifiers, type, and name for display. The methods doCtors and doMethods do the analogous actions for all constructors and methods declared in the class. Constructors and methods are treated differently because constructors dont have return types, and because Constructor objects can be used to create objects dynamically through class objects. Class.getParameterTypes returns an (possibly empty) array of class objects representing the types of the parameters the method takes (see the calls to doParameters in doCtors and doMethods). You can also call getExceptionTypes for constructors and methods, although I decided not to in this example. The method Class.getDeclaredClasses returns an array of class objects containing type information for all the nested classes declared inside the class that inspect is processing. All I have to do for these is call inspect recursively, indenting appropriately for readability. (Thats what the field level is for.) The sample output from this program, found in Listing 5, is for the class java.util.TreeMap, which I chose because it illustrates all the features supported by ClassInspector2. Most of the methods and constructors have been omitted from the output listing to save page real estate. You may find it interesting that you can get information on private fields and methods. Since the methods illustrated in Listing 4 only give you declaration information, its really no different than having access to an include file containing a class definition in C++. You can view the private declarations, but you have no access to the actual data they represent. If youre writing a debugger, though, you need to be able to access the private data. How to get that access is easy to illustrate, but difficult to thoroughly explain. The program in Listing 6 inspects arbitrary objects by determining all fields at run time, including inherited fields (ignoring those in java.lang.Object). The method Field.get yields the value of a field as an instance of java.lang.Object. If the field is a primitive, the value is automatically wrapped in an instance of the corresponding object wrapper class. Whenever you try to set or get a field, the run-time system verifies your access rights, just as the compiler does with normal, non-reflected code. If I hadnt called AccessibleObject.setAccessible(fields,true) [2] in ObjectInspector.inspect, I would have been greeted with an IllegalAccessException the first time I tried to access one of the fields in the TreeMap object, since none is public. Whether you ultimately get access permission depends on the class loader and security manager in use both topics outside the scope of this article. Suffice it to say that the default case for an application (versus an applet) allows me to get the output that appears in Listing 6 without error. Meta Execution Im a little reluctant to write this section. When I tell you that you can mimic C-like function pointers in Java, I just know youre going to be tempted to use them the same way you do in C, but you shouldnt. There are better ways to pass functions in C++ and Java (e.g., function objects), but that is, alas, yet another topic for another day. Anyway, yes, you can pass method references around, and, as youd expect, you can determine which method you want to execute at run time by its name string. To get a method reference, you need its name and a list of the types of its arguments. In Listing 7, Ive defined a class Foo and a class Bar, each with like-named methods (am I creative with names, or what?). To get a method reference at run time, call Class.getMethod with two arguments: the method name as a String, and an array of argument types. You can use null or an empty Class array to match methods that take no arguments. You can only get references to public methods, but if the method youre after isnt in the class itself, the run-time system will look in superclasses to find it. Be prepared to handle a NoSuchMethodException if the method doesnt exist. To invoke the method, you pass the object to invoke it for if its a non-static method and a list of expected parameters in array of Object to Method.invoke. The program in Listing 8 shows how to invoke static methods you just use null as the first parameter to Method.invoke. Its important to remember to place the array of String, which is the argument to main, of course, in the first position of the Object array representing the arguments passed to main. This particular program is a program launcher it finds the main method for the class represented by args[0] and passes the remaining command-line arguments to that main. Summary I havent shown it here, but there is a newInstance method in the Constructor class for creating objects, and there is also an Array class in java.lang.reflect for creating and manipulating arrays dynamically. Isnt reflection fun? Its almost too fun. I hope you find little need for it. Im sure you can appreciate how it is useful for inspecting objects at run time, like in a class browser or a program that processes JavaBeans. If thats not what youre doing, let polymorphism and good object-oriented design solve your problems instead. Notes [1] Im using the LinkedList collection class to implement a stack. For more on collections see the September 2000 installment of import java.*. [2] Field, Constructor, and Method all derive from AccessibleObject. Chuck Allison is senior editor of CUJ and has been a contributing editor since 1992. He began his career as a software engineer in 1978, and for most of the 1990s was a contributing member of the C++ Standards committee. Hes the author of the intermediate-level text, C & C++ Code Capsules (Prentice-Hall, 1998), based on articles published in this journal. During the day Chuck is an Assistant Professor of Computer Science at Utah Valley State College. You can contact him through his website at <>.
http://www.drdobbs.com/reflection/184403920
CC-MAIN-2018-05
refinedweb
2,174
62.68
My iBook running 2.5.72/NPTL0.48 (from bk://ppc.bkbits.net/linuxppc-2.5) gives me figures comprable to the Redhat 9 figures. You should be able to replicate this with the attached two files. gcc -O2 -lpthread -o wakeup wakeup.c If you think the test is not measuring what it should I would be interested in why. -i #include <stdio.h> #include <unistd.h> #include <pthread.h> #include <signal.h> #include <sys/time.h> /* the time to run each test */ #define TIME_TO_WAIT 5 /* FOR EACH TEST DEFINE : */ /* define a ul in your test that says how many things it did */ extern unsigned long things_done; /* define a string in your test to say what it wasd doing */ extern char * things; /* a function (that doesn't return) to run your test */ void do_test(void); struct timeval start,end; /* on alarm print out results */ void on_alarm(int signo) { struct timeval diff ; double diff_secs; /* grab things done before we continue as we do not kill the potentially running threads until we exit below */ unsigned long stamp = things_done; gettimeofday(&end, NULL); timersub( &end, &start , &diff ); diff_secs = diff.tv_sec + diff.tv_usec*1e-6; printf("%d %s in %.4g sec = ", stamp, things, diff_secs); printf("%.0f per second\n", stamp / diff_secs ); exit(0); } /* main */ int main(int argc, char *argv[]) { /* setup alarm handler */ static struct sigaction alarm_m; alarm_m.sa_handler = on_alarm; sigfillset(&(alarm_m.sa_mask)); sigaction(SIGALRM , &alarm_m, NULL); alarm( TIME_TO_WAIT ); gettimeofday( &start , NULL ); do_test(); sleep(TIME_TO_WAIT + 10); } #include "thread.c" /* = 0; #ifdef DEBUG printf("EMPTIED!"); #endif pthread_cond_signal( &condition.empty ); pthread_mutex_unlock( &condition.mutex ); } } void do_test(void) { pthread_t threads[10]; int i = 0; /* have 10 workers */ for ( ; i < 10 ; i++ ) pthread_create( &threads[i], NULL, thread, (void*)NULL ); /* fill a queue, signal to threads to empty it */ while ( 1 ) { pthread_mutex_lock( &condition.mutex ); /* if the queue is full, signal for worker to clean it */ if ( condition.value ) { pthread_cond_signal( &condition.full ); pthread_cond_wait( &condition.empty , &condition.mutex); } pthread_mutex_unlock( &condition.mutex ); /* fill it back up */ pthread_mutex_lock( &condition.mutex ); #ifdef DEBUG printf("FILLED\n"); #endif condition.value = 1; pthread_mutex_unlock( &condition.mutex ); } }
https://listman.redhat.com/archives/phil-list/2003-June/msg00046.html
CC-MAIN-2019-22
refinedweb
338
61.22
“Python for Android Development”- heard a lot about it?? Lets here see how it works and how to do with a small brief introduction. Getting up and running on python-for-android (p4a) is a simple process and should only take you a couple of minutes. Its better if referred to Python for android as p4a. Android APK packager for Python scripts and apps. The executable is called python-for-android or p4a. Installing p4a in Python for Android Development p4a is now available on Pypi, so you can install it using pip: pip install python-for-android Installing Dependencies p4a has several dependencies: - git - ant - python2 - cython (Install via pip) - Java JDK (e.g. openjdk-8) - zlib (including 32 bit) - libncurses (including 32 bit) - unzip - virtualenv (Install via pip) - ccache (optional) - autoconf (for ffpyplayer_codecs recipe) - libtool (for ffpyplayer_codecs recipe) - cmake (required for some native code recipes like jpeg’s recipe) Installing Android SDK You need to fetch and unpack the Android SDK and NDK to a directory (let’s say $HOME/Documents/): - Android SDK - Android NDK For the Android SDK, you can get the ‘command line tools’. When you have extracted these you’ll see only a directory named tools, and you will need to run extra commands to install the SDK packages needed. Using Python for Android Development The following are some of the applications of P4A ; 1- Build a Kivy or SDL2 application To build your application, you need to specify name, version, a package identifier, the bootstrap you want to use (sdl2 for kivy or sdl2 apps) and the requirements: p4a apk --private $HOME/code/myapp --package=org.example.myapp --name "My application ˓→" --version 0.1 --bootstrap=sdl2 --requirements=python3,kivy 2- Build a WebView application To build your application, you need to have a name, version, a package identifier, and explicitly use the webview bootstrap, as well as the requirements: p4a apk --private $HOME/code/myapp --package=org.example.myapp --name "My WebView ˓→Application" --version 0.1 --bootstrap=webview --requirements=flask --port=5000 Commands The following are set of commands that is generally used in android dev; - –debug Print all the logging information. - –sdk_dir The directory for Android SDK location. - –android_api The android API level to use. - –ndk_dir Complete file path where Android NDK is present. - –name Name of the app distribution. - –requirements List of all python modules in the app. - –force-build [BOOL] Forcibly build the app from scratch. - –arch Architecture for which you need to build your app. Examples, arm7, arm64, armeabi-v7a, etc. - –bootstrap BOOTSTRAP The complete List of bootstrap for your requirements. Usually, this command is optional and it automatically detects the List of bootstraps for your project. Working on Python for Android Development 1- Runtime permissions] ) 2- Dismissing the splash screen explicitely in your code, use the android module: from android import hide_loading_screen hide_loading_screen() 3- Handling the back button Android phones always have a back button, which users expect to perform an appropriate in-app function. If you do not handle it, Kivy apps will actually shut down and appear to have crashed. In SDL2 bootstraps, the back button appears as the escape key (keycode 27, codepoint 270). You can handle this key to perform actions when it is pressed. For instance, in your App class in Kivy: from kivy.core.window import Window class YourApp(App): def build(self): Window.bind(on_keyboard=self.key_input) return Widget() # your root widget here as normal def key_input(self, window, key, scancode, codepoint, modifier): if key == 27: return True # override the default behaviour else: # the key now does nothing return False 4- Pausing the App The above are some of the common and most used usages in Python for android development. Must Read XML to CSV Conversion Using Python Python Code to Convert a Table to First Normal Form Python Spectrogram Implementation in Python from scratch
https://www.pythonpool.com/an-introduction-to-python-for-android-development/
CC-MAIN-2021-43
refinedweb
643
52.39
How would you go about using a C style string in inline assembly? I have been working on this for quite awhile now...but cant get it to work...here is my code, its not long: It is supposed to input a string, then print it, wait for a keypress, and exit..It is supposed to input a string, then print it, wait for a keypress, and exit..Code: #include <iostream.h> #include <conio.h> #include <stdio.h> #include <stdlib.h> char string[20]; int main (void) { memset(string, 0, 20); asm { mov dx, seg string mov ds, dx lea dx, string mov ah, 0Ah int 21h } printf("%s", string); getch(); return 0; } however, it is not inputting, just going straight to the getch()...anybody good with inline assembly around here that could help out?
http://cboard.cprogramming.com/cplusplus-programming/10614-inline-assembly-question-printable-thread.html
CC-MAIN-2014-15
refinedweb
135
85.49
ACL_EQUIV_MODE(3) BSD Library Functions Manual ACL_EQUIV_MODE(3) acl_equiv_mode — check for an equivalent ACL Linux Access Control Lists library (libacl, -lacl). #include <sys/types.h> #include <acl/libacl.h> int acl_equiv_mode(acl_t acl, mode_t *mode_p); The acl_equiv_mode() function checks if the ACL pointed to by the argu‐ permis‐ sion bits, and is considered equivalent with the traditional file per‐ mission bits. If acl is an equivalent ACL and the pointer mode_p is not NULL, the value pointed to by mode_p is set to the value that defines the same owner, group and other permissions as contained in the ACL. On success, this function returns the value 0 if acl is an equivalent ACL, and the value 1 if acl is not an equivalent ACL. On error, the value -1 is returned, and errno is set appropriately. If any of the following conditions occur, the acl_equiv_mode() function returns the value -1 and sets errno to the corresponding value: [EINVAL] The argument acl is not a valid pointer to an ACL. This is a non-portable, Linux specific extension to the ACL manipulation functions defined in IEEE Std 1003.1e draft 17 (“POSIX.1e”, abandoned). acl_from_mode
http://man7.org/linux/man-pages/man3/acl_equiv_mode.3.html
CC-MAIN-2017-22
refinedweb
194
50.77
blit man page blit — Copies a rectangular area from one bitmap to another. Allegro game programming library. Synopsis #include <allegro.h> void blit(BITMAP *source, BITMAP *dest, int source_x, int source_y, int dest_x, int dest_y, int width, int height); Description Copies a rectangular area of the source bitmap to the destination bitmap. The source_x and source_y parameters are the top left corner of the area to copy from the source bitmap, and dest_x and dest_y are the corresponding position in the destination bitmap. This routine respects the destination clipping rectangle, and it will also clip if you try to blit from areas outside the source bitmap. Example: BITMAP *bmp; ... /* Blit src on the screen. */ blit(bmp, screen, 0, 0, 0, 0, bmp->w, bmp->h); /* Now copy a chunk to a corner, slightly outside. /* blit(screen, screen, 100, 100, -10, -10, 25, 30);(3), stretch_blit(3), draw_sprite(3), gfx_capabilities(3), set_color_conversion(3) Referenced By draw_sprite(3), ex12bit(3), ex3d(3), exaccel(3), exalpha(3), exbitmap(3), exblend(3), excamera(3), excolmap(3), exconfig(3), excustom(3), exdata(3), exdbuf(3), exexedat(3), exflip(3), exjoy(3), exkeys(3), exlights(3), exmem(3), expackf(3), expat(3), exquat(3), exscale(3), exscn3d(3), exshade(3), exsprite(3), exstars(3), exswitch(3), extrans(3), extrans2(3), exunicod(3), exupdate(3), exxfade(3), exzbuf(3), masked_blit(3), masked_stretch_blit(3), stretch_blit(3).
https://www.mankier.com/3/blit
CC-MAIN-2017-47
refinedweb
228
53.21
Introduction MatPlotLib is a module to produce nice-looking plots in Python using a wide variety of back-end packages, at least one of which is likely to be available for your system. Data Files matplotlib requires some data files: (This works for recent versions of matplotlib; for older versions, see this page's history for alternative approaches.) Backends matplotlib has many backends; if all of them get included, your distribution could get large. You might want to do something like: If you omit a backend, you must make sure it isn't the default - for example if you package the wx backend and not Tcl/Tk. The default backend is configured in mpl-data/matplotlib.conf. You can override the configuration in your program by doing, for example: Other things you may need Below are snippets that others have, at one time or another, needed to get matplotlib working. Includes and excludes Data Files 1 from distutils.core import setup 2 import py2exe 3 4 from distutils.filelist import findall 5 import os 6 import matplotlib 7 matplotlibdatadir = matplotlib.get_data_path() 8 matplotlibdata = findall(matplotlibdatadir) 9 matplotlibdata_files = [] 10 for f in matplotlibdata: 11 dirname = os.path.join('matplotlibdata', f[len(matplotlibdatadir)+1:]) 12 matplotlibdata_files.append((os.path.split(dirname)[0], [f])) 13 14 15 setup( 16 console=['test.py'], 17 options={ 18 'py2exe': { 19 'packages' : ['matplotlib', 'pytz'], 20 } 21 }, 22 data_files=matplotlibdata_files 23 ) Setup.py using py2exe with Python2.5 and matplotlib 0.91.2 --~~~~ In this example, a simple program was created where a matplotlib figure canvas is placed in a PyQt child window. In order to compile it with py2exe and matplotlib 0.91.2 with Python 2.5, it is necessary to include the necessary modules and then add the data files properly. On this system, the matplotlib was installed to the folder Python25\Lib\site-packages. Within the matplotlib folder, the matplotlib-data is saved in mpl-data. Using the above methods would result in the classic "RuntimeError: Could not find the matplotlib data files" error. Furthermore, using the method with data_files = matplotlib.get_py2exe_datafiles(), py2exe returns an error saying that 'split' is not a valid method for this object. Another problem when using glob, the and a * is located in the argument, glob will search for everything, including folders. Doing this will give you an error when compiling saying that 'fonts' is not a file. So you need to add the contents from mpl-data\fonts and mpl-data\images individually. Ensure that the first entry in the tuple in the list 'data_files' is matches the actual matplotlib data folder; in this case 'mpl-data' (and then later mpl-data\fonts and mpl-data\images). Also, the file matplotlibrc file is not returned by 'glob' so it is also added manually. This is modified from the above setup.py under Special content for setup.py to use matplotlib 1 # Used successfully in Python2.5 with matplotlib 0.91.2 and PyQt4 (and Qt 4.3.3) 2 from distutils.core import setup 3 import py2exe 4 5 # We need to import the glob module to search for all files. 6 import glob 7 8 # We need to exclude matplotlib backends not being used by this executable. You may find 9 # that you need different excludes to create a working executable with your chosen backend. 10 # We also need to include include various numerix libraries that the other functions call. 11 12 opts = { 13 'py2exe': { "includes" : ["sip", "PyQt4._qt", "matplotlib.backends", "matplotlib.backends.backend_qt4agg", 14 "matplotlib.figure","pylab", "numpy", "matplotlib.numerix.fft", 15 "matplotlib.numerix.linear_algebra", "matplotlib.numerix.random_array", 16 "matplotlib.backends.backend_tkagg"], 17 'excludes': ['_gtkagg', '_tkagg', '_agg2', '_cairo', '_cocoaagg', 18 '_fltkagg', '_gtk', '_gtkcairo', ], 19 'dll_excludes': ['libgdk-win32-2.0-0.dll', 20 'libgobject-2.0-0.dll'] 21 } 22 } 23 24 # Save matplotlib-data to mpl-data ( It is located in the matplotlib\mpl-data 25 # folder and the compiled programs will look for it in \mpl-data 26 # note: using matplotlib.get_mpldata_info 27 data_files = [(r'mpl-data', glob.glob(r'C:\Python25\Lib\site-packages\matplotlib\mpl-data\*.*')), 28 # Because matplotlibrc does not have an extension, glob does not find it (at least I think that's why) 29 # So add it manually here: 30 (r'mpl-data', [r'C:\Python25\Lib\site-packages\matplotlib\mpl-data\matplotlibrc']), 31 (r'mpl-data\images',glob.glob(r'C:\Python25\Lib\site-packages\matplotlib\mpl-data\images\*.*')), 32 (r'mpl-data\fonts',glob.glob(r'C:\Python25\Lib\site-packages\matplotlib\mpl-data\fonts\*.*'))] 33 34 # for console program use 'console = [{"script" : "scriptname.py"}] 35 setup(windows=[{"script" : "scriptname.py"}], options=opts, data_files=data_files) Updating _sort confusion. If you get an error that numarray module has no functionDict in numarraycore.py at line 176, then you'll know that py2exe confused the _sort modules that are in both numpy and numarray and need to do this step. MPL datafiles etc with Python 2.5 and MPL 0.99 Using matplotlib.get_py2exe_datafiles() helps to make this really easy now. Following is a setup.py for the MPL example "embedding_in_wx2.py" (you should comment the wxversion check in that file or make it conditional on sys.frozen). from distutils.core import setup import py2exe # Remove the build folder, a bit slower but ensures that build contains the latest import shutil shutil.rmtree("build", ignore_errors=True) # my setup.py is based on one generated with gui2exe, so data_files is done a bit differently data_files = [] includes = [] excludes = ['_gtkagg', '_tkagg', 'bsddb', 'curses', 'pywin.debugger', 'pywin.debugger.dbgcon', 'pywin.dialogs', 'tcl', 'Tkconstants', 'Tkinter', 'pydoc', 'doctest', 'test', 'sqlite3' ] packages = ['pytz'] dll_excludes = ['libgdk-win32-2.0-0.dll', 'libgobject-2.0-0.dll', 'tcl84.dll', 'tk84.dll'] icon_resources = [] bitmap_resources = [] other_resources = [] # add the mpl mpl-data folder and rc file import matplotlib as mpl data_files += mpl.get_py2exe_datafiles() setup( windows=['embedding_in_wx2.py'], # compressed and optimize reduce the size options = {"py2exe": {"compressed": 2, "optimize": 2, "includes": includes, "excludes": excludes, "packages": packages, "dll_excludes": dll_excludes, # using 2 to reduce number of files in dist folder # using 1 is not recommended as it often does not work "bundle_files": 2, "dist_dir": 'dist', "xref": False, "skip_archive": False, "ascii": False, "custom_boot_script": '', } }, # using zipfile to reduce number of files in dist zipfile = r'lib\library.zip', data_files=data_files ) In 0.99 there is a problem with some of the documentation strings, which shows up if you use "optimize=1or2", the work around is quite easy and according to the MPL list this is fixed in trunk. This kind of code: psd.__doc__ = psd.__doc__ % kwdocd Causes a TypeError exception if one uses ""optimize": 1or2," in ones setup.py. My work around is to change that type of code to: if psd.__doc__ is not None: psd.__doc__ = psd.__doc__ % kwdocd else: psd.__doc__ = "" I got this problem only with mpl/mlab.py, in which four lines need to be changed. Another issue I found if you need to use the backend_wx, it does a version check for wxPython 2.8 which always fails in the py2exe version, my work around is to add "if not hasattr(sys, 'frozen'):" at line 113 of backend_wx.py and indent the lines 114 to 131. I will report this to the mpl list and they will hopefully fix this too.
http://www.py2exe.org/index.cgi/MatPlotLib?highlight=TypeError
CC-MAIN-2016-07
refinedweb
1,211
58.79
Problems with Python __version__ parsing As stated by Armin and commenters [*] the change from 0.9 to 0.10 is a convention in open source versioning, and the fault seems to lie more on the version-parsers than the version-suppliers. [†] Armin also notes that the appropriate solution is to use: from pkg_resources import parse_version Despite it not being the fault of the version supplier, we've recognized that this can be an issue and can certainly take precautions against letting client code interpret __version__ as a float. Right now there are two ways that I can think of doing this: Keep __version__ as a tuple. If you keep __version__ in tuple form you don't need to worry about client code forgetting to use the parse_version method. Use version numbers with more than one decimal. This prohibits the version from being parsed as a float because it's not the correct format — taking the current Linux kernel version as an example: >>>>> float(__version__) Traceback (most recent call last): ... ValueError: invalid literal for float(): 2.6.26 >>> tuple(int(i) for i in __version__.split('.')) (2, 6, 26) This ensures that the client code will think about a more appropriate way to parse the version number than using the float builtin; however, it doesn't prevent people from performing an inappropriate string comparison like the tuple does.
http://blog.cdleary.com/2008/07/problems-with-python-__version__-parsing/
CC-MAIN-2017-30
refinedweb
227
60.24
'Large' program structure - basic concepts I am fairly new to both Python and Pyside. I've written a few single page scripts and some which call objects or functions in modules and things generally work as planned after the usual troubleshooting. Now, I am trying to write what for me is a large application. Currently it has a main window with menu, tool bar and widgets in the central widget. These widgets (QTabWidgets) have their own seperate modules. The tabs inside these widgets have their own modules. Inside some tabs are QTreeViews and their respective Models... and of course, are written in their own modules. The net result is a heirachical chain of modules calling one another from the top down. This was done with the intent to keep the program organized and avoid clutter. So far this has worked. The problem I'm now running into is getting objects at one end of the heirarchy to see objects at the other end and interact if needed. Similar problems between objects at the bottoms of different heirarchy chains. The immediate small picture problem is working between these scopes / namespaces. The bigger picture now has me wondering whether the program has been structured in a workable fashion. If not, how does one structure larger applications? Are there different ways to structure larger applications? Any recommended reading / examples / tutorials? Thank You First, are you talking about instances or classes? If instances: Although the general rule is: don't use global variables... sometime you do. (After all, if instances 'know' about other instances, they do, you can't pretend they don't, but you could pass them around in parameters.) Commonly, in a config.py module at the top. For example, mainWindow=None in that module. The module creating the main window would: @import config ... config.mainWindow = QMainWindow()@ If classes, search for 'circular import problem.' Best to avoid, generally by rearranging your classes, but sometimes you solve by doing imports at the bottom of a module.
https://forum.qt.io/topic/31373/large-program-structure-basic-concepts
CC-MAIN-2018-05
refinedweb
333
68.16
I Made my Own Data Type (C++)! It's called "dsq::squid". The namespace "dsq" stands for "dynamic squid". If you want to see how I created this, take a look at the attached repl. Also, if you want to learn how to make your own data type, check out this tutorial. Otherwise, let's see what "squid" can do! So my data type can actually accommodate most primitive types, making it dynamic. It can hold an integer, string, float, and a few more! Here's the basic syntax: Here's the list of types it can hold: - bool - char - short, int, long - float, double - std::string And the cool thing is, each type has it's own special method! Now let's take a look at bool. Boolean Squids As you can see, with the power of overloaded operators, we can make the object boolVar function like a traditional boolean! Well, that's just bool. Next, characters! Character Squid What's special about char is that you can actually get the ascii value of it! Should come in handy sometimes. Well, next up, integers! Integer Squids What makes the int type so special is the round method! The place is based 10. So you can round it by the 10th place, 100th place, or even 100000th place. The default is 10. The method is how you would like to round it. Up, down, or normally. Up - 3.14 becomes 3.2; Down - 3.14 becomes 3.1 Normal - 3.14 becomes 3.1 There's also a to_str() method which converts the int to a string. Next up, decimals! Decimal Squids So for decimals, it's actually very similar to ints, but the round function works a little differently. You specify how many decimals places you would like to remove (default is 3). Another special feature is the decimal_places() method, which counts the number of decimal places. It returns and int. And lastly, strings! String Squids So a few things here. Subscript operator, multiplication, length, and erase. And yeah! That's all the data types of the dsq::squid data type. Comment below if you want me to add more. Also, one more thing: Don't forget to upvote! noice I don't know, I see C casts in there (int), and I can't do ++ on the bool. Some of this could've had better performance, for example, constexpr for .size. Also, why was it a function? Really, most of this uses the C++ stdlib, makes it private, and exposes even less than std:: does. Thinking about this... I might be able to make something relatively useful. Now really, I was preparing to make a type-safe output system relatively similar to (w)cout and I had seen squid& as the return value of the operator, which is a reference to an object of the type squid, yet attempts to do so in my own code fail. @DynamicSquid cout is type-safe already, but like printf, you can easily do printf("%d", "die system, die"). But can you explain how the squid& works? @StudentFires oh you mean like for this: squid& operator ++ () { ++variable; return *this; }? @DynamicSquid yes, I have no clue as to how that's working, as I'm far from an expert on C++ classes. @StudentFires oh, okay. here, think of it like this: So the reference return type basically turns the function into the actual value it's returning. WAIT: YOU ATTACKED A REPL HOW DARE YOU MONSTER @Codemonkey51 oh oops, lol. I meant "attached" xD :) @DynamicSquid
https://replit.com/talk/share/I-Made-my-Own-Data-Type-C/36833?order=votes
CC-MAIN-2021-43
refinedweb
592
77.64
C <wchar.h> - fgetws() Function The C <wchar.h> fgetws() function reads at most (num-1) characters from the given stream and stores them as a C wide string into ws. Parsing stops if a newline character is found, in which case ws will contain that newline character, or the end-of-file is reached. A terminating null wide character is automatically appended after the characters copied to ws. Syntax wchar_t* fgetws (wchar_t* ws, int num, FILE* stream); Parameters Return Value - On success, ws ws wide characters from the file and prints them on screen. #include <stdio.h> #include <wchar.h> int main (){ //open the file in read mode FILE *pFile = fopen("test.txt", "r"); wchar_t mystring [16]; //read first 15 wide characters from the file if(fgetws(mystring, 16, pFile) != NULL) fputws (mystring, stdout); //close the file fclose(pFile); return 0; } The output of the above code will be: This is a test ❮ C <wchar.h> Library
https://www.alphacodingskills.com/c/notes/c-wchar-fgetws.php
CC-MAIN-2021-43
refinedweb
158
72.05
Holy cow, I wrote a book! The. CoInitializeSecurity The next step in the cookbook is creating a connection to a WMI namespace. We create a WbemLocator and connect it to the desired namespace. WbemLocator Step three in the cookbook is setting the security context on the interface, which is done with the amusingly-named function CoSetProxyBlanket. CoSetProxyBlanket Once we have a connection to the server, we can ask it for all (*) the information from Win32_ComputerSystem. * Win32_ComputerSystem We know that there is only one computer in the query, but I'm going to write a loop anyway, because somebody who copies this code may issue a query that contains multiple results. For each object, we print its Name, Manufacturer, and Model. And that's it.
http://blogs.msdn.com/b/oldnewthing/archive/2014/01/06/10487119.aspx
CC-MAIN-2014-49
refinedweb
124
56.15
Converting a sketch or a library from Arduino IDE 0023 to Arduino 1.0 - instead of including wiring.h, WProgram.h, WConstants.h and pins_arduino.h, you only need Arduino.h - Wire.send() is now Wire.write(), Wire.receive() is now Wire.read() - If you get error messages like "call of overloaded ‘write(int)’ is ambiguous", you probably need to add "(byte)" before a constant argument, eg: Wire.write((byte)0x00); - It is possible to use conditional statements to write code that is compatible with older and newer versions, eg: #if ARDUINO >= 100 #include "Arduino.h" #else #include "WProgram.h" #endif - More complete list and additional information: - - Display comments as Linear | Threaded
http://blog.crox.net/archives/83-Converting-a-sketch-or-a-library-from-Arduino-IDE-0023-to-Arduino-1.0.html
CC-MAIN-2018-39
refinedweb
114
61.73
Reversing a Linked List is an interesting problem in data structure and algorithms. In this tutorial, we’ll be discussing the various algorithms to reverse a Linked List and then implement them using Java. Table of Contents Reverse a Linked List LinkedList is a data structure which stores the data in a linear way. Though not in a contiguous way. Every element of a LinkedList contains a data part and an address to the next element of the LinkedList. LinkedList elements are popularly known as nodes. In order to reverse a LinkedList in place, we need to reverse the pointers such that the next element now points to the previous element. Following is the input and output. The head of the LinkedList is the first node. No other element has the address stored for this. The tail of the LinkedList is the last node. The next address stored in this node is null. We can reverse a LinkedList such that the head and tail also get changed using: - Iterative Approach - Recursive Approach Iterative Approach to Reverse a Linked List To reverse a LinkedList iteratively, we need to store the references of the next and previous elements, so that they don’t get lost when we swap the memory address pointers to the next element in the LinkedList. Following illustration demonstrates how we will reverse our LinkedList by changing the references. Create 3 instances: current, next, previous. Loop the following till current is NOT null: - Save the next Node of the current element in the next pointer. - Set the next of the currentNode to the previous. This is the MVP line. - Shift previous to current. - Shift the current element to next. In the end, since the current has gone one place ahead of the last element, we need to set the head to the last element we reached. This is available in previous. Set head to previous. Thus we have our new head of the LinkedList which is the last element of the older LinkedList. Here is a very simple implementation of LinkedList. Note that this is not a production-ready implementation and we have kept it simple so that our focus remains on the algorithm to reverse the Linked List. package com.journaldev.linkedlist.reverse; public class MyLinkedList { public Node head; public static class Node { Node next; Object data; Node(Object data) { this.data = data; next = null; } } } The Java Program to reverse a Linked List iteratively and printing its elements is given below: package com.journaldev.linkedlist.reverse; import com.journaldev.linkedlist.reverse.MyLinkedList.Node; public class ReverseLinkedList { public static void main(String[] args) { MyLinkedList myLinkedList = new MyLinkedList(); myLinkedList.head = new Node(1); myLinkedList.head.next = new Node(2); myLinkedList.head.next.next = new Node(3); printLinkedList(myLinkedList); reverseLinkedList(myLinkedList); printLinkedList(myLinkedList); } public static void printLinkedList(MyLinkedList linkedList) { Node h = linkedList.head; while (linkedList.head != null) { System.out.print(linkedList.head.data + " "); linkedList.head = linkedList.head.next; } System.out.println(); linkedList.head = h; } public static void reverseLinkedList(MyLinkedList linkedList) { Node previous = null; Node current = linkedList.head; Node next; while (current != null) { next = current.next; current.next = previous; previous = current; current = next; } linkedList.head = previous; } } Output: 1 2 3 3 2 1 Reverse a Linked List Recursively To reverse a LinkedList recursively we need to divide the LinkedList into two parts: head and remaining. Head points to the first element initially. Remaining points to the next element from the head. We traverse the LinkedList recursively until the second last element. Once we’ve reached the last element set that as the head. From there we do the following until we reach the start of the LinkedList. node.next.next = node; node.next = null; To not lose a track of the original head, we’ll save a copy of the head instance. Following is the illustration of the above procedure. The Java program to reverse a LinkedList recursively is: public static Node recursiveReverse(Node head) { Node first; if (head==null || head.next == null) return head; first = recursiveReverse(head.next); head.next.next = head; head.next = null; return first; } We just pass the head.next in the recursive call in order to go to the end of the LinkedList. Once we reach the end, we set the second last element as the next of the last element and set the next pointer of second last element to NULL. The last element will be marked as the new head. Use the following code to reverse the linked list using the recursive approach. myLinkedList.head = recursiveReverse(myLinkedList.head); The output will remain as the previous iterative approach. Space Time Complexity of Reversing a Linked List Space Complexity – O(1) Looks like you haven’t tested your recursive solution…null pointer exception will be thrown for size 1.
https://www.journaldev.com/23035/reverse-a-linked-list
CC-MAIN-2021-10
refinedweb
792
67.04
Use Structure as a key of map Is it possible to use a structure as the key of a map?! If it is, then how can I do this? thanks in advance. Yes, but you have to implement the Boolean comparison operator '<' that the map class uses to order the entries. For example: struct mystruct { int x, y; bool operator < ( const mystruct &b ) { return x < b.x || ( x == b.x && y < b.y ); } }; map< mystruct, int > z; Got it.. Thanks man.. Using this just solve today's CF round D2 E.. :) With pleasure. The Boolean comparison operator< is also used by the sort() function to sort an array of structure items, and by other containers such as a set or a list when it is required to order a set or a list of structure items. For example, mystruct a[ 100 ]; set< mystuct > b; list< mystruct > c; sort( a, a + 100 ); c.sort(); This method helps in solving many CF problems as well. Thanks for your kind information :) Suppose I have a vector of a structure. It is sorted by the operator overloading method. Now, can I apply upper_bound and lower_bound operations on this vector ? how ? Yes, as long as the vector has been sorted using the Boolean operator<. Example: struct mystruct: public pair< int, int > { mystruct( const int x, const int y ) { first = x, second = y; } bool operator < ( const mystruct &b ) const { return first < b.first || ( first == b.fist && second < b.second); } friend ostream& operator << ( ostream &os, const mystruct &b ) { return os << b.first << " " << b.second; } }; int main() { vector< mystruct > z; z.emplace_back( 3, 4 ); z.emplace_back( 1, 2 ); z.emplace_back( 5, 6 ); sort( z.begin(), z.end() ); for( auto x: z ) cout << x << endl; cout << endl; cout << *lower_bound( z.begin(), z.end(), mystruct( 3, 3 ) ) << endl; cout << *upper_bound( z.begin(), z.end(), mystruct( 4, 4 ) ) << endl; } Output: 1 2 3 4 5 6 The following is the previous example augmented with struct myvector wrapper for vector< mystruct >: struct myvector vector< mystruct > struct mystruct: public pair< int, int > { mystruct( const int x, const int y ) { first = x, second = y; } bool operator < ( const mystruct &a ) const { return first < a.first || ( first == a.first && second < a.second ); } friend ostream& operator << ( ostream &os, const mystruct &a ) { return os << x.first << " " << x.second; } }; struct myvector: public vector< mystruct > { void order() { sort( begin(), end() ); } friend ostream& operator << ( ostream& os, const myvector &v ) { for( auto x: v ) os << x << endl; return os; } iterator lb( const mystruct &x ) { return lower_bound( begin(), end(), x ); } iterator ub( const mystruct &x ) { return upper_bound( begin(), end(), x ); } } z; int main() { z.emplace_back( 3, 4 ); z.emplace_back( 1, 2 ); z.emplace_back( 5, 6 ); z.order(); cout << z << endl; cout << *z.lb( mystruct( 3, 3 ) ) << endl; cout << *z.ub( mystruct( 4, 4 ) ) << endl; } And that is not a valid comparison operator. Please read . Thanks for your comment. A C++11 solution for problem 869E - The Untended Antiquity has successfully used this operator to order points and rectangles in a 2D grid, please read 31224413. Can you briefly explain the reason that this operator is invalid? Thanks in advance. The equivalence established by operator isn't transitive. For pairs a=(1, 5), b=(3,1), c=(4, 2) a=b a=c and b<c. It might work if if input is guaranteed not to contain such input or were simply lucky that with given inputs and STL implementation details output was suitable to rest of your code. So on cppreference page it says that transitivity for equiv function should be fullfilled where equiv(a,b) = !comp(a,b) && !comp(b,a) where comp is your comparison function. equiv equiv(a,b) !comp(a,b) && !comp(b,a) comp In your case it is not fullfilled. For example: What implications it may have during sort: For example: (3,2), (1,3), (2,1) is technically valid sorted sequence even though (2,1) < (3,2). If during our sort we compare only neighbour values then our sort would stop without any changes because sequence is already observably sorted. As for why it worked for the task — I don't know, it probably requires some additional investigation. But you better avoid using such operators since result is undefined. Thanks for the clarification. In the statement of the aforementioned problem, it is guaranteed that no two rectangle boundaries intersect. The solution uses a rectangle_tree class whose root node is the entire 2500 x 2500 area. The children of each node (u) in the rectangle_tree are the largest non-intersecting rectangles contained in (u). Insertions and deletions update the tree according to inserted/deleted rectangles. A walk query returns "Yes" if and only if the smallest rectangles containing the source and the target cells are the same. The Boolean operator are used to compare the corresponding upper-left and lower-right of rectangle pairs preserving the order of the corner cells, and to compare query points to those corner cells. Therefore, there is on luck in the success of the solution. It is just the case that those counterexamples mentioned in your clarification are guaranteed to never take place. The Boolean operator< is NOT used to sort an array of 2D points. It is just used to update the rectangle tree when a rectangle is inserted or deleted. Best Regards. I thought we were talking about comparator for STL functions. The requirements are what STL expects. If you only use it yourself it can do whatever you want even multipliciation. That doesn't mean it's always a good idea even more so if you give it as example of how to use it with map. Abusing operator overloading can easily cause misunderstandings like this when it is not obvious what operator X does for structure Y or it doesn't match conventions. Thanks for your precious comments. I agree with you that giving that example for sorting an array of structures was not a good choice; my apologies for the inconvenience, and my appreciation for your effort to clarify what confused you. If you check the submitted solution, you will find that the operators bool operator < ( const cell &c1, const cell &c2 ) { return c1.first < c2.first && c1.second < c2.second; } bool operator <= ( const cell &c1, const cell &c2 ) { return c1.first <= c2.first && c1.second <= c2.second; } are called from the rectangle_tree methods: bool contains( const rectangle_tree &r ) const { return first < r.first && r.second < second; } bool contains( const cell &c ) const { return first <= c && c <= second; } The former contains() function returns true when the rectangle object contains rectangle (r), and the latter contains() function returns true when the rectangle object contains the query point given by the cell (r). contains() The base class pair< cell, cell > is used to hold the upper-left and lower-right corners of the rectangle. The example for implementing bool operator< has been updated to comply with STL requirements. A word of gratitude is due to KarlisS and predelnik for their fruitful comments and efforts to elaborate the problem with the previous example. If x and y in the previous example are known to be positive integers <= 4294967295U (UINT_MAX), then it is possible to concatenate the two 32-bit unsigned integers into one 64-bit unsigned integer using c++ union as follows: #include <bits/stdc++.h> using namespace std; const unsigned one = 1U; const bool little_endian = *( (char *) ( &one ) ), big_endian = !little_endian; struct mystruct { union { unsigned u[ 2 ]; unsigned long long v; } data; mystruct( const unsigned x0 = 0, const unsigned y0 = 0 ) { data.u[ big_endian ] = x0, data.u[little_endian ] = y0; } bool operator < ( const mystruct &b ) const { return data.v < b.data.v; } void read() { scanf( "%u %u", data.u + big_endian, data.u + little_endian ); } void write() { printf( "%u %u", data.u[ big_endian ], data.u[ little_endian ] ); } }; struct myvector: public vector< mystruct > { void order() { sort( begin(), end() ); } void read( const unsigned n ) { mystruct a; for( unsigned i = 0; i < n; i++ ) a.read(), push_back( a ); } void write() { for( auto p: *this ) p.write(), putchar( '\n' ); } mystruct& lb( const mystruct &x ) { return *lower_bound( begin(), end(), x ); } mystruct& ub( const mystruct &x ) { return *upper_bound( begin(), end(), x ); } } z; int main() { unsigned n; scanf( "%u", &n ), z.read( n ), z.order(), z.write(), putchar( '\n' ); mystruct a; a.read(), z.lb( a ).write(), putchar( '\n' ); a.read(), z.ub( a ).write(), putchar( '\n' ); } The Boolean operator< compares the corresponding 64-bit numbers directly.
http://codeforces.com/blog/entry/55235
CC-MAIN-2018-30
refinedweb
1,396
65.62
Programming Reference/Librarys Question & Answer because of the inhumane working conditions at amazon in Germany, I will offer the Amazon app no more! (ARD Reportage) FILE Pointer (stream) shows on a given file The data type FILE is a structure that contains information about a file or specified data stream. It includes such information as a file descriptor, current position, status flags, and more. It is most often used as a pointer to a file type, as file I/O functions predominantly take pointers as parameters, not the structures themselves (see example below). #include <stdio.h> /* including standard library */ //#include <windows.h> /* uncomment this for Windows */ int main (void) { FILE *filestream; filestream=fopen("test.txt","r"); if (filestream == NULL) perror("cannot open file"); fclose(filestream); return 0; }
http://code-reference.com/c/keywords/file
CC-MAIN-2016-30
refinedweb
127
54.52
From: Sebastian Redl (sebastian.redl_at_[hidden]) Date: 2006-04-23 10:00:15 Tom Brinkman wrote: >Poor points. Getting this library thing ready by version 1.35 or any other >version is my last concern. I want it to be correct. Its not acceptable >that that this fairly large library will not get the full review that is >necessary for a boost library. The scope of this library is too diverse for >it that too occur in only a 10 day review period. It is very, very common >for libraries to need a second or even a third review. I think that this >library falls in that category of libraries that will need multiple reviews. > > So let's extend the review period if the library is too large to be reviewed within 10 days. But splitting it up makes little sense to me. >As this seems to be opinion that you share with one or two others, please >educate me. Why does the "property tree" require a parser. > Because, as I said, it is inappropriately named. The "property tree" library submitted is not just the data structure. It is, to me, a framework to load, store, transport and save simple hierarchical configuration (or similar) data. As such it is useful to me. As such I'm already using it. As such it needs the parsers. >I suspect that those of you who want the parsers included with this library >are just trying to sneek an XML parser through the review process without a >full review. Please tell me that I'm wrong on this point. > > You are wrong. I don't particularly care about the XML parser specifically. Replace it with a Java .properties parser, and I'll be just as happy. If ever I want an XML parser in Boost (and I want one, make no mistake), I don't want it some half-arsed thing that understands only a minor subset of XML. I want it a full-blown effort, providing generic pull- and push-parsers, a data structure complex enough that a DOM can be built on top (that could perhaps be a seperate effort, Boost.GenericTree), understanding namespaces and full DTD validation. Eventually, the library ought to evolve to have full Schema (and perhaps RelaxNG and Schematron) and XInclude support. In other words, an effort as complete as the parsers that come with Java. A parser on the level of PropertyTree's is so basic that I might as well replace those XML files with INFO or JSON. >This library will eventually get accepted, I'm fairly sure of that, possibly >even portions of the library will be approved this time around. What is >your rush to get this one through. Lets take our time and get it right. > > I have no rush to get it through. If the whole library is ready by 1.36 or 1.37 or whatever, that's fine with me. But I want to get it through as a whole. Sebastian Redl Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2006/04/103648.php
CC-MAIN-2022-40
refinedweb
524
76.72
Barcode Software vb.net create barcode image The 1990s 2000s in .NET Integrated qr barcode in .NET The 1990s 2000s Leslie doesn t realize that she represses her real feelings. She is highly emotional, but she lets her feelings build up and then tries to control them. Eventually, she has a volcanic eruption that surprises both herself and those around her. In addition, some of Leslie s feelings may be masking deeper emotions. For example, she becomes angry when she s actually hurt, and she becomes hurt when she s actually anxious. When Leslie is functioning at her normal moderate self-mastery level, an indirect challenge would be best. When she is functioning closer to high self-mastery, a direct challenge would have the most impact on her. using button microsoft excel to deploy barcode in asp.net web,windows application BusinessRefinery.com/barcode asp.net barcode reader sdk Using Barcode recognizer for tutorial .net vs 2010 Control to read, scan read, scan image in .net vs 2010 applications. BusinessRefinery.com/ bar code Below the menu items are the toolbar buttons or icons. The toolbar buttons allow you to quickly perform a common function within ASDM. Here is a description of the toolbar buttons: Home Displays the Home screen, which lets you view important information about your security appliance such as the status of your interfaces, the version of code you are running, licensing information, and performance information Configuration Displays the Configuration screen, which allows you to configure the features of the appliance use jasper barcode development to encode bar code for java width BusinessRefinery.com/ barcodes asp.net barcode generator open source using barcode implement for web.net control to generate, create barcodes image in web.net applications. output BusinessRefinery.com/ bar code 5 SRC IP = 201.201.201.2 DST IP = 200.200.200.1 using barcode generating for reportingservices class control to generate, create bar code image in reportingservices class applications. contact BusinessRefinery.com/ bar code using barcode generating for rdlc report files control to generate, create bar code image in rdlc report files applications. additional BusinessRefinery.com/barcode FUNCTION MACROS to access qr codes and qr bidimensional barcode data, size, image with .net barcode sdk namespace BusinessRefinery.com/QR use excel microsoft qr drawer to draw qr bidimensional barcode with excel microsoft address BusinessRefinery.com/QR Code 2d barcode Join Tables Using a Join Operation in t h e F R O M Clause add qr code to ssrs report use sql server reporting services qr code development to build quick response code in .net framework BusinessRefinery.com/QR Code JIS X 0510 winforms qr code use .net winforms qr maker to access quick response code with .net numeric BusinessRefinery.com/qr bidimensional barcode IPv6 ACLs qr code size property in .net BusinessRefinery.com/QRCode rdlc qr code using barcode implement for rdlc reports net control to generate, create qr barcode image in rdlc reports net applications. agent BusinessRefinery.com/QRCode use word code 128 creation to make code 128 barcode on word email BusinessRefinery.com/barcode 128a barcode 128 generator vb.net using way visual studio .net to get ansi/aim code 128 with asp.net web,windows application BusinessRefinery.com/code 128c Combining Eqs. (8.23) and (8.24) yields y y d sin q - 1 = + - cosq . r a a Squaring both sides of Eq. (8.25) and rearranging, we get sin 2 q - sin 2 q 2r 2 r r r d = + - cosq + a y a y ay crystal reports barcode 128 download using barcode generation for .net crystal report control to generate, create uss code 128 image in .net crystal report applications. images BusinessRefinery.com/Code-128 crystal reports pdf 417 use .net barcode pdf417 generating to develop pdf 417 in .net complete BusinessRefinery.com/pdf417 This page intentionally left blank rdlc pdf 417 using barcode printing for rdlc reports net control to generate, create pdf 417 image in rdlc reports net applications. protocol BusinessRefinery.com/barcode pdf417 using implementing word microsoft to make pdf-417 2d barcode in asp.net web,windows application BusinessRefinery.com/pdf417 2d barcode Do(n t) you want to . . . Are(n t) you interested in . . . Would(n t) it please you to . . . Do(n t) you want to go to the country Yes, I would. Are(n t) you interested in going to the movies No, I m not interested. Would(n t) it please you to go out Yes, it would. (No) le (te) parece . . . (No) le (te) interesa . . . (No) le (te) gustar a . . . (No) Le (Te) parece bien ir al campo generate, create code 3 of 9 protocol none in excel spreadsheets projects BusinessRefinery.com/Code 39 java data matrix generator generate, create barcode data matrix demo none in java projects BusinessRefinery.com/Data Matrix letter-spacing, text-align To center Inside Using the Built-in Function Objects Part I: (Continued) V0 1 2 LC Requests total/allowed/denied 0/0/0 Server timeouts/retries 0/0 Responses received 0 Response time average 60s/300s 0/0 seconds/request <--output omitted-->. Milestones 3. We do (a), (b), (c), (d). (a) Let u = cos x, du = sin x dx. Then the integral becomes (1 + u2 )2 2u du = (1 + u2 )3 + C. 3 Criticism of the GMPC and public acceptance issues has not really surfaced, in part, because every Malaysian citizen over the age of 12 has long been required to carry a paper identity card containing his thumbprints. By law, each Malaysian must carry his or her government-issued identity card. Moreover, the Constitution of Malaysia does not specifically recognize a right to privacy. More QR Code on .NET E l e c t r i c Ve h i c l e H i s t o r y in .NET Creator QR Code The Tesla Roadster . in .NET Generating qr-codes Buy New Buy Used Convert New in .NET Draw QRCode T h e B e s t E l e c t r i c Ve h i c l e f o r Yo u in .NET Print QR Code JIS X 0510 zxing barcode generator java example: Chassis and Design in .NET Encoder Quick Response Code in .NET Chassis and Design vb.net create barcode image: Aerodynamic Drag Force Defined in .NET Create qr barcode in .NET Aerodynamic Drag Force Defined Chassis and Design in .NET Display QR Code 2d barcode Drivetrain type Front wheel drive Rear wheel drive in .NET Integrate QR Code ISO/IEC18004 Calculation Overview in .NET Drawer QR Code ISO/IEC18004 Vehicle Speed in .NET Incoporate QR Code 2d barcode This page intentionally left blank in .NET Development QR Code JIS X 0510 vb.net create barcode image: DC Motors in General in .NET Print qr bidimensional barcode in .NET DC Motors in General Torque Characteristics in .NET Integrate QR Code ISO/IEC18004 AC Induction Motors in .NET Integrating QR Code JIS X 0510 zxing barcode generator java example: Tomorrow s Best EV Motor Solution in .NET Creation QR in .NET Tomorrow s Best EV Motor Solution Build Your Own Elec tric Vehicle in .NET Maker QR Code JIS X 0510 Zilla Models in .NET Maker QR Code JIS X 0510 Performance and torque ver sus an inter nal combustion engine (Cour tesy of T esla in .NET Compose QR Code 2d barcode Charging Chemical Reaction in .NET Creator qrcode Articles you may be interested print barcode in vb.net: *The AIA website is. The CMAA website is. in Software Generator Code 39 in Software *The AIA website is. The CMAA website is. free barcode font for vb.net: Laboratory Manual in Software Generating qr codes in Software Laboratory Manual 2d barcode vb.net: C++ from the Ground Up in .NET Drawer 2d Data Matrix barcode in .NET C++ from the Ground Up vb.net barcode generator source code: A Sampling of Methods De ned by SortedDictionary TK, TV in C# Encoder QR Code 2d barcode in C# A Sampling of Methods De ned by SortedDictionary TK, TV print barcode in vb.net: DESIGNING AN ENTERPRISE APPLICATION DELIVERY ARCHITECTURE in Software Attach qr barcode in Software DESIGNING AN ENTERPRISE APPLICATION DELIVERY ARCHITECTURE java barcode reader free download: Need for reliable communications in Software Generation qr codes in Software Need for reliable communications download barcode font for vb.net: Collectively referred to as the set of Ethernet service attributes. in Objective-C Access Data Matrix barcode in Objective-C Collectively referred to as the set of Ethernet service attributes. barcode font reporting services: Exploring the System Namespace in visual C# Build qr-codes in visual C# Exploring the System Namespace ssrs ean 13: + * / Addition Subtraction Multiplication Division in .net C# Draw QR Code 2d barcode in .net C# + * / Addition Subtraction Multiplication Division rdlc barcode: 7: Business Continuity and Disaster Recovery in Software Generation USS Code 39 in Software 7: Business Continuity and Disaster Recovery how to generate barcode in rdlc report: Making Sense of the COSO Cube and the COSO Pyramid in Software Paint 39 barcode in Software Making Sense of the COSO Cube and the COSO Pyramid barcode recognition .net open source: Parental Lock in Software Creation QRCode in Software Parental Lock barcode font in vb.net: Getting Started in Software Render qr barcode in Software Getting Started integrate barcode scanner into asp.net web application: Define the M e a n i n g of Data in Software Development QRCode in Software Define the M e a n i n g of Data 2d barcode vb.net: Virtual Functions and Polymorphism in .NET Get barcode data matrix in .NET Virtual Functions and Polymorphism generate qr code using excel: CAM DESIGN HANDBOOK in Software Drawer barcode data matrix in Software CAM DESIGN HANDBOOK barcode in vb.net source code: CERTIFICATION OBJECTIVE 16.02 in Objective-C Get QR Code ISO/IEC18004 in Objective-C CERTIFICATION OBJECTIVE 16.02 barcode printer vb.net: Global Variables in .NET Develop ECC200 in .NET Global Variables java api barcode scanner: Charge-Charge Forces in .NET Writer QR in .NET Charge-Charge Forces barcode reader using java source code: Biometrics in Software Encode barcode standards 128 in Software Biometrics
http://www.businessrefinery.com/yc3/443/31/
CC-MAIN-2022-40
refinedweb
1,699
58.89
NETWORKS CAME INTO EXISTENCE AS SOON as there was two of something: two cells, two animals and, obviously, two computers. While the overwhelming popularity of the Internet leads people to think of networks only in a computer context, a network exists anytime there is communication between two or more parties. The differences between various networks are matters of implementation, as the intent is the same: communication. Whether it is two people talking or two computers sharing information, a network exists. The implementations are defined by such aspects as medium and protocol. The network medium is the substance used to transmit the information; the protocol is the common system that defines how the information is transmitted. In this chapter, we discuss the types of networks, the methods for connecting networks, how network data is moved from network to network, and the protocols used on today’s popular networks. Network design, network administration, and routing algorithms are topics suitable for an entire book of their own, so out of necessity we’ll present only overviews here. With that in mind, let’s begin.Circuits vs. Packets In general, there are two basic types of network communications: circuit-switched and packet-switched. Circuit-switched networks are networks that use a dedicated link between two nodes, or points. Probably the most familiar example of a circuit-switched network is the legacy telephone system. If you wished to make a call from New York to Los Angeles, a circuit would be created between point A (New York) and point B (Los Angeles). This circuit would be dedicated—that is, there would be no other devices or nodes transmitting information on that network and the resources needed to make the call possible, such as copper wiring, modulators, and more would be used for your call and your call only. The only nodes transmitting would be the two parties on each end. One advantage of a circuit-switched network is the guaranteed capacity. Because the connection is dedicated, the two parties are guaranteed a certain amount of transmission capacity is available, even though that amount has an upper limit. A big disadvantage of circuit-switched networks, however, is cost. Dedicating resources to facilitate a single call across thousands of miles is a costly proposition, especially since the cost is incurred whether or not anything is transmitted. For example, consider making the same call to Los Angeles and getting an answering machine instead of the person you were trying to reach. On a circuit-switched network, the resources are committed to the network connection and the costs are incurred even though the only thing transmitted is a message of unavailability. A packet-switched network uses a different approach from a circuit-switched network. Commonly used to connect computers, a packet-switched network takes the information communicated on the network and breaks it into a series of packets, or pieces. These packets are then transmitted on a common network. Each packet consists of identification information as well as its share of the larger piece of information. The identification information on each packet allows a node on the network to determine whether the information is destined for it or the packet should be passed along to the next node in the chain. Once the packet arrives at its destination, the receiver uses the identification portion of the packet to reassemble the pieces and create the complete version of the original information. For example, consider copying a file from one computer in your office to another. On a packet-switched network, the file would be split into a number of packets. Each packet would have specific identification information as well as a portion of the file. The packets would be sent out onto the network, and once they arrived at their destination, they would be reassembled into the original file. Unlike circuit-switched networks, the big advantage of packet-switched networks is the ability to share resources. On a packet-switched network, many nodes can exist on the network, and all nodes can use the same network resources as all of the others, sharing in the cost. The disadvantage of packet-switched networks, however, is the inability to guarantee capacity. As more and more nodes sharing the resources try to communicate, the portion of the resources available to each node decreases. Despite their disadvantages, packet-switched networks have become the de facto standard whenever the term “network” is used. Recent developments in networking technologies have decreased the price point for capacity significantly, making a network where many nodes or machines can share the same resources cost-effective. For the purposes of discussion in this book, the word “network” will mean a packet-switched network.Internetworking A number of different technologies exist for creating networks between computers. The terms can be confusing and in many cases can mean different things depending on the context in which they’re used. The most common network technology is the concept of a local area network, or LAN. A LAN consists of a number of computers connected together on a network such that each can communicate with any of the others. A LAN typically takes the form of two or more computers joined together via a hub or switch, though in its simplest form two computers connected directly to each other can be called a LAN as well. When using a hub or switch, the ability to add computers to the network becomes trivial, requiring only the addition of another cable connecting the new node to the hub or switch. That’s the beauty of a packet-switched network, for if the network were circuit-switched, we would have to connect every node on the network to every other node, and then figure out a way for each node to determine which connection to use at any given time. LANs are great, and in many cases they can be all that’s needed to solve a particular problem. However, the advantages of a network really become apparent when you start to connect one network to another. This is called internetworking, and it forms the basis for one of the largest known networks: the Internet. Consider the following diagrams. Figure 1-1 shows a typical LAN. Figure 1-1. A single network You can see there are a number of computers, or nodes, connected to a common point. In networking parlance, this is known as a star configuration. This type of LAN can be found just about anywhere, from your home to your office, and it’s responsible for a significant portion of communication activity every day. But what happens if you want to connect one LAN to another? As shown in Figure 1-2, connecting two LANs together forms yet another network, this one consisting of two smaller networks connected together so that information can be shared not only between nodes on a particular LAN, but also between nodes on separate LANs via the larger network. Figure 1-2. Two connected networks Because the network is packet-switched, you can keep connecting networks together forever or until the total number of nodes on the network creates too much traffic and clogs the network. Past a certain point, however, more involved network technologies beyond the scope of this book are used to limit the traffic problems on interconnected networks and improve network efficiency. By using routers, network addressing schemes, and long-haul transmission technologies such as dense wavelength division multiplexing (DWDM) and long-haul network protocols such as asynchronous transfer mode (ATM), it becomes feasible to connect an unlimited number of LANs to each other and allow nodes on these LANs to communicate with nodes on remote networks as if they were on the same local network, limiting packet traffic problems and making network interconnection independent of the supporting long-distance systems and hardware. The key concept in linking networks together is that each local network takes advantage of its packet-switched nature to allow communication with any number of other networks without requiring a dedicated connection to each of those other networks.Ethernets Regardless of whether we’re talking about one network or hundreds of networks connected together, the most popular type of packet-switched network is the Ethernet. Developed 30 years ago by Xerox PARC and later standardized by Xerox, Intel, and Digital Equipment Corporation, Ethernets originally consisted of a single cable connecting the nodes on a network. As the Internet exploded, client-server computing became the norm, and more and more computers were linked together, a simpler, cheaper technology known as twisted pair gained acceptance. Using copper conductors much like traditional phone system wiring, twisted pair cabling made it even cheaper and easier to connect computers together in a LAN. A big advantage to twisted pair cabling is that, unlike early Ethernet cabling, a node can be added or removed from the network without causing transmission problems for the other nodes on the network. A more recent innovation is the concept of broadband. Typically used in connection with Internet access via cable TV systems, broadband works by multiplexing multiple network signals on one cable by assigning each network signal a unique frequency. The receivers at each node of the network are tuned to the correct frequency and receive communications on that frequency while ignoring communications on all the others. A number of alternatives to Ethernet for local area networking exist. Some of these include IBM’s Token Ring, ARCNet, and DECNet. You might encounter one of these technologies, as Linux supports all of them, but in general the most common is Ethernet.Ethernet Frames On your packet-switched Ethernet, each packet of data can be considered a frame. An Ethernet frame has a specific structure, though the length of the frame or packet is variable, with the minimum length being 64 bytes and the maximum length being 1518 bytes, although proprietary implementations can extend the upper limit to 4096 bytes or higher. A recent Ethernet specification called Jumbo Frames even allows frame sizes as high as 9000 bytes, and newer technologies such as version 6 of the Internet Protocol (discussed later) allow frames as large as 4GB. In practice, though, Ethernet frames use the traditional size in order to maintain compatibility between different architectures. Because the network is packet-based, each frame must contain a source address and destination address. In addition to the addresses, a typical frame contains a preamble, a type indicator, the data payload, and acyclic redundancy checksum (CRC). The preamble is 64 bits long and typically consists of alternating 0s and 1s to help network nodes synchronize transmissions. The type indicator is 16 bits long, and the CRC is 32 bits. The remaining bits in the packet consist of the actual packet data being sent (see Figure 1-3). Figure 1-3. An Ethernet frame The type field is used to identify the type of data being carried by the packet. Because Ethernet frames have this type indicator, they are known as self-identifying. The receiving node can use the type field to determine the data contained in the packet and take appropriate action. This allows the use of multiple protocols on the same node and the same network segment. If you wanted to create your own protocol, you could use a frame type that did not conflict with any others being used, and your network nodes could communicate freely without interrupting any of the existing communications. The CRC field is used by the receiving node to verify that the packet of data has been received intact. The sender computes the CRC value and adds it to the packet before sending the packet. On the receiving end, the receiver recalculates the CRC value for the packet and compares it to the value sent by the sender to confirm the packet was received intact.{mospagebreak title=Addressing}? For communication to occur, each node on the network must have its own address. This address must be unique, just as someone’s phone number is unique. For example, while two or more people might have 555-9999 as their phone number, only one person will have that phone number within a certain area code, and that area code will exist only once within a certain country code. This accomplishes two things: it ensures that within a certain scope each number is unique, and it allows each person with a phone to have a unique number.Ethernet Addresses Ethernets are no different. On an Ethernet, each node has its own address. This address must be unique to avoid conflicts between nodes. Because Ethernet resources are shared, every node on the network receives all of the communications on the network. It is up to each node to determine whether the communication it receives should be ignored or answered based on the destination address. It is important not to confuse an Ethernet address with a TCP/IP or Internet address, as they are not the same. Ethernet addresses are physical addresses tied directly to the hardware interfaces connected via the Ethernet cable running to each node. An Ethernet address is an integer with a size of 48 bits. Ethernet hardware manufacturers are assigned blocks of Ethernet addresses and assign a unique address to each hardware interface in sequence as they are manufactured. The Ethernet address space is managed by the Institute of Electrical and Electronics Engineers (IEEE). Assuming the hardware manufacturers don’t make a mistake, this addressing scheme ensures that every hardware device with an Ethernet interface can be addressed uniquely. Moving an Ethernet interface from one node to another or changing the Ethernet hardware interface on a node changes the Ethernet address for that node. Thus, Ethernet addresses are tied to the Ethernet device itself, not the node hosting the interface. If you purchase a network card at your local computer store, that network card has a unique Ethernet address on it that will remain the same no matter which computer has the card installed. Let’s look at an example using a computer running Linux. [user@host user]$ /sbin/ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:E0:29:5E:FC:BE inet addr:192.168.2.1 Bcast:192.168.2.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:35772 errors:0 dropped:0 overruns:0 frame:0 TX packets:24414 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:100 RX bytes:36335701 (34.6 Mb) TX bytes:3089090 (2.9 Mb) Interrupt:5 Base address:0x6000 Using the /sbin/ifconfig command, we can get a listing of the configura tion of our eth0 interface on our Linux machine. Your network interface might have a different name than eth0, which is fine. Just use the appropriate value, or use the –a option to ifconfig to get a listing of all of the configured interfaces if you don’t know the name of yours. The key part of the output, though, is the first line. Notice the parameter labeled HWaddr . In our example, it has a value of 00:E0:29:5E:FC:BE , which is the physical Ethernet address of this node. Remember that we said an Ethernet address is 48 bits. Our example address has six hex values. Each hex value has a maximum of 8 bits, or a value range from 00 to FF. But what does this tell us? As mentioned previously, each hardware manufacturer is assigned a 24-bit value by the IEEE. This 24-bit value (3 octets) must remain consistent across all hardware devices produced by this manufacturer. The manufacturer uses the remaining 3 octets as a sequential number to create a unique, 48-bit Ethernet address. Let’s see what we can find out about our address. Open a web browser and go to this address: . In the field provided, enter the first 3 octets of our example address, in this case 00-e0-29 (substitute a hyphen [-] for the colon [:]). Click Search, and you’ll see a reply that looks like this: 00-E0-29 (hex) STANDARD MICROSYSTEMS CORP. 00E029 (base 16) STANDARD MICROSYSTEMS CORP. 6 HUGHES IRVINE CA 92718 UNITED STATES That’s pretty descriptive. It tells us that the hardware manufacturer of our network interface is Standard Microsystems, also known as SMC. Using the same form, you can also search by company name. To illustrate how important it is that these numbers be managed, try searching with a value similar to 00-e0-29, such as 00-e0-27. Using 27, you’ll find that the manufacturer is Dux, Inc. Thus, as each manufacturer is creating their products, they’ll increase the second half of the Ethernet address sequentially to ensure that each device has a unique value. In our case, the second half of our address is 5E-FC-BE, which is our hardware interface’s unique identifier. If the results of your search don’t match the vendor of your network card, keep in mind that many companies resell products produced by another or subcontract their manufacturing to someone else. The Ethernet address can also take on two other special values. In addition to being the unique address of a single physical interface, it can be a broadcast address for the network itself as well as a multicast address. The broadcast address is reserved for sending to all nodes on a network simultaneously. Multicast addresses allow a limited form of broadcasting, where a subset of network nodes agrees to respond to the multicast address. The Ethernet address is also known as the MAC address. MAC stands for Media Access Control. Because our Ethernet is a shared network, only one node can “talk” at any one time using the network. Before a node transmits information, it first “listens” to the network to see if any other node is using the network. If so, it waits a randomly chosen amount of time and then tries to communicate again. If no other node is using the network, our node sends its message and awaits a reply. If two nodes “talk” at the same time, a collision occurs. Collisions on shared networks are normal and are handled by the network itself so as not to cause problems, provided the ratio of collisions to communications does not get too high. In the case of Ethernets, a collision rate higher than 60 percent is typically cause for concern. Each MAC address must be unique, so a node about to transmit can compare addresses to check whether another node is already transmitting. Thus, the MAC address (Ethernet address) helps control the collision rate and allows nodes to determine if the network is free to use.Gateways We’ve discussed that the Internet is a network built by physically connecting other networks. To connect our networks together, we use a special device called a gateway. Any Ethernet node can conceivably act as a gateway, though many do not. Gateways have two or more physical network interfaces and do a particular job, just as their name implies: they relay packets destined for other networks, and they receive packets destined for nodes on one of their own networks. Building on our earlier diagram, here’s how it looks when you connect two networks together with a gateway (see Figure 1-4). Figure 1-4. Two connected networks with a gateway Gateways can also be called routers, since they route packets from one network to another. If you consider that all networks are equal, then the notion of transmitting packets from one to the other becomes a little easier. No longer is it necessary for our network nodes to understand how to find every other node on remote networks. Maintaining that amount of ever-changing information on every computer connected to every network would be impossible. Instead, nodes on our local network only need to know the address of the gateway. That is, local nodes only need to know which node is the “exit” or “gate” to all other networks. The gateway takes on the task of correctly routing packets with foreign destinations to either the remote network itself or another gateway. For example, consider Figure 1-5, which shows three interconnected networks. Figure 1-5. Multiple networks, multiple gateways In this diagram, we have three networks: Red, Green, and Blue. There are two gateways, Foo and Bar. If a node on the Red network wants to send a packet to a node on the Green or Blue network, it does not need to keep track of the addresses on either network. It only needs to know that its gateway to any other network besides its own is Foo. The packets destined for the remote network are sent to Foo, which then determines whether the packet is destined for the Green network or the Blue network. If Green, the packet is sent to the appropriate node on the Green network. If Blue, however, the packet is sent to Bar, because Foo only knows about the Red and Green networks. Any packet for any other network needs to be sent to the next gateway, in this case Bar. This scenario is multiplied over and over and over in today’s network environment, and it significantly decreases the amount of information that each network node and gateway has to manage. Likewise, the reverse is true. When the receiver accepts the packet and replies, the same decision process occurs. The sender determines if the packet is destined for its own network or a remote network. If remote, then the packet is sent to the network’s gateway, and from there to either the receiver or yet another gateway. Thus, a gateway is a device that transmits packets from one network to another. Gateways seem simple, but as we’ve mentioned, asking one device to keep track of the information for every network that’s connected to every other network is impossible. So how do our gateways do their job without becoming hopelessly buried by information? The gateway rule is critical: network gateways route packets based on destination networks, not destination nodes. Thus, our gateways aren’t required to know how to reach every node on all the networks that might be connected. A particular set of networks might have thousands of nodes on it, but the gateway doesn’t need to keep track of all of them. Gateways only need to know which node on their own network will move packets from their network to some other network. Eventually, the packets reach their destination network. Since all devices on a particular network check all packets to see if the packets are meant for them, the packets sent by the gateway will automatically get picked up by the destination host without the sender needing to know any specifics except the address of its own gateway. In short, a node sending data needs to decide one thing: whether the data is destined for a local network node or remote network node. If local, the data is sent between the two nodes directly. If remote, the data is sent to the gateway, which in turn makes the same decision until the data eventually gets picked up by the recipient.{mospagebreak title=Internet Addresses} So far, you’ve seen that on a particular network, every device must have a unique address, and you can connect many networks together to form a larger network using gateways. Before a node on your network “talks,” it checks to see if anyone else is “talking,” and if not, it goes ahead with its communication. Your networks are interconnected, though! What happens if the nodes on your office LAN have to wait for some node on a LAN in Antarctica to finish talking before they can talk? Nothing would ever be sent—the result would be gridlock! How do you handle the need to identify a node with a unique address on interconnected networks while at the same time isolating your own network from every other network? Unless one of your nodes has a communication for another node on another network, there should be no communication between networks and no need for one to know of the existence of the other until the need to communicate exists. You handle the need for a unique address by assigning protocol addresses to physical addresses in conjunction with your gateways. In our scenario, these protocol addresses are known as Internet Protocol (IP) addresses. IP addresses are virtual. That is, there is no required correlation between a particular IP address and its physical interface. An IP address can be moved from one node to another at will, without requiring anything but a software configuration change, whereas changing a node’s physical address requires changing the network hardware. Thus, any node on an internet has both a physical Ethernet address (MAC address) and an IP address. Unlike an Ethernet address, an IP address is 32 bits long and consists of both a network identifier and a host identifier. The network identifier bits of the IP addresses for all nodes on a given network are the same. The common format for listing IP addresses is known as dotted quad notation because it divides the IP address into four parts (see Table 1-1). The network bits of an IP address are the leading octets, and the address space is divided into three classes: Class A, Class B, and Class C. Class A addresses use just 8 bits for the network portion, while Class B addresses use 16 bits and Class C addresses use 24 bits. Table 1-1. Internet Protocol Address Classes. You may have noticed that the table doesn’t include every possible value. This is because the octets 0 and 255 are reserved for special use. The octet 0 (all 0s) is the address of the network itself, while the octet 255 (all 1s) is called the broadcast address because it refers to all hosts on a network simultaneously. Thus, in our Class C example, the network address would be 192.168.2.0 , and the broadcast address would be 192.168.2.255 . Because every address range needs both a network address and a broadcast address, the number of usable addresses in a given range is always 2 less than the total. For example, you would expect that on a Class C network you could have 256 unique hosts, but you cannot have more than 254, since one address is needed for the network and another for the broadcast. In addition to the reserved network and broadcast addresses, a portion of each public address range has been set aside for private use. These address ranges can be used on internal networks without fear of conflicts. This helps alleviate the problem of address conflicts and shortages when public networks are connected together. The address ranges reserved for private use are shown in Table 1-2. Table 1-2. Internet Address Ranges Reserved for Private Use. Another IP address is considered special. This IP address is known as the loopback address, and it’s typically denoted as 127.0.0.1 . The loopback address is used to specify the local machine, also known as localhost. For example, if you were to open a connection to the address 127.0.0.1 , you would be opening a network connection to yourself. Thus, when using the loopback address, the sender is the receiver and vice versa. In fact, the entire 127.0.0.0 network is considered a reserved network for loopback use, though anything other than 127.0.0.1 is rarely used. Ports The final component of IP addressing is the port. Ports are virtual destination “points” and allow a node to conduct multiple network communications simultaneously. They also provide a standard way to designate the point where a node can send or receive information. Conceptually, think of ports as “doors” where information can come and go from a network node. On Linux systems, the number of ports is limited to 65,535, and many of the lower port numbers are reserved, such as port 80 for web servers, port 25 for sending mail, and port 23 for telnet servers. Ports are designated with a colon when describing an IP address and port pair. For example, the address 10.0.0.2:80 can be read as “port 80 on the address 10.0.0.2 ,” which would also mean “the web server on 10.0.0.2 ” since port 80 is typically used by and reserved for web services. Which port is used is up to the discretion of the developer, provided the ports are not already in use or reserved. A list of reserved ports and the names of the services that use them can be found on your Linux system in the /etc/services file, or at the Internet Assigned Numbers Authority (IANA) site listed here: . Table 1-3 contains a list of commonly used (and reserved) ports. Table 1-3. Commonly Used Ports Without ports, a network host would be allowed to provide only one network service at a time. By allowing the use of ports, a host can conceivably provide moer than 65,000 services at any time using a given IP address, assuming each service is offered on a different port. We cover using ports in practice when writing code first in Chapter 2 and then extensively in later chapters. This version of IP addressing is known as version 4, or IPv4. Because the number of available public addresses has been diminishing with the explosive growth of the Internet, a newer addressing scheme has been developed and is slowly being implemented. The new scheme is known as version 6, or IPv6. IPv6 addresses are 128 bits long instead of the traditional 32 bits, allowing for 2^96 more network nodes than IPv4 addresses. For more on IPv6, consult Appendix A. Network Byte Order One final note on IP addressing. Because each hardware manufacturer can develop its own hardware architecture, it becomes necessary to define a standard data representation for data. For example, some platforms store integers in what is known as Little Endian format, which means the lowest memory address contains the lowest order byte of the integer (remember that addresses are 32-bit integers). Other platforms store integers in what is known as Big Endian format, where the lowest memory address holds the highest order byte of the integer. Still other platforms can store integers in any number of ways. Without standardization, it becomes impossible to copy bytes from one machine to another directly, since doing so might change the value of the number. In an internet, packets can carry numbers such as the source address, destination address, and packet length. If those numbers were to be corrupted, network communications would fail. The Internet protocols solve this byte-order problem by defining a standard way of representing integers called network byte order that must be used by all nodes on the network when describing binary fields within packets. Each host platform makes the conversion from its local byte representation to the standard network byte order before sending a packet. On receipt of a packet, the conversion is reversed. Since the data payload within a packet often contains more than just numbers, it is not converted. The standard network byte order specifies that the most significant byte of an integer is sent first (Big Endian). From a developer’s perspective, each platform defines a set of conversion functions that can be used by an application to handle the conversion transparently, so it is not necessary to understand the intricacies of integer storage on each platform. These conversion functions, as well as many other standard network programming functions, are covered in Chapter 2.{mospagebreak title=Internet Protocol} So far, we’ve discussed building a network based on Ethernet. We’ve also discussed connecting two or more networks together via a gateway, called an internet, and we’ve covered the basic issues surrounding network and host addressing that allow network nodes to communicate with each other without conflicts. Yet how are all of these dissimilar networks expected to communicate efficiently without problems? What is it that lets one network look the same as any other network? A protocol exists that enables packet exchanges between networks as if the connected networks were a single, homogenous network. This protocol is known as the Internet Protocol, or IP, and was defined by RFC 791 in September 1981. These interconnected networks are known as internets, not to be confused with the Internet. The Internet is just one example of a global internet, albeit the most popular and the most well known. However, this does not preclude the existence of other internets that use the same technologies, such as IP. Because IP is hardware independent, it requires a hardware-independent method of addressing nodes on a network, which is the IP addressing system already discussed. In addition to being hardware independent and being a packet-switching technology, IP is also connectionless. IP performs three key functions in an internet: - It defines the basic unit of data transfer. - It performs the routing function used by gateways and routers to determine which path a packet will take. - It uses a set of rules that allow unreliable packet delivery. These rules determine how hosts and gateways on connected networks should handle packets, whether a packet can be discarded, and what should happen if things go wrong. Like the physical Ethernet frame that contains data as well as header information, the basic unit of packet transfer used by IP is called the Internet datagram. This datagram also consists of both a header portion and a data portion. Table 1-4 lists the format of the IP datagram header, along with the size of each field within the header. Table 1-4. IP Datagram Header Format Figure 5-6. IP datagram encapsulation in an Ethernet frame Most of these fields look pretty similar to the description of an Ethernet frame. What is the relationship between Ethernet frames, or packets, and IP datagrams? Remember that IP is hardware independent and that Ethernet is hardware. Thus, the IP datagram format must be independent of the Ethernet frame specification. In practice, the most efficient design would be to carry one IP datagram in every Ethernet frame. This concept of carrying a datagram inside a lower-level network frame is called . When an IP datagram is encapsulated within an Ethernet frame, it means the entire IP datagram, including header, is carried within the portion of the Ethernet frame, as shown in Figure 1-6. We’ve said that an Ethernet frame has a maximum size of 1500 octets. Yet an IP datagram has a maximum total length of 16 bits. A 16-bit number can represent a data size of up to 65,535 octets and could potentially be much higher. How do we cram 65,535 octets into a network frame that maxes out at 1500? By using a technique called fragmentation. Fragmentation is necessary in network protocols because the goal should be to hide the underlying hardware used to create the network. In our case, it’s Ethernet, but in practice it could be any number of different systems, past or future. It wouldn’t make sense to require changes to higher-level protocols every time someone invented a new hardware network technology, so to be universally compatible, the designers of IP incorporated the ability to split IP datagrams into fragments, assigning one fragment per network frame in the most efficient way possible. The IP protocol does not guarantee that large datagrams will be delivered without fragmentation, nor does it limit datagrams to some smaller size. The sending node determines the appropriate datagram size and performs the fragmentation, while the receiving node performs the reassembly. The reassembly is made possible by the fragment offset field of the datagram header, which tells the receiver where this particular fragment should go. When the datagram is fragmented, each fragment carries a header that is essentially a duplicate of the original datagram header, with some minor changes. The fragment’s header differs because, if there are more fragments, the “more fragments” flag is set, and the fragment offset will change on each fragment to prevent overwriting. Thus, an IP datagram of 4000 octets might get fragmented into three Ethernet frames, two containing the maximum data size and the third containing what’s left. On the receiving end, these fragments would be reassembled into the original datagram and would be processed. If our physical network had a smaller frame size than Ethernet, we would get more fragments, and less fragments (or no fragments at all) with a larger frame size. NOTE Gateways are responsible for converting packets from one frame size to another. Every network has a maximum transfer unit, or MTU. The MTU can be any size. If your packets are sent from a network with a large MTU value to a network with a smaller value (or vice versa), the gateway between the two is responsible for reformatting the packets to comply with each network’s specifications. For example, say you had a gateway with an Ethernet interface and a Token Ring interface. The MTU on one network is 1500 octets, while the MTU on the Token Ring network might be larger or smaller. It is the gateway’s responsibility to reformat and fragment the packets again when moving from one network to another. The downside to this is that once fragmented to accommodate the smaller MTU, the packets aren’t reassembled until they reach their destination. Thus, the receiving node will receive datagrams that are fragmented according to the network with the smallest MTU used in the transfer. This can be somewhat inefficient, since after traversing a network with a small MTU, the packets might traverse a network with a much larger MTU without being reformatted to take advantage of the larger frame size. This minor inefficiency is a good trade-off, however, because the gateways don’t need to store or rebuild packet fragments, and the packets can be sent using the best path without concern for reassembly problems for the destination node.{mospagebreak title=Protocol Layering} So far we’ve discussed the underlying physical hardware of a network, and in the case of an internet, the protocol used to ensure compatibility between different networks. More than one protocol exists, however. For example, take the acronym “TCP/IP.” You already know what “IP” stands for: Internet Protocol. But what about “TCP”? What about other protocols we use on our networks, such as HTTP or FTP? How do these protocols relate to each other if they’re not all the same? It’s practically impossible to create a single protocol that can handle every issue that might be encountered on a network. Consider security, packet loss, hardware failure, network congestion, and data corruption. These issues and more need to be addressed in any networked system, but it can’t be done with just a single “super” protocol. The solution, then, is to develop a system in which complementary protocols, each handling a specific task, work together in a standardized fashion. This solution is known as protocol layering. Imagine the different protocols involved in network communications stacked on top of each other in layers. This is also known as a protocol stack or stack. Each layer in a stack takes responsibility for a particular aspect of sending and receiving information on a network, with each layer in the stack working in concert with the other layers (see Figure 1-7). Figure 1-7. Protocol layers As shown in Figure 1-7, sending information to another computer means sending the information “down” through a stack, over the network, and then “up” through the stack on the receiving node. When the receiving node wishes to send a response, the roles are reversed: it becomes a sender and the process is repeated. Each layer on each node is the same. That is, the protocol in layer 3 on the sender is the same protocol in layer 3 on the receiver. Thus, a protocol layer is designed so that layer n at the destination receives essentially the same datagram or packet sent by layer n at the source. We say “essentially” because datagrams have components like time to live fields that will be changed by each node involved in the transfer, even though the core data payload should remain identical from sender to receiver.Protocol Layer Models The dominating standard for a protocol layer is from the International Organization for Standardization (ISO) and is known as the Open Systems Interconnection reference model, or simply the OSI model. The OSI model describes seven specific layers: Application, Presentation, Session, Transport, Network, Data Link, and Physical Hardware. A description of each layer is shown in Table 1-5. Table 1-5. OSI Seven-Layer Reference Model Figure 1-8 shows the result of applying the OSI model to our earlier layer diagram. Figure 1-8. The OSI model In practice, though, the typical protocol stack found in most networked environments today is known as a TCP/IP stack and, while perfectly compatible with the OSI model, it is conceptually different. The “TCP” in TCP/IP means Transmission Control Protocol and will be discussed later in this chapter. Just by the name alone, you can see that today’s networks require multiple protocols working together to function. In a TCP/IP environment, the network transport is relatively simple, while the nodes on the network are relatively complex. TCP/IP requires all hosts to involve themselves in almost every network function, unlike some other networking protocols. TCP/IP hosts are responsible for end-to-end error checking and recovery, and also make routing decisions since they must choose the appropriate gateway when sending packets. Using our OSI diagram as a basis, a corresponding diagram describing a TCP/IP stack would look like Figure 1-9. Figure 1-9. The TCP/IP layer model This diagram shows a TCP/IP stack as having four layers versus the seven layers in an OSI model. There are fewer layers because the TCP/IP model doesn’t need to describe requirements that are needed by older networking protocols like X.25, such as the Session layer. Looking at our TCP/IP stack, we can see that Ethernet is our Network Interface, IP is our Internet layer, and TCP is our Transport layer. The Application layer, then, consists of the applications that use the network, such as a web browser, file transfer client, or other network-enabled applications. There are two boundaries in our TCP/IP stack that describe the division of information among the application, the operating system, and the network. These boundaries correspond directly to the addressing schemes already discussed. In the Application layer, the application needs to know nothing other than the IP address (and port) of the receiver. Specifics such as datagram fragmentation, checksum calculations, and delivery verification are handled in the operating system by the Transport and Internet layers. Once the packets move from the Internet layer to the Network layer, only physical addresses are used. At first it would seem like a lookup must be performed to get the physical address of the receiver when starting communications, but this would be incorrect. Remembering the gateway rule, the only physical address that needs to be known by the sender is the physical address of the destination if the destination is on the same network, or the physical address of the gateway if the destination is on a remote network.{mospagebreak title=User Datagram Protocol} At the Internet layer in our TCP/IP protocol stack, the only information available is the address of the remote node. No other information is available to the protocol, and none is needed. However, without additional information like a port number, your receiving node is limited to conducting a single network communication at any one time. Since modern operating systems allow multiple applications to run simultaneously, you must be able to address multiple applications on the receiving node simultaneously, instead of just one. If you consider that each networked application can “listen” on one or more ports, you can see that by using an IP address and a port, you can communicate with multiple applications simultaneously, up to any limits imposed by the operating system and protocol stack. In the TCP/IP protocol stack, there are two protocols that provide a mechanism that allows applications to communicate with other applications using ports. One is the Transmission Control Protocol (TCP), which we will discuss in the next section, and the other is the User Datagram Protocol (UDP). UDP makes no guarantee of packet delivery. UDP datagrams can be lost, can arrive out of sequence, can be copied many times, and can be sent faster than the receiving node can process them. Thus, an application that uses UDP takes full responsibility for controlling message loss, reliability, sequencing, and loss of connection. This can be both an advantage and a disadvantage to developers, for while UDP is a lightweight protocol that can be used quickly, the additional application overhead needed to thoroughly manage packet transfer is often overlooked or poorly implemented. TIP When using UDP for an application, make sure to thoroughly test your applications in real environments beyond a low-latency LAN. Many developers choose UDP and test in a LAN environment, only to find their applications are unusable when used over a larger TCP/IP network with higher latencies. UDP datagrams have a simple format. Like other datagrams, UDP datagrams consist of a header and a data payload. The header is divided into four fields, each 16 bits in size. These fields specify the source port, the destination port, the length of the datagram, and a checksum. Following the header is the data area, as shown in Figure 1-10. Figure 1-10. The UDP datagram format The source port is optional. If used, it specifies the port to which replies should be sent. If unused, it should be set to zero. The length field is the total number of octets in the datagram itself, header and data. The minimum value for length is 8, which is the length of the header by itself. The checksum value is also optional, and if unused should be set to zero. Even though it’s optional, however, it should be used, since IP doesn’t compute a checksum on the data portion of its own datagram. Thus, without a UDP checksum, there’s no other way to check for header integrity on the receiving node. To compute the checksum, UDP uses a pseudo-header, which is prepended to the datagram, followed by an octet of zeros, which is appended, to get an exact multiple of 16 bits. The entire object, pseudo-header and all, is then used to compute the checksum. The pseudo-header format is shown in Figure 1-11. Figure 1-11. The UDP pseudo-header The octet used for padding is not transmitted with the UDP datagram, nor is the pseudo-header, and neither is counted when computing the length of the datagram. A number of network services use UDP and have reserved ports. A list of some of the more popular services is shown in Table 1-6. Table 1-6. Popular UDP Services Transmission Control Protocol The main difference between UDP and TCP is that, like IP, UDP provides no guarantee of delivery and does not use any method to ensure datagrams are received in a certain order or are transmitted at a certain rate. TCP, on the other hand, provides a mechanism known as reliable stream delivery. Reliable stream delivery guarantees delivery of a stream of information from one network node to another without duplication or data loss. TCP has a number of features that describe the interface between it and the application programs that use it: - Virtual circuit: Using TCP is much like making a phone call. The sender requests a connection with the receiver. Both ends negotiate the parameters of the connection and agree on various details defining the connection. Once the connection is finalized, the applications are allowed to proceed. As far as the applications are concerned, a dedicated, reliable connection exists between the sender and the receiver, but this is an illusion. The underlying transfer mechanism is IP, which provides no delivery guarantee, but the applications are removed from dealing with IP by the TCP layer. - Buffered transfer: The TCP layer, independent of the application, determines the optimal way to package the data being sent, using whatever packet sizes are appropriate. To increase efficiency and decrease network traffic, TCP typically waits, if possible, until it has a relatively large amount of data to send before sending the packet, even if the application is generating data 1 byte at a time. The receiving TCP layer delivers data to the receiving application exactly the way it was sent, so a buffer may exist at each end, independent of the application. - Stream orientation: The receiving node delivers data to the receiving application in exactly the same sequence as it was sent. - Full duplex: Connections provided by TCP over IP are full duplex, which means that data can be transmitted in both directions simultaneously via two independent packet streams. The streams can be used to transfer data or to send control information or commands back to the sender, and either stream can be terminated without harming the other. - Unstructured stream: TCP does not guarantee the structure of the data stream, even though delivery is guaranteed. For example, TCP does not honor markers that might exist in a record set sent to or from a database. It is up to the applications to determine stream content and assemble or disassemble the stream accordingly on each end of the connection. Applications do this by buffering incoming packets when necessary and assembling them in an order that the applications recognize. The method that TCP uses to guarantee reliable delivery can be described as confirmation and retransmission. The sender keeps track of every packet that is sent and waits for confirmation of successful delivery from the receiver before sending the next packet. The sender also sets an internal timer when each packet is sent and automatically resends the packet should the timer expire before getting confirmation from the receiver. TCP uses a sequence number to determine whether every packet has been received. This sequence number is sent on the confirmation message as well as the packet itself, allowing the sender to match confirmations to packets sent, in case network delays cause the transmission of unnecessary duplicates. Even with full duplex, though, having to wait for a confirmation on every packet before sending the next can be horribly slow. TCP solves this by using a mechanism called a sliding window. The easiest way to imagine a TCP sliding window is to consider a number of packets that needs to be sent. TCP considers a certain number of packets to be a window and transmits all packets in that window without waiting for confirmation on each one. Once the confirmation is received for the first packet in the window, the window “slides” to contain the next packet to be sent, and it is sent. For example, if the window size was 8, then packets 1 through 8 would be sent. When the confirmation for packet 1 was received, the window would “slide” so that it covered packets 2 through 9, and the ninth packet would be sent. A packet that has been transmitted without confirmation is called an unacknowledged packet. Thus, the total number of unacknowledged packets allowed is equal to the window size. The advantage to a sliding window protocol is that it keeps the network saturated with packets as much as possible, minimizing the time spent waiting for confirmation. The window is matched on the receiving end, so that the receiver “slides” its own window according to which confirmation messages have been sent. We noted earlier that TCP connections use what is known as a virtual circuit. Taking that abstraction further, TCP defines connections as a pair of endpoints, each endpoint consisting of an integer pair, consisting of the 32-bit integer IP address and the 16-bit integer port number. This is important, because by defining an endpoint as an integer pair, a TCP service on a given port number can be used by multiple connections at the same time. For example, even though a mail server has only one port 25 on which to receive mail, each sender making a connection offers a different integer pair because the source IP address and source port are different, allowing multiple concurrent connections on the same receiving port. The TCP datagram is also known as a segment. Segments do all of the work: establishing and closing connections, advertising window sizes, transferring data, and sending acknowledgments. Figure 1-12 shows a diagram of a TCP segment. Figure 1-12. The TCP segment format As with other datagrams, a TCP segment consists of two parts: header and data. A description of each header field is listed in Table 1-7. Table 1-7. TCP Header Fields. The acknowledgment number, on the other hand, is used by the sender to identify which acknowledgment the sender expects to receive next. Thus, the sequence number refers to the byte stream flowing in the same direction as the segment, while the acknowledgment number refers to the byte stream flowing in the opposite direction as the segment. This two-way synchronization system helps ensure that both connection endpoints are receiving the bytes they expect to receive and that no data is lost in transit. The code bits field is a special header field used to define the purpose of this particular segment. These bits instruct the endpoint how to interpret other fields in the header. Each bit can be either 0 or 1, and they’re counted from left to right. A value of 111111, then, means that all options are “on.” Table 1-8 contains a list and description of the possible values for the code bits field. Table 1-8. Possible Values for Code Bits Header Field Even though TCP is a stream-oriented protocol, situations arise when data must be transmitted out of band. Out-of-band data is sent so that the application at the other end of the connection processes it immediately instead of waiting to complete the entire stream. This is where the urgent header field comes into play. Consider a connection a user wishes to abort, such as a file transfer that occurs slower than expected. For the abort signal to be processed, the signal must be sent in a segment out of band. Otherwise, the abort signal would not be processed until the file transfer was complete. By sending the segment marked urgent, the receiver is instructed to handle the segment immediately. {mospagebreak title=The Client-Server Model} Let’s recap. We’ve discussed circuits versus packets; the concept of connecting one or many networks together via gateways; physical addresses; virtual addresses; and the IP, UDP, and TCP protocols. We’ve also used the terms “sender,” “source,” “receiver,” and “destination.” These terms can be confusing, because as you’ve seen already, a TCP connection is a connection of equals, and in a virtual circuit, the roles of sender, source, receiver, and destination are interchangeable depending on what sort of data is being sent. Regardless of which term is used, the key concept to remember is that applications are present at both endpoints of a TCP connection. Without compatible applications at both ends, the data sent doesn’t end up anywhere, nor can it be processed and utilized. Nevertheless, changing the terminology between “source” and “destination” to describe every communication between two endpoints can be pretty confusing. A better model is to designate roles for each endpoint for the duration of the communication. The model of interaction on a TCP connection, then, is known as the client-server model. In the client-server model, the term “server” describes the application that offers a service that can be utilized by any other application over the network. Servers accept connections over a network, perform their service, and respond with the result. The simplest servers are those that accept a single packet and respond with a single packet, though in many cases servers are more complex. Some features common to servers include the ability to accept more than one request at a time (multiple connections) and the ability to service requests independently of other operating system processes such as user sessions. A “client” is the application sending the request to the server and waiting for the response. Client applications typically make only one request to a particular server at any given time, though there is no restriction preventing them from making simultaneous requests, or even multiple requests to different servers. There are many different types of client-server applications. The simplest of them use UDP to communicate, while others use TCP/IP with higher-level application protocols such as File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), and Simple Mail Transfer Protocol (SMTP). The specifics of a client-server system are fairly simple. On the server, the application is started, after which it negotiates with the operating system for permission to use a particular port number for accepting requests. Assuming that the application is allowed to start on the given port, it begins to listen for incoming requests. The specifics of how to write an application that will listen on a port are covered in Chapter 2. Once listening, the server executes an endless loop until stopped: receive request, process request, assemble response, send response, repeat. In the process, the server reverses the source and destination addresses and port numbers, so that the server knows where to send the response and the client also knows that the server has responded. The client, unlike the server, typically stops sending requests once the server has responded. At this point, the client itself may terminate, or it may simply go into a wait state until another request must be sent. Ports are handled differently on clients than on servers, however. While the server typically uses a reserved port, clients do not. This is because every client must know how to reach the server, but the server does not need to know in advance how to reach every client, since that information is contained in the packets it receives from the client, in the source address and source port fields. This allows clients to use any port they wish as their endpoint in the TCP connection with the server. For example, a web page server is usually found on port 80. Even though the client must send its requests to the server’s port 80, it can use any available port as its own endpoint, such as port number 9999, 12345, 64400, or anything in between, as long as it isn’t one of the reserved ports mentioned earlier or a port already in use by another application. Thus, the two endpoints involved in a web page request might be 192.168.2.1:80 for the server endpoint and 10.0.0.4:11908 for the client endpoint. The main requirement for a client is that it knows, through configuration or some other method, the address and port of the server, since all other information needed will be sent within the packets.{mospagebreak title=The Domain Name System} We’ve discussed how it isn’t necessary for every node on every network to store information about how to reach all of the other nodes on the network. Because of the gateway rule, a node only needs to know how to reach nodes on its own network or a gateway. With IP addresses 32 bits long, there are plenty of addresses to go around (and when we run out, there’s always IPv6, which is covered in Appendix A). We still have a problem, though. While computers can take advantage of the gateway rule, people can’t. It’s not enough to instruct our computers to make a connection—we have to specify the other end of the connection, even if we don’t have to specify exactly how our packets will get there. We still need a system people can use to point computers to the other endpoint of our connections, without requiring them to remember nearly endless lists of numbers. A system exists that lets us assign easily remembered names to their corresponding IP addresses, without having to remember the addresses themselves. This system is called the Domain Name System (DNS). You can think of DNS as a type of phone book for computers. Rather than remember the addresses of every network node you may want to reach, you instead can assign meaningful names to those nodes, and use DNS to translate the names that people like to use into the numbers that computers must use. To contact the other computer, your computer performs a DNS lookup, the result of which is the address of the other computer. Once the address is known, a connection can be made. Performing a DNS lookup is much like using a phone book. You know the name of the other computer, and you want to find the number, or address. You “look up” the name, and the phone book tells you the number that has been assigned to that name. From there, you make your call, or in our case, your connection. How often do new phone books come out? What happens if someone changes his or her phone number before the new phone book is printed? Just like people change their phone numbers, computers change their addresses all the time. Some, in the case of dial-up networks, might change their addresses every day, or even several times a day. If our name-to-number system required that we all pass a new phone book around every day, our networks would come to a standstill overnight, since managing such a database of names and numbers and keeping it current for the number of computers connected together on today’s networks is impossible. DNS was designed to be easily updateable, redundant, efficient and, above all, distributed, by using a hierarchical format and a method of delegation. Just like the gateway rule, a DNS server isn’t required to know the names and addresses of every node on our networks. Instead, it’s only necessary for a DNS server to know the names and addresses of the nodes it’s managing, and the names and addresses of the authoritative servers in the rest of the hierarchy. Thus, a particular DNS server is delegated the responsibility for a particular set of addresses and is given the names and addresses of other DNS servers that it can use if it can’t resolve the name using its own information. The hierarchical nature of DNS can be seen in the names commonly used on the Internet. You often hear the phrase dot-com and routinely use domain names when making connections. The first level of our hierarchy contains what are known as the top-level domains (TLDs). Table 1-9 contains a list of the more popular TLDs and their purpose. Table 1-9. Popular Top-Level Domains. A registrar is responsible for managing domain registrations. This means keeping them current, including accepting new registration requests as well as expiring those domains that are no longer valid. An example of a domain would be “apress.com”. Domain names are read right to left, with the leftmost name typically being the host name or name of a specific network node. Thus, a name such as would be translated as “the network node known as www within the domain apress within the TLD .com.” Another node within the same domain might have a name such as www2.apress.com . The actual names used are up to the discretion of the owner, with the following restrictions: - Names are restricted to letters and numbers, as well as a hyphen. - Names cannot begin or end with a hyphen. - Not including the TLD, names cannot exceed 63 characters. The TLDs are managed by special domain name servers known as root servers. The root servers are special servers set up in redundant fashion and spread out across the Internet. The root servers are updated twice daily by the registrars. As of this writing, there are 13 root servers. You can see a list of their names, and the organizations responsible for them, at . Even though there are 13 root servers, there are actually more than 13 physical servers. The root servers are typically set up redundantly in diverse locations to spread out the load, and in general the closest one to the node making the request is the one that answers the request. Each root server has a list of the active domains and the name servers that are responsible for those domains. Remember that DNS is hierarchical and delegated. The root servers have a list of subordinate domain name servers, each one responsible for one or many domains such as apress.com or yahoo.com or google.com . Thus, the root servers for .com have a list of the servers handling a particular domain. A request for that particular name is handled by the server responsible, not by the root servers. Further delegation is possible, because a company or other organization can have its own domain name servers. Let’s look at an example. You would like to visit the Apress website to download the source code used in this book. Putting everything we’ve discussed so far in this chapter together, that means your web browser is the client application, and the web server at Apress is the server application. You know that to make a successful connection, you need four pieces of information: source address, source port, destination address, and destination port. You know from your list of reserved ports that web servers are typically available on port 80, and you know that your client application can use any port it likes as the source port for the request, as long as the port isn’t already in use on your own computer. You also know your own address. Thus, the only thing you need to determine before making your TCP/IP connection and web page request is the address of the web server that is accepting requests for the Apress website. To get the address, you’ll perform a DNS lookup on . Note that the lookup happens transparently to the user and is performed by the client application, or the application “making the call.” The goal is to translate the name into an address that your web browser can use to make its network connection. In this case, the name lookup happens in a series of steps: - The browser queries one of the root servers for .com and asks for the IP addresses of the domain name servers (there are usually two, a primary and a backup) managing the domain named “apress”. - The root server consults its database for a name matching “apress” and, if found, replies with the list of IP addresses of the domain name servers delegated to handle those requests. - The browser queries one of the servers in that list and asks it for the IP address of the network node with the name of “www”. - The domain name server managing the domain named “apress” consults its database for the name “www” and, if found, returns the IP address associated with that name. - If the browser receives an answer from the domain name server with the IP address for , the browser makes a connection to port 80 at that IP address and requests the web page. The domain name system is transparent and public. You can make DNS queries in a number of different ways, from using the host and whois commands on your Linux system to using any of the web-based query sites. Let’s walk through the query process for apress.com that we just described, using the command line. To query the root name servers for information, you use whois : [user@host user]$ whois apress.com After the whois command executes, you’ll see output that looks like this: Domain Name: APRESS.CO M Registrar: NETWORK SOLUTIONS, INC. Whois Server: whois.networksolutions.com Referral URL: Name Server: AUTH111.NS.UU.NET Name Server: AUTH120.NS.UU.NET Status: ACTIVE Updated Date: 19-apr-2002 Creation Date: 23-feb-1998 Expiration Date: 22-feb-2007 Registrant: Apress (APRESS-DOM) 2560 Ninth Street Suite 219 Berkeley, CA 94710 US Domain Name: APRESS.COM Administrative Contact, Technical Contact: Apress (23576335O) [email protected] 2560 Ninth Street Suite 219 Berkeley, CA 94710 US 510-549-5930 fax: 123 123 1234 Record expires on 22-Feb-2007. Record created on 23-Feb-1998. Database last updated on 18-Jan-2004 22:43:05 EST. Domain servers in listed order: AUTH111.NS.UU.NET 198.6.1.115 AUTH120.NS.UU.NET 198.6.1.154 The information is self-explanatory. You see that the registrar used to regis ter the domain name is Network Solutions, and you see that the domain was registered in 1998 and is paid up through 2007. This is the information held at the TLD level—it still doesn’t tell you the IP address of the web server, which is what you need. The information you do have, though, includes the names and IP addresses of the name servers delegated to handle further information for apress.com, namely auth111.ns.uu.net and auth120.ns.uu. net , with addresses of 198.6.1.115 and 198.6.1.154 , respectively. Either one of those name servers can help you find the address of the web server. The next step is to use the host command to make a specific request of one of the name servers. You want the IP address of the machine with the name : [user@host user]$ host auth111.ns.uu.net Using domain server: Name: auth111.ns.uu.net Address: 198.6.1.115#53 Aliases: has address 65.215.221.149 The host command takes two parameters: the name you want to translate into an IP address and the name (or address) of the server you want to use to make the translation. Using one of the name servers returned by the whois query, you ask for the IP address of the web server, and the name server responds with the IP address 65.215.221.149 . At this point, your web browser would have the four pieces of information it needs to make a successful TCP/IP connection. Incidentally, you can see the port information for DNS in the name server’s reply shown previously. Note the #53 tacked onto the end of the name server’s IP address. As shown earlier in Table 1-3, the port used by DNS is port 53. The host command can also be used without specifying the name server as a parameter. If you just use the name that you want to translate, you’ll get an abbreviated response with just the IP address, without the other information. You can use the host command to make all sorts of domain name queries by using command-line parameters such as –t for type. For example, using a type of “MX” will return the IP addresses of the machines handling mail for a given domain name. Using a type of “NS” will return an abbreviated version of the whois information, listing the name servers themselves. Let’s see which machines handle mail and name serving for linux.org : [user@host user]$ host -t mx linux.org linux.org mail is handled by 10 mail.linux.org. [user@host user]$ host -t ns linux.org linux.org name server ns.invlogic.com. linux.org name server ns0.aitcom.net. The two queries tell you that mail for addresses in the linux.org domain is handled by a machine named mail.linux.org . Likewise, the name servers for linux.org are listed. If you wanted to send mail to someone at linux.org , you would use the name server information to resolve the name mail.linux.org into an IP address and make your connection from there. A list of common DNS record types and their purpose is shown in Table 1-10. Table 1-10. Common DNS Record Types Table 1-10. Common DNS Record Types (continued) As you can see, DNS information is public information, and you can easily obtain it once you know what to look for and which commands to use. On your Linux system, use the command to get more information on and . Older systems use a utility called , which performs essentially the same functions as It’s also possible to have private DNS information, since any Linux system is capable of acting as a name server. Many companies and organizations use both private, or internal, DNS and public, or external, DNS. Internal DNS is used for those machines that aren’t available to the public. Summary In this chapter, we discussed the basic ingredients for today’s popular networking technologies. Here’s a summary of what we covered: - In general, networks are either packet-switched or circuit-switched, with the Internet being an example of a packet-switched network. - All networks need a common, physical medium to use for communications, the most popular of which is Ethernet. Ethernet uses a system of frames containing header and data portions. - Networks require the use of addressing so that one network node can find another network node. Addressing takes different forms, from MAC addresses used to identify physical network hardware interfaces to IP addresses used to identify virtual software addresses used by the TCP and IP network protocols. - The gateway rule means that a node does not need to know how to reach every other node on a network. It only needs to know how to reach the nodes on its own network, and how to reach the gateway between its own network and all other networks. - Using a system of protocol layering and encapsulation, IP and TCP “wrap” and “unwrap” each packet of header and data information as it travels up or down the protocol stack at each endpoint of the connection. - TCP/IP networks use the client-server model, in which the source of the communication is known as the client, and the server is the destination. The server is the network node providing services consumed by the client. Depending on whether the communication is a request or a response, the roles of client and server may change back and forth. - Because people find it easier to remember names instead of numbers, networks use name translation systems to translate familiar names into the actual IP addresses of the other network nodes. The most popular naming system is the Domain Name System (DNS), which is a collaborative, distributed, hierarchical system of managing namespaces where specific responsibilities for certain domains are delegated from root servers to subordinate servers.
http://www.devshed.com/c/a/Administration/Fundamentals-of-Linux-Networking/
CC-MAIN-2015-14
refinedweb
12,565
52.09
What is the best way to switch namespace from in-memory mode to SSD mode given that we have a live system? Is it possible that the same namespace is configured to operate in-memory on the 1 node and on SSD on another one? Change namespace from in-memory to SSD bayoukingpin #2 Hi, Based on what you asked, I take it you have two nodes? Hopefully you have replication factor of 2 on (default), and to switch namespace from in-memory mode to SSD. You will have to bring down one of the node and wait for migration to complete. After migration has completed you will want to change the in-memory to SSD. After that bring up the node and you should be fine. Also you can configured Aerospike to operate in-memory on one node and SSD on the other, which is fairly common from what we see. Hope this helps, if you need anything else let us know.
https://discuss.aerospike.com/t/change-namespace-from-in-memory-to-ssd/149
CC-MAIN-2019-09
refinedweb
163
70.73
Centroid area calculation incorrect Bug Description The Geom::centroid function can be used to calculate the area of a Piecewise. The following code creates a bounded region with a hole in it. In this case the area returned is the sum of the areas of the hole and the bounding region. I would expect that the area returned was the area of the bounding region less the area of any holes. In this particular case I've used circles to demonstrate as their area is easily calculated at pi*r^2. Note, I've used the auto keyword from C++11 so compile with -std=c++11 #include <iostream> #include <2geom/path.h> #include <2geom/circle.h> #include <2geom/piecewise.h> #include <2geom/d2.h> #include <2geom/sbasis.h> #include <2geom/ #include <2geom/ #include <2geom/ int main() { // Circle with radius 10 Geom::Circle c(0, 0, 10); Geom: Geom: pw.push_seg(d2); double area; Geom::Point p; Geom: // area should be around 3.14159 * 10 * 10 std::cout << "area: " << area << std::endl; Geom::PathVector container = Geom::path_ // Circle with radius 5 Geom::Circle c2(0, 0, 5); Geom: cd.push_ Geom::PathVector contained = Geom::path_ Geom: Geom::PathVector exclusion = pig.getAminusB(); auto ex = paths_to_ Geom: // area should be around (3.14159 * 10 * 10) - (3.14159 * 5 * 5) std::cout << "area: " << area << std::endl; return 0; } Closing in favor of https:/ /gitlab. com/inkscape/ lib2geom/ issues/ 18
https://bugs.launchpad.net/lib2geom/+bug/1564588
CC-MAIN-2022-27
refinedweb
236
60.41
I am a newbie to Java Persistence API and Hibernate. What is the difference between FetchType.LAZY and FetchType.EAGER in Java Persistence API? Sometimes you have two items and there's a connection between them. For example, you might have an item called University and another item called Student. The University item might have some fundamental characteristics such as id, name, address, etc. as well as a feature called students: public class University { private String id; private String name; private String address; private List<Student> students; // setters and getters} public class University { private String id; private String name; private String address; private List<Student> students; // setters and getters } Now when you set a University from the database, JPA loads its id, name, and address fields for you. But you hold two options for students: to load it concomitantly with the rest of the fields (i.e. eagerly) or to load it on-demand (i.e. lazily) while you call the university's getStudents() method. While a university has many students it is not capable to load all of its students with it when they aren't required. So in suchlike circumstances, you can claim that you want students to be loaded if they are needed. This is called lazy loading.
https://intellipaat.com/community/10047/difference-between-fetchtype-lazy-and-eager-in-java-persistence-api
CC-MAIN-2020-34
refinedweb
211
65.32
I am reading a book teaching me C I am using visual c++ 2005 edition express when i type the code written in the book, i get an error i dont understand it at all.i dont understand it at all.Quote: error C4430: missing type specifier - int assumed. Note: C++ does not support default-int here is the code Quote: /*Prints a charter and some numbers */ #include <stdio.h> main() { printf("A letter grade of %c\n", 'B'); printf("A test score of %d\n", 87); printf("A class average of %.1f\n", 85.9); return 0; } can you tell me how i can fix it? i think it has somethign to do with main()
https://cboard.cprogramming.com/cplusplus-programming/81465-main-printable-thread.html
CC-MAIN-2017-26
refinedweb
116
81.63
Arduino: LCD Displays (Part 1) Arduino: LCD Displays (Part 1) Want to use an Arduino to control an LCD display? You've come to the right place! We'll use an Arduino Uno and the LiquidCrystal library to display text. Join the DZone community and get the full member experience.Join For Free We have written plenty of Arduino tutorials, but none of them have been about displays. So we decided to do something about that. This is the first of a planned three-part tutorial series on how to use various displays with the Arduino. In this first part, we’re going to show you how to use character displays based on the Hitachi HD44780 LCD controller. This is a much-used standard on these kinds of displays and, together with Arduino’s LiquidCrystal library, they become super easy to use. Throughout this post, we’ll use a JHD 162A LCD display and an Arduino Uno. Wiring To get the display up and running you can use a breadboard to make the wiring a bit easier. You’ll need a couple of resistors as well as two 5V sources, so the breadboard comes in handy. Parallel Interface (With or Without Serial Backpack) These screens have a parallel interface, which takes up a lot of pins on the Arduino. There are screens with a so-called serial backpack which makes it able to use UART to communicate with the display, resulting in far fewer pins being used. We will, however, not look at that in this post. Different Setups. - Con: You need to use two cycles to write to (and read from) the display. This is done automatically, so it doesn’t raise the complexity, but it takes a bit more time. - 8 pins - Pro: Faster refresh rate. - Con: Lots of occupied GPIO-pins. The other choice you need to make is whether or not to be able to read from the display registers. This ability takes up one more GPIO-pin. We’re unsure if the LiquidCrystal library uses this pin for anything. So if you, like us, aren’t particularly interested in using this pin, you can just hook it up to GND and save a GPIO-pin. Analog Pins The display has two pins that react on analog values: contrast and background LED brightness in addition to a 5V supply pin and two GND pins. Contrast For the contrast pin, you should use a voltage divider to set a specific voltage on the pin. Voltage divider schematic (source: Wikipedia). In our case, Vin is 5 V and Vout is the voltage on the contrast pin on the display. The ratio between Z1 and Z2 defines Vout (i.e. the contrast) according to the following equation: For our JHD 162A display, we get the maximum contrast at Vout around 0.6 V. At around 1.4V, the contrast gets so low that the text disappears completely, so that’s the range you have to work with. If you replace Z1 with a potentiometer, you can adjust the contrast on the fly. LED Brightness This is a bit more straightforward. We just need to limit the current flowing through the LED. Our JHD 162A has a 100Ω resistor on board, so we can apply 5V directly. However, check that your display has an LED resistor before applying voltage. If you want less brightness, you can add a resistor or a potentiometer in series between 5V and the LED pin on the display. We’re running around 400Ω total and it looks pretty good. Digital Pins As mentioned earlier, you need to hook up either 4 or 8 data pins. In addition to this, you need to connect an Enable pin, an RS pin, and an optional R/W pin. The datasheet gives you more info on each of these pins. Schematics In the illustration below, you can see how we’ve hooked up the JHD 162A to our Arduino Uno. This setup will continue over to the coding example further down. In this setup, we use 4 data pins and a grounded R/W pin. Which GPIO pin you use to what (upper row on the Arduino) doesn’t really matter as long as you input the correct pin numbers in the firmware. Firmware LiquidCrystal The LiquidCrystal library is included in the Arduino IDE and makes everything incredibly easy. The library is for the user basically a set of functions that does all the hard work for you. Instead of going through all the functions, we’re just going to show you the code we used in the image at the top of this post. The documentation on the Arduino website is more than adequate, with plenty of examples and overview of all the functions. Our Code The code we used to display “Norwegian Creations” as in the top image is super easy and doesn’t require many lines of code: #include <LiquidCrystal.h> LiquidCrystal lcd(13, 12, 5, 4, 3, 2); void setup() { lcd.begin(16,2); lcd.print("Norwegian"); lcd.setCursor(0, 1); lcd.print("Creations"); } void loop() { } And that’s it! Be sure to carefully study the LiquidCrystal() constructor documentation carefully so that you get the parameter list in the right order. To Summarize… As you can see, hooking up an HD44780-based character display and making it show some text is really easy with an Arduino and the LiquidCrystal library. If nothing seems to work, double check your contrast voltage. Depending on your display, you might need to have the LED backlight working to get good contrast. The LiquidCrystal library makes coding easy. Use the documentation on Arduino’s website. In the next part of this blog post series, we’ll look at how we can use displays with a serial backpack to limit the pin usage. Published at DZone with permission of Mads Aasvik , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/arduino-lcd-displays-part-1
CC-MAIN-2020-05
refinedweb
1,007
72.97
On Sun, Dec 19, 1999, Klaus Weide wrote: > On Sun, 19 Dec 1999, Larry W. Virden wrote: [...] > > Lynx was compiled from a freshly downloaded source tar file > > and fresh configure run. > > > > When I say > > lynx -trace > > today, then I log into mail.yahoo.com and quit, I see: > > > > LYStoreCookies: save cookies to ~/.parms/lynx_cookies on exit > > > > as the last line of ouput. > > The '~' should have been expanded at this point, but for some reason > that hasn't happened. So LYStoreCookies() tries to open a file > literally named "~/.parms/lynx_cookies" which is bound to fail (unless > you happen to have a subdirectory named "~" and witing that a > subdirectory named ".parms"). > > LYStoreCookies() doesn't give any meaningful trace message about the > failure and just returns. > > Brian, any idea what went wrong here? I haven't traced it, but > the stuff in LYMain.c *looks* right to me. > > Larry, you can probably work around the problem by putting the full > expanded name (no '~') in COOKIE_FILE in your lynx.cfg (and possibly > COOKIE_SAVE_FILE) - untested. Yep, it was my fault. Here's a patch. Not intentionally obfuscated, but diff decided to do it this way. The old way: if (dump_output_immediately) { if (LYCookieSaveFile != NULL) { [tilde expand LYCookieSaveFile] } } else { StrAllocCopy(LYCookieSaveFile, "/dev/null"); } } else { if (LYCookieSaveFile == NULL) { StrAllocCopy(LYCookieSaveFile, LYCookieFile); } } Now it attempts to expand LYCookieSaveFile before it worries about what mode we're in. diff -ru 2.8.3dev.17/src/LYMain.c 2.8.3dev.17.bri/src/LYMain.c --- 2.8.3dev.17/src/LYMain.c Wed Dec 15 18:22:16 1999 +++ 2.8.3dev.17.bri/src/LYMain.c Sun Dec 19 14:28:37 1999 @@ -1605,29 +1605,28 @@ LYLoadCookies(LYCookieFile); } + /* tilde-expand LYCookieSaveFile */ +); + } + } + /* * In dump_output_immediately mode, LYCookieSaveFile defaults to * /dev/null, otherwise it defaults to LYCookieFile. */ - - if (dump_output_immediately) { -); - } - } else { + if(dump_output_immediately) { StrAllocCopy(LYCookieSaveFile, "/dev/null"); - } } else { - if (LYCookieSaveFile == NULL) { + if (LYCookieSaveFile == NULL) StrAllocCopy(LYCookieSaveFile, LYCookieFile); - } } #endif -- YO-YO: Something that is occasionally up but normally down. (see also Computer).
http://lists.gnu.org/archive/html/lynx-dev/1999-12/msg00524.html
CC-MAIN-2014-42
refinedweb
329
51.34
This is a pretty simple blog, addressing a very frequently asked question on XI forum as to how to remove the namespaces from the XML output of Message Mapping. This issue has been answered on the XI forums as well, and i am putting it graphically as a weblog, for the benefit of newbies in XI. While creating the Message Type in the Integration Repository, you can find a field called XML Namespace, this field will be holding the namespace value, as shown in the image below. If we do not remove the namespace from there, then the xml output will also contain the namespace in the root element, as shown in the image below. Inorder to get rid of the namespace from the XML output, we need to delete the value from XML Namespace, while creating the message type. Now the XML output will look like below, The output in SXMB_MONI as well, Hope this blog was helpful. Good blog !!! Cheers, Naveen did you take a look at the XI FAQ? question 15 (integration engine section) The specified item was not found. Regards, michal If you are looking at help in deleting your complete configuration, check the blog by Siva Maranani, How to : Delete Software Component from Integration Builder Hope it helps. cheers Sameer First of all, thanks a lot for your blog, it has been a good guide for me in order to improve about the declarations in XI, but i have another problem concerning to namespaces, the term is that i want to delete all declarations in the outbonding xml, that is:Sergio One way is to use an XSLT mapping to remove the namespace in your interface mapping after your message map. You can use the following XSLT, Hi,Norbert when i am testing the proxy to file scenario, an error is occured saying “EXCEPTION_DURING_EXECUTE” Cannot create target element /MT_iSIGHT_Employee_Details(root element). i had faced the same problem once. after going thru the payload found that its adding a new namespace at runtime “xmlns:prx=”urn:sap.com:proxy:”.. so any idea to handle this issue…? thanks, Lalitkumar.
https://blogs.sap.com/2005/12/05/how-to-remove-namespaces-in-mapping-xi/
CC-MAIN-2017-34
refinedweb
352
63.43
This article describes how to find common programming mistakes that may lead to poor startup time and how to correct them. Included is a case study demonstrating how the Runtime Spy was used to improve the startup performance of IBM WebSphere Studio Application Developer. The previous article, Part 1, introduces the Runtime Spy. Tune Eclipse’s startup performance with the Runtime Spy, Part 2 Submitted by Frank 2004-04-30 General Development 3 Comments One thing that is many times over looked is creating an object when it is not needed yet. The prefect example is creating an public/private class object and not using it in the constructor. e.g. public class Example { public ObjectX obj = new ObjectX(); public Example() {} } The problem with this is that you are wasting the time to create ObjectX when it is not used. The correct way (more efficient way) is to do the following: public class Example { public ObjectX obj; public Example() {} } This is extemely important when using SWING/AWT. Creating a ton of JPanel etc when they are not needed at that particular time kills startup time and redraws repsonsiveness. I doubt your java 101 examples have any influence on people that are core eclipse developers or eclipse plugin developers in general. I’m sure these people are well aware of not creating objects until they are needed. By the way, eclipse doesn’t even use Swing. It uses SWT. What about compiling Eclipse with gcj ? How much will the gain be ? Somewhere I noticed Red Hat shipping a gcj compiled Eclipse, anyone tried it ?
https://www.osnews.com/story/6897/tune-eclipses-startup-performance-with-the-runtime-spy-part-2/
CC-MAIN-2019-13
refinedweb
263
62.68
in reply to import duties I expected 3rd way to work just like the others, by creating a my variable in main. If you use Devel::Symdump, you'll see that my variables, aren't actually in the main package, they are local to the block.. The list outputted by the following code, shows all scalar symbols in the main package. #! /usr/bin/perl use strict; use Devel::Symdump; my $obj = Devel::Symdump->new('main'); my $var1 = 'var1'; our $var2 = 'var2'; foreach($obj->scalars){ print "$_\n"; } __OUTPUT__ ...Builtin Vars... main::var2 ...More Builtin Vars... [download] - Tom Hell yes! Definitely not I guess so I guess not Results (41 votes), past polls
http://www.perlmonks.org/?node_id=316858
CC-MAIN-2014-52
refinedweb
112
65.62
Syscall. Up to 2.6.35 the kernel was assuming that the instruction immediately preceeding the SYSCALL instruction was reloading the syscall number and was using this for syscall restarting. This meant, to keep a syscall invokation restartable it was impossible to place a SYSCALL instruction in a branch delay slot anyway. <uapi/asm/unistd.h> of your kernel." Example 2 Here is more recent version of same program which will compile with modern toolchain and headers. #include <regdef.h> #include <sys/asm.h> #include <sys/syscall.h> EXPORT(__start) .set noreorder LEAF(main) li a0,1 la a1,hello li a2,12 li v0,__NR_write syscall quit: li a0,0 li v0,__NR_exit syscall j quit nop END(main) .data hello: .ascii "Hello world!\n" And here is makefile for hello.s # # hello-1.2/Makefile # # This file is subject to the terms and conditions of the GNU General Public # License. See the file COPYING in the main directory of this archive for more # details. tool-prefix = mips-linux-gnu- CC = $(tool-prefix)gcc LD = $(tool-prefix)ld STRIP = $(tool-prefix)strip CFLAGS = -G0 -mno-abicalls -fno-pic LDFLAGS = -N -s all: hello hello: hello.o $(LD) $(LDFLAGS) -o hello hello.o hello.o: hello.S $(CC) $(CFLAGS) -c hello.s clean: rm -f core a.out *.o *.s distclean: clean rm -f hello
https://www.linux-mips.org/wiki/Syscall
CC-MAIN-2019-26
refinedweb
225
68.97
This action might not be possible to undo. Are you sure you want to continue? inputs and outputs, as well as the associated switches and controls. For the moment, just try and trace the signal path from the microphone input to the MON SEND connector. Don’t be put off by the huge range of possibilities; it’s easier than you think! If you look at the overview of the controls at the same XENYX 1222FX User Manual time, you’ll be able to quickly familiarize yourself with your mixing console and you’ll soon be making the most of all its many possibilities. 1.3 Before you get started 1.3.1 Shipment Your mixing console was carefully packed in the factory to guarantee safe transport. Nevertheless, we recommend that you carefully examine the packaging and its contents for any signs of physical damage, which may have occurred during transit. ◊ power operative. ◊ Please ensure that only qualified people install and operate the mixing console. During installation and operation, the user must have sufficient electrical contact to earth, otherwise electrostatic discharges might affect the operation of the unit.! XENYX 1222FX User Manual 2. Control Elements and Connectors This chapter describes the various control elements of your mixing console. All controls, switches and connectors will be discussed in detail. 2.1 Mono channels 2.1.1 Microphone and line inputs. ◊ Please mute your playback system before you activate the phantom power supply to prevent switch-on thumps being directed to your loud speakers. Please also note the instructions in chapter 2.5 “Rear view of 1222FX”. LINE IN Each mono input also features a balanced line input on a 1⁄4" connector. Unbalanced devices (mono jacks) can also be connected to these inputs. ◊ Please remember that you can only use either the microphone or the line input of a channel at any one time. You can never use both simultaneously! 5 INSERT Insert points enable the processing of a signal with dynamic processors or equalizers. They are sourced pre-fader, pre-EQ and pre-aux send. The second value range (+10 to -40 dB) refers to the line input and shows its sensitivity. cause no side effects such as phase shifting or bandwidth limitation. LEVEL SET This LED lights up when the optimum operating signal level is achieved. even with extreme gain settings of ±15 dB. you will need a cable fitted with mono phone plugs on the tape machine or effects device end. Set the GAIN control to the external devices’standard output level. unlike simple equalizers. The scale has 2 different value ranges: the first value range (+10 to +60 dB) refers to the MIC input and shows the amplification for the signals fed in there. During normal use. If that unit has an output signal level display. Tweaking is done using the LEVEL SET LED. The cut-off frequencies of the upper and lower band are 12 kHz and 80 Hz respectively.plug: tip = signal output. To this end. low-frequency signal components (80 Hz. This control should always be turned fully counter-clockwise whenever you connect or disconnect a signal source to one of the inputs. ring = return input). turn up GAIN slightly. The settings for equipment with standard line-level signals (-10 dBV or +4 dBu) look like this: While the GAIN control is turned all the way down. GAIN Use the GAIN control to adjust the input gain. this LED should only light up during signal peaks. connect your equipment. H SILGNEENGLISH 6 Equalizer All mono input channels include a 3-band equalizer.2: The equalizer of the input channels The upper (HIGH) and the lower band (LOW) are shelving filters that increase or decrease all frequencies above or below their cut-off frequency. for -10 dBV a bit more. Fig. All bands provide boost or cut of up to 15 dB. 18 dB/octave). For +4 dBu. and a bridged stereo phone plug on the console side (tip and ring connected). Inserts can also be used as pre-EQ direct outputs. without interrupting the signal path. All mono input channels are equipped with inserts. The mid band is configured as a peak filter with a center . The circuitry of the British EQs is based on the technology used in the best-known top-of-the-line consoles and providing a warm sound without any unwanted side effects. The result are extremely musical equalizers which. LOW CUT The mono channels of the mixing consoles have a high-slope LOW CUT filter for eliminating unwanted. the equalizer is inactive. In the central position. it should show 0 dB during signal peaks. 2. 3 Pan. to an active monitor speaker or an external effects device. This control features a constant-power characteristic. the peak filter processes a frequency range that extends upwards and downwards around its middle frequency. are sourced after the equalizer and offer up to +15 dB gain. For situations that require effects processing.2 Aux sends (MON and FX) XENYX 1222FX User Manual Both aux sends are mono. the aux sends are generally switched to pre-fader. The return from an external effects device can then be brought back into the console via the aux return connectors.3: The AUX SEND controls in the channel strips Aux sends take signals via a control from one or more channels and sum these signals to a so-called bus. they operate independently of the position of the channel fader. 2. FX The aux send labeled FX is for feeding external effects devices and is thus set up to be post-fader. ◊ If you press the MUTE switch of the respective channel. Unlike shelving filters. When setting up a monitor mix. the effects signal of the channel would remain audible even when the fader is turned to zero. 2. This bus signal is sent to an aux send connector and then routed. and you should also not have the FX SEND fader pulled down.1. 2. which means the signal is always maintained at a constant level. the aux sends (MON and FX) remain active. aux sends and returns (MON and FX) are not being muted. . for example. 2. In the 1222FX. If this were not the case. This means that the channel signal is no longer present in the main mix. Fig. you shouldn’t turn this control all the way to the left (-oo). However. the aux sends are usually switched post-fader so that the effects volume in a channel corresponds to the position of the channel fader. mute switch and channel fader Fig.1. MUTE LED The MUTE LED indicates that the relevant channel is muted.4: Channel fader and additional control elements PAN The PAN control determines the position of the channel signal within the stereo image. the FX send is routed directly to the built-in effects processor. Don’t have the FX MUTE switch pressed.e. MON In the 1222FX.5 kHz. i. To make sure that the effects processor receives an input signal. irrespective of position in the stereo panorama.frequency of 2. MUTE Use the MUTE switch to mute the channel. aux send 1 (MON) is wired pre-fader and is thus particularly suitable for setting up monitor mixes. The balance control determines the relative proportion between the left and right input signals before both signals are routed to the main stereo mix bus. so to speak. try lowering the GAIN control somewhat (counterclockwise). MUTE LED. lower apparent frequency increase on the channel EQ to avoid distortion.4 Balance. mute switch and channel fader BAL The function of the BAL(ANCE) control corresponds to the PAN control in the mono channels. of course. If you don’t wish to change the EQ settings under any circumstances. then you should also control its output signal level. the channel fader has to be turned up in order to get this channel’s signal to the effects processor! 2.1 Monitor send and FX send channels Fig.2. 2. The filter characteristics and crossover frequencies are the same as those of the mono channels. 2.2 Stereo channels 2. 2. There is often a discrepancy between the settings of the left and the right channels when using separate equalizers. For example. the signal on a stereo channel is first summed to mono before it reaches the aux bus.1 Channel inputs 7 2.3 Aux sends stereo channels In principle. In this case. The MUTE switch. g. A stereo equalizer is always preferable to two mono equalizers if frequency correction of a stereo signal is needed. stereo.5: Stereo channel inputs . The signals are. ◊ Attention: Since the aux path for the effect processor is connected post-fader. lower the mids and the highs somewhat to emphasize the bass. As aux send paths are always mono. a dynamic processor). CLIP LED and channel fader function in the same way as the mono channels.3.2. we now look at the mixing console from left to right. collected from one point on each of the channel strips and then routed to the main section all together. If you inserted an external effects processor via the insert connector (e.2.3 Connector panel and main section Whereas it was useful to trace the signal flow from top to bottom in order to gain an understanding of the channel strips. The channel fader determines the level of the channel signal in the main mix.2 Equalizer stereo channels The equalizer of the stereo channels is. 2.XENYX 1222FX User Manual CLIP LED The CLIP LED lights up when the input signal is driven too high. It should not be higher than its input signal level (0 dB). the aux sends of the stereo channels function in just the same way as those of the mono channels.2. 2. Fig. In those channels in which a mic input is present in the channel. and the FX SEND fader (see fig.6) hast to be open.” Both channels 5/6 and 7/8 feature an additional balanced XLR input for microphones with available +48 V phantom power. there is no signal at the MON SEND connector. the GAIN control has two scales: just like in the mono channels. Both external effects processors (via the FX SEND . there is a 0 to +40 dB scale that shows the preamplification of the mic signal. i. keep in mind that the subwoofer is still receiving a signal! FX TO MON You can use this control to insert an effects signal from the built-in effects processor to your monitor mix. FX SEND The FX SEND fader determines the overall level of the effects bus. you can also feed a subwoofer if you don’t require stage monitors. MON MUTE If the MON MUTE switch is pressed. Channels 9/10 and 11/12 can also be used in mono if you only use the connector labeled “L. ◊ When you use the MAIN MIX fader to reduce the overall volume. the monitor bus is muted.6: Aux send controls of the main section H SILGNEENGLISH 8 A channel signal is routed to the MON(ITOR) send bus if the MON control is turned up on the corresponding channel. Lower all frequencies above 160 Hz and assign the equalizer to“Monitor”. All stereo channel strips have a GAIN control for level setting. so that only low frequencies are fed into the subwoofer. Both inputs can also be used with balanced or unbalanced connectors. You can achieve the same effect by using the built-in graphical equalizer. i.e. Using the audio signal from this output. you should implement a crossover in your signal path presubwoofer and pre-amplifier. 2. Of course. the +20 to -20 dB scale shows the sensitivity for the corresponding input level that is applied to the line input. the FX controls in the channel strips must be turned up.+15 +15 2.e.Each stereo channel features two line-level inputs on 1⁄4" connectors for left and right channels. MON SEND The aux send control MON SEND acts as master control for the monitor bus and determines the level of the summed signal that is taken from the mixer via the MON SEND connector and that can for example be fed to an amplifier for monitor purposes. your effects processor must first receive a signal. to do this. To this end. The signal mix is created using the channels’ MON controls. 2. the effects channel is muted. because the effects signal is added to the main mix along with the “dry” channel signals. Once an effects mix is created. This also goes for the built-in effects processor. 2. If the control is turned all the way to the left. the FX MUTE switch is probably pressed and/ or the FX SEND control is too low. the aux return 1 automatically operates in mono. the FX control of the channel being used as an effects return should be turned fully counter- . ◊ If the connected effects processor receives no input signal.connector) and the built-in processor only receive an input signal if this control is open. You can connect this to the input of an external effects device in order to process the FX bus’ master signal.3 Aux return connectors Fig.8: Aux return connectors AUX RETURN 1 The AUX RETURN 1 connectors generally serve as the return path for the effects mix generated using the FX send. no effects signal can be heard. ◊ You can also use these connectors as additional line inputs. FX MUTE If the FX MUTE switch is pressed. ◊ In this instance. If only the left connector is used. If these connectors already function as additional inputs. you can route the effects signal back into the console via a different stereo channel. ◊ Adjust your external effects processor to 100% wet (effects signal only). with the added benefit that the channel EQ can be used to adjust the frequency response of the effects return signal.3. no signal is present at the FX SEND connector and the effects processor no longer receives an input signal. AUX RETURN 2 The AUX RETURN 2 connectors are used exactly the same way as the AUX RETURN 1 connectors. 2. i. the processed signal can then be routed from the effects device outputs back into the AUX RETURN connectors.3. FX TO MAIN Use the FX TO MAIN control to feed the effects signal into the main mix.2 Monitor send and FX send connector XENYX 1222FX User Manual MON SEND Connect the input of your monitor power amp or an active monitor system here to make the monitor mix audible to the musicians on the stage. FX SEND The FX SEND connector outputs the signal you picked up from the individual channels using the FX controls.e. This is where you connect the output signal of the external effects device. you can use the CD/tape inputs. The cool thing about this is that the main mix faders can remain open. so that you can play music from a CD at the same time. H SILGNEFig. Additionally. the faders for the muted channels can also remain in their position. Singers with their own band can practice singing difficult parts using a complete playback from a tape player or a CD. CD/TAPE MUTE Using this switch.9: CD/tape return channel This channel. stereo input channels 9 to 12 and the aux return inputs.clockwise. STANDBY . intended especially for connecting stereo signal sources (CD players. Similarly. Possible applications for the Voice Canceller are obvious: you can very simply stage background music for Karaoke events. 2. To bring in other sound sources.3. VOICE CANCELLER Here. Of course. CD/TAPE RET(URN) This stereo fader assigns the input signal from the CD/tape inputs into the main mix. you have a filter circuitry that lets you almost entirely remove the vocal portion of a recording. the filter seizes only the middle of the stereo image. thus minimizing rehearsal time. The filter is constructed in such a way that voice frequencies are targeted without majorly affecting the rest of the signal. 2. Such noise can in the worst-case scenario even irreparably damage loudspeaker membranes. otherwise feedback problems can occur! Fig. the input signal from the CD/tape inputs is muted. voice canceller and connection socket 10 0 10 15 20 25 30 40 60 00 9 sound system via the microphones.4 CD/tape return channel. exactly there where the vocals are typically located. DAT recorders or even sound cards) features a particularly practical feature: the VOICE CANCELLER.7: Aux send connectors MON and FX XENYX 1222FX User Manual 2. you can also do this at home or at your rehearsal room before you hit the stage. Alternatively.If the STANDBY switch is pressed. effects mix included.5 Main mix. all input channels with a mic connector (XLR connector) are muted. ENGLISH 10 2. you can easily switch between additional sources (e. the main outputs are also an alternative. cassette recorder. you can connect the following equipment: Live PA systems: A stereo dynamics processor (optional). If you connect a hi-fi amplifier with a source selection switch to the CD/TAPE INPUT. Fig. stereo equalizer (optional) and the stereo power amplifier for full-range . you can process all signals being brought into your mixing console via these connectors.). CD/TAPE OUTPUT These connectors are wired pre graphic EQ and pre XPQ surround function. They can also be used as stereo line input.g. sound card etc. DAT recorder) or also a CD player.11: Main mix fader MAIN MIX Use the high-precision quality faders to control the output level of the main mix. 2. 2. 2. MD player. Depending on how you wish to use your mixer and which gear you own. main out connectors and headphone connector MAX 10 0 10 15 20 25 30 40 60 00 Fig. If you wish to use your mixer solely for recording purposes. the output signal of a second XENYX or BEHRINGER ULTRALINK PRO MX882 can also be connected.10: 2-track connectors CD/TAPE INPUT The CD/TAPE INPUT RCA connectors are provided for connecting a 2-track machine (e.g. They carry the main mix signal (unbalanced). you can prevent noise from entering the Fig. Using the voice canceller function.3. During breaks or stage conversion. Connect the CD/TAPE OUTPUT to the inputs of your recording device.12: Main out connectors MAIN OUT The MAIN outputs carry the MAIN MIX signal and are on balanced XLR connectors with a nominal level of +4 dBu. use this connector to adjust the output signal level. In . 2.g. If you connect active monitors or an amplifier. !! CAUTION! ◊ We would like to draw your attention to the fact that extreme volumes may damage your hearing and/or your headphones or loudspeakers.loud-speakers with passive crossovers. PHONES The PHONES control adjusts the volume of the headphones connected to the PHONS/CTRL connector. XENYX 1222FX User Manual Recording: For mastering. The signal is in this case passed on from the compressor into the recorder. XENYX 1222FX User Manual +48 V The red “+48 V”LED lights up when the phantom power supply is switched on. Fig.6 Level meter and level setting Fig. Often. 2. and they divide the frequency range into several segments that are first amplified in the amplifiers and then passed onto the corresponding loudspeakers. The connector can also be used for feeding active monitor loudspeakers (or an amplifier) in your control room. limiters are already built into active crossovers (e. Use it to custom-tailor the dynamic characteristics of your signal to the dynamic range of the recording equipment you are using. For this purpose. BEHRINGER SUPER-X PRO CX2310 and ULTRADRIVE PRO DCX2496). Active crossovers are implemented directly before the power amplifier. The phantom power supply is necessary for condenser microphones and is activated using the corresponding switch on the rear of the device. you have to use an active crossover and several power amplifiers. Turn the MAIN MIX faders and the PHONES control in the main section fully down before you switch on the unit. 2. ◊ Connect microphones before you switch on the phantom power supply. Always be careful to set the appropriate volume. If you wish to use multi-way loudspeaker systems without an integrated crossover.13: PHONS/CTRL connector PHONS/CTRL connector You can connect headphones to this 1⁄4" TRS connector. the signal is taken directly before it is passed on to the main mix faders. Please do not connect microphones to the mixer (or the stagebox/wallbox) while the phantom power supply is switched on.3.14: Level meter POWER The blue POWER LED indicates that the device is switched on. using a stereo compressor such as the COMPOSER PRO-XL MDX2600 can be recommended. MAIN MIX/MONITOR This toggles the graphic equalizer between the main mix and the monitor mix. lower the frequency range in question somewhat in order to avoid feedback. FBQ FEEDBACK DETECTION The switch turns on the FBQ Feedback Detection System. for example. Due to their inertia VU meters tend to display too low a signal level at frequencies above 1 kHz.g. and inactive on the main mix. Snare drums should be driven to approx.15: The graphic stereo equalizer The graphic stereo equalizer allows you to tailor the sound to the room acoustics. EQ IN Use this switch to activate the graphic equalizer. kick drum). slightly excessive levels can create unpleasant digital distortion. the monitor/PA loudspeakers should be muted before you activate the phantom power supply. and inactive on the monitor mix. one minute to allow for system stabilization. After switching on. wait approx. The graphic stereo equalizer has to be turned on in order to use this function. the recorder’s peak meter should not exceed 0 dB. 0 dB. When activated. you can also use the FBQ Feedback Detection for monitors by placing the equalizer in the monitor bus (see MAIN MIX/MONITOR). A recording level of 0 dB is recommended for all signal types.addition. When recording to an analog device. 2. the equalizer is active in stereo on the main mix. 2.5 Rear view of 1222FX .4 Graphic 7-band equalizer Fig. 2. the VU meters of the recording machine should reach approx. a Hi-Hat should only be driven as far as -10 dB. With the switch up (not depressed). at least one (ideally several) microphone channels have to be open for feedback to occur at all! Feedback is particularly common when stage monitors (“wedges”) are concerned. LEVEL METER/CLIP The high-precision level meter accurately displays the appropriate signal level. This is because. ◊ The peak meters of your XENYX display the level virtually independent of frequency. unlike analog recordings. It uses the LEDs in the frequency band faders to indicate the critical frequencies. +3 dB with low-frequency signals (e. Therefore. the fader LEDs will illuminate. When the switch is depressed the equalizer is active in mono on the monitor mix. On a per-need basis. because monitors project sound in the direction of microphones. This is why. LEVEL SETTING: When recording to a digital device. 11 ◊ Logically. If mounting in a rack. Caution! You must never use unbalanced XLR connectors (PIN 1 and 3 connected) on the MIC input connectors if you want to use the phantom power supply. ensure that the plug is easily accessible. which meets the required safety standards. When installing the product.3.H SILGNEFig. The mains connection is made via a cable with IEC mains connector. contact the microphone manufacturer! ◊ ◊ Connect microphones before you switch on the phantom power supply. Please do not connect microphones to the mixer (or the stagebox/wallbox) while the phantom power supply is switched on. To disconnect the unit from the mains. Unplug the power cord completely when the unit is not used for prolonged periods of time. After switching on. 2. provided that they are wired in a balanced configuration. An appropriate mains cable is supplied with the equipment. you mix the channel signal (dry) and the effect signal. The POWER switch should always be in the “Off” position when you are about to connect your unit to the mains. SERIAL NUMBER Please note the important information on the serial number given in chapter 1. pull out the main cord plug.3. ensure that the mains can be easily disconnected by a plug pull or by an all-pole disconnect switch on or near the rack. In addition. wait approx. If you move the FX TO MAIN control. Blown fuses must only be replaced by fuses of the same type and rating. This also goes for mixing effects signals with the monitor . ENGLISH 12 PHANTOM The PHANTOM switch activates the phantom power supply for the XLR microphone inputs.16: Voltage supply and fuse FUSE HOLDER/IEC MAINS RECEPTACLE The console is connected to the mains via the cable supplied. the monitor/PA loudspeakers should be muted before you activate the phantom power supply. XENYX 1222FX User Manual These effect presets are designed to be added to dry signals. The red +48 V LED lights up when phantom power is on. In case of doubt. one minute to allow for system stabilization. which is required to operate condenser microphones. As a rule. POWER Use the POWER switch to power up the mixing console. ◊ Attention: The POWER switch does not fully disconnect the unit from the mains. dynamic microphones can still be used with phantom power switched on. you will find an illustration showing how to connect your footswitch correctly. the flashing stops. This way. To recall the selected preset.2. use this to switch the effects processor on and off. flanger. 3.2 XPQ surround function The surround function can be enabled/disabled with the XPQ TO MAIN switch. 3. The display flashes the number of the current preset. Fig. chorus. completely simplifying the handling. you are overloading the effects processor and this could cause unpleasant distortion. delay and various combination effects. The integrated effects module has the advantage of requiring no wiring.1: Effects presets overview 24-BIT MULTI-EFFECTS PROCESSOR Here you can find a list of all presets stored in the multi-effects processor.1 Digital effects processor 0 MAX PROGRAM SURROUND (PUSH) XPQ TO MAIN Fig. PROGRAM You can select the effect preset by turning the PROGRAM control. the danger of creating ground loops or uneven signal levels is eliminated at the outset. ◊ In chapter 4. The FX SEND fader determines the level that reaches the effects module.3: Digital Effects module and XPQ Surround Function control elements Fig. thus making the sound more lively and transparent. Digital Effects Processor and Xpq Surround Function 3. press the button. Take care to ensure that the clip LED only lights up at peak levels. This built-in effects module produces high-grade standard effects such as reverb. LEVEL The LED level meter on the effects module should display a sufficiently high level. 3. 3.mix. The main difference is that the mix ratio is adjusted using the FX TO MON control. Of course. Use the SURROUND control to determine the intensity of this effect. This is a built-in effect that widens the stereo width. A flashing dot at the bottom of the display indicates if the effects processor is muted via the footswitch. a signal has to be fed into the effects processor via the FX control in the channel strip for both applications. XENYX 1222FX User Manual .2: Connection socket for the footswitch FOOTSWITCH Connect a standard footswitch to the footswitch connector. 3. If it is lit constantly. You can also recall the selected preset using the footswitch. you can mount the mixing console in a commercially available 19" rack. 4. Be sure to use only high-grade cables. strain relief clamp sleeve tip ◊ Caution! You must never use unbalanced XLR connectors (pins 1 and 3 connected) on the MIC inputs if you intend to use the phantom power supply. Installation 4. Use these screws to fasten the two brackets onto the console. ◊ Only use the screws holding the mixing console side panels to fasten the 19" rack mounts. input 21 3 1 = ground/shield 2 = hot (+ve) 3 = cold (-ve) output 1 3 2 For unbalanced use.4.2: XLR connections Strain relief clamp Sleeve Tip 13 H SILGNEsleeve pole 1/ground Sleeve (ground/shield) tip pole 2 The footswitch connects both poles momentarily . The following illustrations show the wiring of these cables.1 Rack mounting The packaging of your mixing console contains two 19" rack mount brackets which can be installed on the side panels of the console. so as to avoid overheating. Be sure to allow for proper air flow around the unit. 4.2 Cable connections You will need a large number of cables for the various connections to and from the console. you need to remove the screws holding the left and right side panels. Before you can attach the rack mount brackets to the mixing console. pin 1 and pin 3 have to be bridged Balanced use with XLR connectors Fig. and do not place the mixing console close to radiators or power amps. being careful to note that each bracket fits a specific side. With the rack mount brackets installed. 5: Insert send return 1⁄4" TRS connector XENYX 1222FX User Manual strain relief clamp sleeve ring tip sleeve . or ensure that ring and sleeve are bridged inside the stereo plug (or pins 1 & 3 in the case of XLR connectors).1/ 4" TS footswitch connector Fig.1: 1⁄4"TS footswitch connector Tip (signal) Unbalanced 1⁄4" TS connector Fig. Balanced 1/ 4" TRS connector Fig. 4.1 Audio connections Please use commercial RCA cables to wire the 2-track inputs and outputs. ENGLISH 14 strain relief clamp sleeve ring tip sleeve ground/shield ring cold (-ve) tip hot (+ve) For connection of balanced and unbalanced plugs. 4.2. Insert send return 1/ 4" TRS connector Fig. of course. ring and sleeve have to be bridged at the stereo plug. 4. You can. 4. also connect unbalanced devices to the balanced input/outputs.3: 1⁄4" TS connector 4.4: 1⁄4"TRS connector strain relief clamp sleeve ring tip sleeve ground/shield ring return (in) tip send (out) Connect the insert send with the input and the insert return with the output of the effects device. Use either mono plugs. Specifications Mono inputs Microphone inputs (XENYX Mic Preamp) Type Mic E. (20 Hz .6: 1⁄4" TRS connector for headphones XENYX 1222FX User Manual 5.20 kHz) @ 0 Ω source resistance @ 50 Ω source resistance @ 150 Ω source resistance Frequency response Gain range Max.90 kHz <10 Hz . 4.ground/shield ring right signal tip left signal 1/ 4" TRS headphones connector Fig. input level Stereo inputs Type . 7/8 Microphone input Type Impedance Gain range Max.I. input level Impedance Signal-to-noise ratio Distortion (THD+N) Line input Type Impedance Gain range Max.N. input level Fade-out attenuation1 (Crosstalk attenuation) Main fader closed Channel muted Channel fader muted Frequency response Microphone input to main out <10 Hz .160 kHz Stereo inputs Channels 5/6. 20 kΩ balanced 10 kΩ unbalanced -10 to +40 dB +22 dBu @ 0 dB Gain 98 dB 85 dB 85 dB +0 dB / -1 dB +0 dB / -3 dB XLR microphone connector.200 kHz (-3 dB) +10 to +60 dB +12 dBu @ +10 dB Gain approx. electronically balanced.005 % / 0.7 dB A-weighted -131 dB / 133.5 dB A-weighted -129 dB / 130. 2. input level XLR.150 kHz (-1 dB). input level EQ mono channels Low Mid High Low cut EQ stereo channels Low . 40 kΩ @ 0 dB Gain -20 dB to +20 dB +22 dBu @ 0 dB Gain Channels 9/10.6 kΩ balanced 110 dB / 112 dB A-weighted (0 dBu In @ +22 dB Gain) 0. input level CD/tape in Type Impedance Max.6 kΩ balanced 0 dB to +40 dB +2 dBu 2 x 1⁄4" TRS connector. electronically balanced approx.5 dB A-weighted <10 Hz . 11/12 Type Impedance Gain range Max. discrete input circuitry -134 dB / 135.Impedance Gain range Max. 2.004 % A-weighted 1⁄4" TRS connector. unbalanced approx. <10 Hz . electronically balanced approx. 10 kΩ +22 dBu 80 Hz / ±15 dB 2. unbalanced approx. 40 kΩ @ 0 dB Gain -20 dB to +20 dB +22 dBu @ 0 dB Gain RCA connectors approx. input level Main outputs Type Impedance Max. output level Headphone output Type Max. output level Aux returns Type Impedance Max. electronically balanced approx. 10 kΩ +22 dBu XLR. 120 Ω +22 dBu 1⁄4" TRS connector. output level DSP Converter Sampling rate 15 H SILGNE2 x 1⁄4" TRS connector. 80 Hz / ±15 dB 2. unbalanced approx. output level CD/tape out Type Impedance Max.Mid High MON/FX send Type Impedance Max. unbalanced approx.5 kHz / ±15 dB 12 kHz / ±15 dB 80 Hz. 240 Ω balanced / 120 Ω unbalanced +28 dBu . 18 dB/oct.5 kHz / ±15 dB 12 kHz / ±15 dB 1⁄4" TS connector. 2: 20 Hz . 1222FX User Manual XENYX 1222FX User Manual 17 Limited Warranty § 1 Warranty .20kHz.6 A H 250 V Standard IEC receptacle approx. unity gain. channels 2/4 as far right as possible.60 Hz 40 W T 1. 1 kΩ +22 dBu Texas Instruments 24-bit Sigma-Delta. to 0 dBu. As a result of these efforts. main output. Specifications and appearance may differ from those listed or illustrated. channels 1/3 as far left as possible. Channel fader @ 0 dB Power supply Mains Voltage Power consumption Fuse Mains connection Physical Dimensions (H x W x D) Weight (net) -99 dB / -101 dB A-weighted -84 dB / -87 dB A-weighted -80 dB / -82 dB A-weighted 100 . Channels 1 .20 kHz. line input.1⁄4" TRS connector. 50 . 97 mm (3 7/8") x 345 mm (13 18 /32") x 334 mm (13 5/32") approx. modifications may be made from time to time to existing products without prior notice. measured at main output.) Measuring conditions: 1: 1 kHz rel. EQ flat. 20 Hz . Reference = +6 dBu. 3. BEHRINGER is constantly striving to manintain the highest professional standards.38 lbs. Channel fader -oo Main mix @ 0 dB. 64/128-times oversampling 40 kHz ENGLISH 16 XENYXMain mix system data2 Noise Main mix @ -oo.240 V~.4 unity gain. unbalanced +19 dBu / 150 Ω (+25 dBm) RCA connectors approx.80 kg (8. Channel fader -oo Main mix @ 0 dB. all channels on main mix. [1] This limited warranty is valid only if you purchased the product from a BEHRINGER authorized dealer in the country of purchase. [3] Upon validation of the warranty claim. After verifying the product’s warranty eligibility with the original sales receipt.behringer. THIS LIMITED WARRANTY IS VOIDWITHOUT SUCH PROOF OF PURCHASE. All inquiries must be accompanied by a description of the problem and the serial number of the product. at its discretion. please check if your problem can be dealt with by our “Online Support” which may also be found under “Support” at www. [3] Shipments without freight prepaid will not be accepted. If your country is not listed. PLEASE RETAIN YOUR SALES RECEIPT. this limited warranty shall apply to the replacement product for the remaining initial warranty period. A list of authorized dealers can be found on BEHRINGER’s website www. or you can contact the BEHRINGER office closest to you. please contact the retailer from whom the equipment was purchased. If the product shows any defects within the specified warranty period and that defect is not excluded under § 4. § 4 Warranty Exclusions [1] This limited warranty does not cover consumable parts including.. the product must be returned in its original shipping carton. [4] Warranty claims other than those indicated above are expressly excluded. the repaired or replacement product will be returned to the user freight prepaid by BEHRINGER. either replace or repair the product using suitable new or reconditioned product or parts. com under “Where to Buy”.com.behringer.behringer. Alternatively.com BEFORE returning the product. but not limited to. together with the return authorization number to the address indicated by BEHRINGER. i. [2] BEHRINGER* warrants the mechanical and electronic components of this product to be free of defects in material and workmanship if used under normal operating conditions for a period of one (1) year from the original date of purchase (see the Limited Warranty terms in § 4 below). in any country which is not the country for which the product was originally developed and manufactured. § 2 Online registration Please do remember to register your new BEHRINGER equipment right after your purchase at. please submit an online warranty claim at www. one (1) year (or otherwise applicable minimum warranty period) from the date of purchase of the original product. If the product needs to be modified or adapted in order to comply with applicable technical or safety standards on a national or local level. IT IS YOUR PROOF OF PURCHASE COVERING YOUR LIMITED WARRANTY. fuses and batteries. Registering your purchase and equipment with us helps us process your repair claims quicker and more efficiently. BEHRINGER will then issue a Return Materials Authorization (“RMA”) number. Where applicable. this modification/adaptation shall not be considered a defect in materials or workmanship.com under “Support”and kindly read the terms and conditions of our limited warranty carefully. BEHRINGER shall. In case BEHRINGER decides to replace the entire product. Should your BEHRINGER dealer not be located in your vicinity. This limited warranty does not cover any such .com. [2] This limited warranty does not cover the product if it has been electronically or mechanically modified in any way. unless a longer minimum warranty period is mandated by applicable local laws. [2] Subsequently. BEHRINGER warrants the valves or meters contained in the product to be free from defects in material and workmanship for a period of ninety (90) days from date of purchase.behringer.behringer. you may contact the BEHRINGER distributor for your country listed under “Support”at www. Thank you for your cooperation! § 3 Return authorization number [1] To obtain warranty service. fire. [9] Products which do not meet the terms of this limited warranty will be repaired exclusively at the buyer’s expense. This also applies to defects caused by normal wear and tear. BEHRINGER or its authorized service center will inform the buyer of any such circumstance. [10] Authorized BEHRINGER dealers do not sell new products directly in online auctions. H SILGNEEN ENGLISH 18 § 6 Claim for damage Subject only to the operation of mandatory applicable local laws.) shall be entitled to give any warranty promise on behalf of BEHRINGER. If the buyer fails to submit a written repair order within 6 weeks after notification.modification/adaptation. guitar strings. Any such software is provided “AS IS” unless expressly provided for in any enclosed software limited warranty. [8] If an inspection of the product by BEHRINGER shows that the defect in question is not covered by the limited warranty. Under the terms of this limited warranty. No other person (retail dealer.D. § 5 Warranty transferability This limited warranty is extended exclusively to the original buyer (customer of authorized retail dealer) and is not transferable to anyone who may subsequently purchase this product. • connection or operation of the unit in any way that does not comply with the technical or safety regulations applicable in the country where the product is used. [6] Damage/defects caused by the following conditions are not covered by this limited warranty: • improper handling. [4] This limited warranty is invalid if the factory-applied serial number has been altered or removed from the product. [7] Any repair or opening of the unit carried out by unauthorized personnel (user included) will void the limited warranty. Online auction confirmations or sales receipts are not accepted for warranty verification and BEHRINGER will not repair or replace any product purchased through an online auction. potentiometers. BEHRINGER shall have no liability to the buyer under this warranty for any consequential or indirect loss or damage of any kind. the inspection costs are payable by the customer. of faders. etc) or any other condition that is beyond the control of BEHRINGER. [3] This limited warranty covers only the product hardware. illuminants and similar parts.O. tubes. in particular. [5] Free inspections and maintenance/repair work are expressly excluded from this limited warranty. • damage/defects caused by acts of God/Nature (accident. Such costs will also be invoiced separately when the buyer has sent in a written repair order. if caused by improper handling of the product by the user. flood. crossfaders. in particular. neglect or failure to operate the unit in compliance with the instructions given in BEHRINGER user or service manuals. keys/buttons. In no event shall the liability of BEHRINGER under this limited warranty exceed the invoiced value of the product. with a separate invoice for freight and packing. regardless of whether it was carried out properly or not. Purchases made through an online auction are on a“buyer beware” basis. etc. BEHRINGER shall not be held responsible for any cost resulting from such a modification/adaptation. . It does not cover technical assistance for hardware or software usage and it does not cover any software products whether or not contained in the product. BEHRINGER will return the unit C. ALL RIGHTS RESERVED. Box 146. Wickhams Cay.com. BEHRINGER products are sold through authorized dealers only. WA 98011. including photocopying and recording of any kind. [3] This warranty does not detract from the seller’s obligations in regard to any lack of conformity of the product and any hidden defect. Colors and specifications may vary slightly from product. electronic or mechanical. No part of this manual may be reproduced or transmitted in any form or by any means. BEHRINGER accepts no liability for any loss which may be suffered by any person who relies either wholly or in part upon any description. [2] The limited warranty regulations mentioned herein are applicable unless they constitute an infringement of applicable mandatory local laws. please see complete details online at www. This manual is copyrighted. Trident Chambers. Macau. © 2009 Red Chip Company Ltd. P. Distributors and dealers are not agents of BEHRINGER and have absolutely no authority to bind BEHRINGER by any express or implied undertaking or representation. photograph or statement contained herein. British Virgin Islands XENYX 1222FX User Manual FEDERAL COMMUNICATIONS COMMISSION COMPLIANCE INFORMATION XENYX 1222FX Responsible party name: BEHRINGER USA. It supersedes all other written or oral communications related to this product. without the express written permission of Red Chip Company Ltd. Tortola.§ 7 Limitation of liability This limited warranty is the complete and exclusive warranty between you and BEHRINGER. Macau Finance Centre 9/J. Address: 18912 North Creek Parkway. Inc.O. for any purpose. USA Phone/Fax No. 202-A.: Phone: +1 425 672 0816 Fax: +1 425 673 7647 hereby declares that the product XENYX 1222FX . § 9 Amendment Warranty service conditions are subject to change without notice. BEHRINGER provides no other warranties for this product. Suite 200 Bothell.behringer. The information contained herein is correct at the time of printing. * BEHRINGER Macao Commercial Offshore Limited of Rue de Pequim No. Road Town. For the latest warranty terms and conditions and additional information regarding BEHRINGER’s limited warranty. including all BEHRINGER group companies XENYX 1222FX User Manual Legal Disclaimer Technical specifications and appearance are subject to change without notice. § 8 Other warranty rights and national law [1] This limited warranty does not exclude or limit the buyer’s statutory rights as a consumer in any way. if not installed and used in accordance with the instructions. This equipment generates. uses and can radiate radio frequency energy and. However. Swedish. • Connect the equipment into an outlet on a circuit different from that to which the receiver is connected. German. • Consult the dealer or an experienced radio/TV technician for help. pursuant to part 15 of the FCC Rules. Greek. If this equipment does cause harmful interference to radio or television reception. there is no guarantee that interference will not occur in a particular installation.complies with the FCC rules as mentioned in the following paragraph: This equipment has been tested and found to comply with the limits for a Class B digital device. Operation is subject to the following two conditions: (1) this device may not cause harmful interference. Spanish. Polish. This device complies with Part 15 of the FCC rules. • Increase the separation between the equipment and receiver. and (2) this device must accept any interference received. There may also be more current versions of this document. the user is encouraged to try to correct the interference by one or more of the following measures: • Reorient or relocate the receiving antenna. which can be determined by turning the equipment off and on. including interference that may cause undesired operation. French. Dutch. Danish. Finnish. These limits are designed to provide reasonable protection against harmful interference in a residential installation. Portuguese.com .behringer. Italian. Download them by going to the appropriate product page at: www. Japanese and Chinese. 19 H SILGNEENGLISH This manual is available in English. Russian. may cause harmful interference to radio communications. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/144113041/xeniz-manual4
CC-MAIN-2016-44
refinedweb
7,628
67.86
#include <rte_flow.h> RTE_FLOW_ACTION. Note: RSS hash result is stored in the hash.rss mbuf field which overlaps hash.fdir.lo. Since the MARK action sets the hash.fdir.hi field only, both can be requested simultaneously. Definition at line 1611 of file rte_flow.h. RSS hash function to apply. Definition at line 1612 of file rte_flow.h. Packet encapsulation level RSS hash types apply to. 0 requests. Definition at line 1638 of file rte_flow.h. Specific RSS hash types (see ETH_RSS_*). Definition at line 1639 of file rte_flow.h. Hash key length in bytes. Definition at line 1640 of file rte_flow.h. Number of entries in queue. Definition at line 1641 of file rte_flow.h. Hash key. Definition at line 1642 of file rte_flow.h. Queue indices to use. Definition at line 1643 of file rte_flow.h.
https://doc.dpdk.org/api-18.05/structrte__flow__action__rss.html
CC-MAIN-2022-27
refinedweb
138
81.39
jetsmart @jetsmart Best posts made by jetsmart - RE: Quasar CLI. Problem with creating new project posted in CLI Latest posts made by jetsmart - RE: Quasar CLI. Problem with creating new project posted in CLI - Quasar CLI. Problem with creating new project The problem appeared after updating Quasar CLI. Before that everything worked fine. Tried reinstalling. No result. - RE: Breakpoints relatively Layout thanks @dobbel I also met similar solutions. It works, but this is “standalone” solutions. I have Quasar app with template based on quasar classes like col-<breakpoint> and similar. And I wish it would work both in the window and in the сontainerized layout the same way. It would be nice if Quasar developers implemented this feature - Breakpoints relatively Layout Hi all We have a interesting example Containerized QLayout where we see that drawers breakpoint does not refer to the window width, but to the actual width of the QLayout container. It’s great! But, how can I use this wonderful container with relative breakpoints for the classes like col-<breakpoint>-<number> where <breakpoint> does not refer to the window width, but to the actual width of the QLayout container? - RE: SSR + PWA with dynamic manifest.json Hi! I have a similar case. Is there any solution? Need to disable generation of native quasar manifest. But leave GenerateSW mode and generation of serviceworker. - RE: TreeShaking using webpack @Hawkeye64 I dont use quasar-cli. For project building i use webpack(ver. 4.33) itself ( npm run build). "scripts": { "build": "webpack --mode=production --devtool source-map --progress" } TreeShaking works perfectly with simple test modules, as described in manual, but wont with Quasar. I try remove import {default as Quasar} from 'quasar'then bundle size decreases, and contain only QBtn. !() Exactly import {default as Quasar}pulls along all quasar components. !() - TreeShaking using webpack Hi! I create App with Nuxt, and using nuxt buildfor building bundle. - install Quasar into project: vue add quasar - add code into app import Vue from 'vue' import '../../assets/stylus/quasar.styl' import '@quasar/extras/material-icons/material-icons.css' import {default as Quasar} from 'quasar' import { QBtn } from 'quasar' Vue.use(Quasar, { config: {}, components: { QBtn, }, directives: { }, plugins: { } }) - create bundle npm run build - analyze bundle and see, that all quasar components is included, however i use unly one QBtn. How can i minimize bundle size by treeshaking? - RE: Unwanted tree node expansion try add to q-input element “@click.stop”
https://forum.quasar-framework.org/user/jetsmart
CC-MAIN-2021-04
refinedweb
401
58.79
Contents Question In this question, on the first line, we’re given two integers, say, ‘n’ and ‘m’. Then, we’re given ‘m’ number of instruction-lines as input, in which each line contains 3 values. We begin with a single-dimensional empty array ‘arr’ (all 0 values) of size ‘n’. Note that the index of this array begins with 1, and not 0. Each instruction-line has 3 values. The first two indicate two indices in the single dimensional array. Let’s call the three values on each line ‘a’, ‘b’, and ‘k’, respectively. Our task is to add the ‘k’ of each instruction-line to every element of ‘arr’ belonging to an index ranging between the ‘a’ and ‘b’ of that particular ‘instruction line’. The final task is to return the maximum value from the resultant array, i.e. ‘arr’. I know this sounds very confusing, but the example given below will clear things up. Explanation With Example Suppose, we get the following input: 5 2 1 3 3 1 5 5 Here, n = 5 and m = 2. We start with a blank array ‘arr’ of size ‘m’, i.e. 5, {0, 0, 0, 0, 0} and perform the following tasks: - Add 3 to all the index positions between 1 to 3. - Add 5 to all the index positions between 1 to 5. So, first of all let’s declare an array, arr = {0, 0, 0, 0, 0} We do so because the value of n is 5. Now, if we check the first instruction-line, we get {1, 3, 3}. According to the question, the first two values, i.e. 1, 3 are indices in whose range the third value, i.e. 3, must be added. So, we add 3 to the values in 1, 2, and 3 index positions of arr. Thus, arr becomes {0+3, 0+3, 0+3, 0, 0} = {3, 3, 3, 0, 0} Next, if we check the second-instruction line, we get {1, 5, 5}. Which means, we must add 5 to the values of arr in indices which fall in the range of 1 to 5. Thus, arr becomes {3+5, 3+5, 3+5, 0+5, 0+5} = {8, 8, 8, 5, 5}. This is the final resultant array. The largest value in the resultant array is 8. Thus, 8 is the final answer. Solution Explanation Naive Approach The most straightforward way of solving this problem would be to iterate through the instruction-lines one by one, and at each iteration, run an internal loop from ‘a’ to ‘b’ for that instruction-line and add the ‘k’ value to the values of arr at those indices. This approach would work exactly as shown in the above examples. However, this approach would be too slow due to the presence of nested loops, and some of the larger test cases would throw a timeout error. Thus, we need to solve this problem with a linear approach. Optimised Approach Instead of running an internal loop for the given range in each instruction-line and adding the ‘k’ value to each of those indices, we could simply add the ‘k’ value to the at the ‘a’th index and subtract the ‘k’ value from the ‘b’th index. And while finding the maximum value, we declare a max = 0 and a sum = 0 variable. We iterate through the resultant array and add each of its elements into the sum variable. At every iteration, we check whether the sum variable is bigger than the max variable or not. If yes, then we set max = sum. Let’s understand this better with the help of an example. Taking the same input as previously used: 5 2 1 3 3 1 5 5 We start with a blank array arr = {0, 0, 0, 0, 0}. In the 1st instruction-line, a = 1, b = 3, and k = 3. Thus, we put arr[a] = arr[a] + k and arr[b+1] = arr[b+1] – k. Or, arr[1] = 0 + 3 = 3, arr[4] = 0 – 4 = -4. arr = {3, 0, 0, -3, 0} In the 2nd instruction-line, a = 1, b = 5, and k = 5. Thus, we put arr[a] = arr[a] + k. But as arr[b+1] is arr[6], which is out of bounds, we do not take any further action. Thus, arr[1] becomes arr[1] + 5 = 3 + 5 = 8. We get the final resultant array as {8, 0, 0, -3, 0}. We set max = 0, sum = 0. Now we start iterating through the resultant array and adding its values in sum. At index 1, sum = sum + arr[1] = 8. sum is greater than max. Thus max = sum = 8. At index 2, sum = sum + arr[2] = 8. sum is not greater than max. Thus max stays the same. At index 3, sum = sum + arr[3] = 8. sum is not greater than max. Thus max stays the same. At index 4, sum = sum + arr[4] = 5. sum is not greater than max. Thus max stays the same. At index 5, sum = sum + arr[5] = 5. sum is not greater than max. Thus max stays the same. We return max (8) as the final answer. Code Solution The full code solution to the HackerRank Array Manipulation problem in Java and C++ are given below. HackerRank Array Manipulation: Java Code Solution import java.util.Scanner; class Solution{ public static void main(String[] args){ long n, m, a, b, k, i, j, max = 0, sum = 0; Scanner sc=new Scanner(System.in); n = sc.nextLong(); m = sc.nextLong(); long arr[] = new long[(int) (n+1)]; for(i=0; i<m; i++){ a = sc.nextLong(); b = sc.nextLong(); k = sc.nextLong(); arr[(int) a] += k; if(((int) (b+1)) <= n) arr[(int) (b+1)] -= k; } for(i=1; i<=n; i++){ sum += arr[(int) i]; max=Math.max(max, sum); } System.out.println(max); } } HackerRank Array Manipulation: C++ Code Solution #include <iostream> using namespace std; int main() { long int n, m, a, b, k, i, j, max = 0,sum = 0; cin >> n >> m; long int *arr=new long int[n+1](); for(i=0; i<m; i++) { cin >> a >> b >> k; a[a] += k; if((b+1) <= n) a[b+1] -= k; } for(i=1; i<=n; i++) { sum += arr[i]; max = std::max(max, sum); } cout << max; return 0; } Result I hope I was able to explain the solution to the Array Manipulation problem from HackerRank’s Interview Preparation Kit. If you have any doubts, feel free to comment down below. I will try my best to help you out. What is the time complexity of this solution? O(n), where n is the number of instruction lines.
https://www.techrbun.com/array-manipulation-hackerrank-solution/
CC-MAIN-2022-21
refinedweb
1,118
75.2
Need a little help with QPushButton & connect I have just started working on a plugin for the program Daz Studio and I'm stuck on what I thought would be a simple process to get an event when a button is pressed. Alas that it does not work. So I have in my main .cpp : QPushButton* button1 = new QPushButton(tr("Button 1")); mainLyt->addWidget(button1); connect(button1, SIGNAL(clicked()), this, SLOT (doSomething())); Then a function : void DzSceneInfoPane::doSomething() { QMessageBox* box = new QMessageBox(); box->setWindowTitle("Hello"); box->setText("You clicked !"); box->show(); } And in my header files : public slots: void doSomething(); Apologies if my terminology is off, I'm new to C++. Could anyone help ? Also there does not seem to be a way to format code in this message. - SGaist Lifetime Qt Champion last edited by Hi and welcome to devnet, For Signals and Slots to work you need an event loop running, usually through the call of QApplication's exec function. Do you have that ? Yes there is, the forum uses Markdown, you can surround your code with three back ticks (three back ticks, new line(s) with code and again three back ticks after a new line after your last code line) Hello there ! Thanks for your reply. I edited my message, thanks for the tip. Currently I am just playing with some examples so I am inserting code to see what it does layout-wise. Then later I can start proper. I don't at this stage know how to get a loop running, but I will look into it. - SGaist Lifetime Qt Champion last edited by The shortest way to get started is to create a new widget project and you'll have everything you need. Does my code above look right ? In this code I am playing with there are already SIGNALS and SLOTS being used. I'm just wondering if my code above is okay. After spending a few hours trying to get one button to work I'm losing the will to live ;) - JKSH Moderators last edited by Does my code above look right ? In this code I am playing with there are already SIGNALS and SLOTS being used. I'm just wondering if my code above is okay. Unfortunately, you haven't posted enough code for us to tell if your code is right or not. Without seeing your code, my advice is: - Make sure your DzSceneInfoPaneheader contains the Q_OBJECTmacro. If you don't, you can't define your own signals or slots. - Always pay attention to error and warning messages that you get. They contain valuable information about what's wrong. After spending a few hours trying to get one button to work I'm losing the will to live ;) I suggest following these tutorials. They are extremely simple, and use QPushButton: - -- This just makes a QPushButton that does nothing - -- This shows you how to connect a QPushButton to another class's slot At the botton of Tutorial 1, it says "If you have typed in the source code manually, you will need to follow these instructions"... ignore that part. Yes thank you, I think you are both right. I will have to start from fresh and learn step by step. Thanks to you both for helping, I appreciate it :) - jsulm Qt Champions 2018 last edited by You have a memory leak in: void DzSceneInfoPane::doSomething() { QMessageBox* box = new QMessageBox(); box->setWindowTitle("Hello"); box->setText("You clicked !"); box->show(); } You never delete box. I started fresh and am getting there in other areas, but this darned button is still a problem. In the examples you kindly posted there is this : #include <QApplication> #include <QPushButton> int main(int argc, char *argv[]) { QApplication app(argc, argv); QPushButton hello("Hello world!"); hello.resize(100, 30); hello.show(); return app.exec(); } Which is fine, but the examples I am working from do not use main. Other wise I have set things up as suggested, but I get nada. Here's my code... edit the forum formatting does not seem to work today. CPP : #include "myfirstplugin.h" #include <QtCore/QObject> #include <QtGui/QPushButton> #include <QtGui/QVBoxLayout> #include "dzapp.h" #include "dzbone.h" #include "dzfacetmesh.h" #include "dzhelpmgr.h" #include "dzobject.h" #include "dzscene.h" #include "dzshape.h" #include "dzskeleton.h" #include "dzstyle.h" #include "dzplugin.h" myFirstPlugin::myFirstPlugin() : DzPane("Hello there") {GridLayout *gLay = new QGridLayout(groupBox); gLay->addWidget(radio1); gLay->addWidget(radio2); gLay->addWidget(radio3); //Create QPushButton QPushButton *button1 = new QPushButton(tr("Button 1")); button1->setObjectName("button1"); QMetaObject::connectSlotsByName(this); // Define the layout for the pane QVBoxLayout *mainLyt = new QVBoxLayout(); mainLyt->addWidget(groupBox); mainLyt->addWidget(button1); mainLyt->addStretch(1); // Set the layout for the pane setLayout(mainLyt); showPane(); connect(button1, SIGNAL(pressed()), this, SLOT(doThis())); } myFirstPlugin::~myFirstPlugin(){ } void myFirstPlugin::doThis() { QMessageBox* box = new QMessageBox(); box->setWindowTitle("Hello"); box->setText("You clicked !"); box->show(); } And my header : #ifndef MYFIRSTPLUGIN_H #define MYFIRSTPLUGIN_H #include "dzpane.h" #include "dzaction.h" #include <QtGui/QPushButton> class myFirstPluginAction : public DzPaneAction { Q_OBJECT public: myFirstPluginAction() : DzPaneAction("myFirstPlugin") { } }; class myFirstPlugin : public DzPane { Q_OBJECT public: myFirstPlugin(); ~myFirstPlugin(); public slots: void doThis(); private: QPushButton *button1; }; #endif // MYFIRSTPLUGIN_H [edit: fixed coding tags, three back-ticks SGaist] hi If you don't have a main then you most likely don't have int main(int argc, char *argv[]) { QApplication app(argc, argv); return app.exec(); } the app.exec() is an event loop and it makes signals and slot works. So if its not running then no "click" will ever work. So it seem you are trying to make plugin for another program. Does this program expect it to be a DLL ? Are the samples you are using, for building a plugin with qt? when there is no main, then no much will happen when you run it. No I don't have that. But the samples I was working from are using SLOTS and SIGNALS without it. It's a plugin yes - it's for Daz Studio, a graphics program. It uses DLLs yes. I can successfully build the DLL and it loads and will show my widgets within the main program. But whatever I try I can't get it to detect when I click on the button. Maybe there is a library I am missing, but as far as I can tell I'm using all the same includes as the examples that have the working events. @mrmorph Ok so the examples are for Qt ? And all loads in Daz Studio and you can see the button etc? I assume this connect(button1, SIGNAL(pressed()), this, SLOT(doThis())); is the button in question ? try output if connect is ok. bool status = connect(button1, SIGNAL(pressed()), this, SLOT(doThis())); if (!status) QMessageBox... to see if the actual connect that fails Do you have a link for the examples ? Must be something simple missing. Hi, thanks for trying to help. And sorry yes, the samples are for Daz Studio yes and they work fine. Everything loads yes and I can see the radio buttons and button - they are in a window... as in this snip... Image to show it working in the main program... button1 is the one I am trying to get to work yes. I inserted this as suggested. bool status = connect(button1, SIGNAL(pressed()), this, SLOT(doThis())); if (status) { QMessageBox* box = new QMessageBox(); box->setWindowTitle("Hello"); box->setText("Works"); box->show(); } else { QMessageBox* box = new QMessageBox(); box->setWindowTitle("Hello"); box->setText("Does not work"); box->show(); } The connect appears to fail as I get 'Does not work'. Thanks for formatting my previous code properly, I got the wrong key. EDIT Opps, you wanted the samples. They are free here, but you would need to register. I don't think I can distribute the files myself. @mrmorph said: Hmm, one note QPushButton *button1 = new QPushButton(tr("Button 1")); you make new one but in .h you have one in class myFirstPlugin so should it be button1 = new QPushButton(tr("Button 1")); ? also, since it fails, (the connect) we need to find out why could you try the new syntax as it catch stuff compile time connect(button1, &QPushButton::released, this, &myFirstPlugin::doThis); Tried button1 = new QPushButton(tr("Button 1")); But still the same, does not work. .
https://forum.qt.io/topic/60791/need-a-little-help-with-qpushbutton-connect/10
CC-MAIN-2019-43
refinedweb
1,368
66.44
Business Law Ask a Business Lawyer. Get Business Law Questions Answered ASAP. Hello. I'll try to provide you with this information. To whom is the IRS notice addressed? The LLC or the members by name? Are you the Managing Member designated by the Operating Agreement. What does the Operating agreement say, if anything about Member liability? I haven't seen the notice yet. i guess it's for the LLC. My partner hasn't provided me a copy of the notice. The notice was sent to his address. Our relationship has strained and so is not co operating with me. The operating agreement has an indemnification clause that indemnifies the TMP but only if the majority of the votes support. In our scenario our LLC is deadlocked (50% each). My partner is saying I committed a breach of fiduciary duty since I didn't inform him about the tax return before signing. Since the error is huge ($40K in 2011 and $160K in 2012), would IRS assess a fradulent/negligent penalty even though we amended our tax returns voluntarily? Our original CPA had made several errors - for eg deducted expenses even though the business hasn't opened yet. Isn't it the fiduciary duty of my partner to share IRS notices? Will the IRS come after me (as TMP) or the LLC if we don't pay the penalty before the deadline? Thanks for the fiduciary duty link. Regarding my earlier question: >Isn't it the fiduciary duty of my partner to share IRS notices? Will the IRS come after me (as TMP) or the LLC if we don't pay the penalty before the deadline? I am concerned if I will be in trouble if we don't pay the penalty before the deadline. If the IRS assess penalties, would all the members of the LLC be responsible or just the TMP (I was the TMP for the original returns)? Actually our LLC has 4 members = 2 couples. My partner is blaming me for the incorrect tax returns and I am afraid he may file a lawsuit against me regarding this. In the eyes of IRS aren't all members of the LLC liable? The CPA who we hired for amending tax returns is a close friend of my partner. Isn't the duty of this CPA to defend the LLC? I am concerned if my partner and the CPA (who amended tax returns) would join together and get me into trouble with the IRS. Is this possible? If so, what actions should I take? Do I need to hire an attorney? One final question/clarification. I have already amended my personal tax returns and paid all taxes, interest and penalty. So, at the LLC level, since there's no taxes, can IRS assess penalty on the original tax return (that was already amended)? We amended all our tax returns (both LLC and personal) voluntarily and never received any notice or audit from the IRS/ Sorry for not being clear. What I am referring to is the penalty at the LLC level. Can they assess penalty on the original partnership return that was already amended? Since IRS knows there were errors in the original partnership return, why should they review the original return and assess).
http://www.justanswer.com/business-law/814x6-notice-penalty-irs.html
CC-MAIN-2016-22
refinedweb
547
66.44
Coffeehouse Thread12 posts Forum Read Only This forum has been made read only by the site admins. No new threads or comments can be added. IE11 part of 'Blue' Back to Forum: Coffeehouse Conversation locked This conversation has been locked by the site admins. No new comments can be made. I'm guessing this won't be available for Win7 IE11 has a very intereting F12 tools, Large part of it is written in TypeScript, and it support TypeScript debugging natively. And it looks Modern But the most interesting part of Blue is FileManager, desktop is not needed anymore @felix9: ...and a seriously improved html5score, according to winbeta.org (320). Wonder what they added. With a rumored release date just a few months away, limiting IE11 to Windows 8+ would mean confining it to a pretty small niche. I don't think that's going to happen. I guess whatever has to be done to get people interested in Windows, has to get done. Fake leaks, check. Little tiles, check. How about some more promo pricing? Install Windows 8 on the 8th at 8:00 for $8! Create 8 Windows 8 Apps and get 8 Windows 8 licenses for your friends! Trade your iPhone in for a Windows 8 phone, get a Windows 8 license. Take the fear factor route, go to a Microsoft store and eat 8 meal worms, get a license. This is really not that hard folks. -Josh I think I'm obliged to say "It's all lies, I tell you!" and claim this build was put together by some folks with too much time and a copy of ResHacker. will FIleManager have namespace extensions? IE10 has an "install new versions automatically" checkbox. Would that mean IE11 would be automatically installed? Which is what chrome does, no ? That should be rephrased as "Install new compatible versions automatically" - I would assume IE10 for 7 and 8 will get IE11 as it is made available for their respective systems, when WindowsNext is released don't expect IE11 to be available immediately, we still had to wait months for IE10 to be made available for 7 (and many were sorely disappointed when it wasn't made available for Vista). There's also debate about whether major releases like IE11 should be installed automatically like Chrome is, or if we'll stick a longer cadence (compared to Chrome and Firefox, but still far shorter than the "once every 2 years" cycle we currently have). Try as we might, there's still going to be terrible intranet webapplications that only work in a specific version of IE. For now, just assume that the checkbox is a shortcut for the "Always install Critical and Recommended Updates" in Windows Update, so minor and point releases will be installed, but you'll get a warning for any major releases. As for when IE11 actually does come out - we'll wait and see. Nothing, IE10 has also a score of 320. Apparently, IE11 makes changes to the default agent string, and even includes a mode to impersonate Firefox. The assumption seems to be that this is an attempt to stop old 'MSIE' based hacks from working, and leave IE-specific sites behind. I thought the delay was because some DirectX components had to be backported?
https://channel9.msdn.com/Forums/Coffeehouse/IE11-part-of-Blue
CC-MAIN-2017-51
refinedweb
550
72.46
I was punishing myself by using fetcher again and I ran into the look ahead bias. Now, look ahead bias is only relevant for backtesting not for live trading as you always want the latest value as we cannot look into the future. So as far as I understand all these sample algorithms will create a lag in the data the moment you will put it live. Should we not assume that any algo we develop will be able to run live and therefor all examples should have a get_environment test something like this when they time shift? def fixit(df): df = df.rename(columns={'Value': 'Signal'}) if get_environment('arena') == 'backtest': df = df.tshift(3, freq='M') df.fillna(method='ffill') df['symbol'] = 'IPI' df = df[['Signal','symbol', 'sid']] return df
https://www.quantopian.com/posts/fetcher-and-time-shift-improving-q-examples
CC-MAIN-2018-43
refinedweb
131
65.12
Note: A new version of this control is available here. All further updates to the control are posted to the project's page on SourceForge.net. CAPTCHA is short for "completely automated public Turing test to tell computers and humans apart", and is the most popular technique used to prevent computer programs from sending automated requests to Web servers. These can be for meta-searching search engines, doing dictionary attacks in login pages, or sending spam using mail servers. You might have seen CAPTCHA images in the Google register page when you do availability check on too many usernames, or in Yahoo!, PayPal, or other big Web sites. The first CAPTCHA image generator I used was written using the CAPTCHA Image article by BrainJar. After that, I read the MSDN HIP challenge article and made many changes to my code. The code used in this control is based on the MSDN HIP article, but some parts are not changed. SetCaptcha()method. This method does everything needed, using other classes. RandomTextclass generates cryptographically-strong random texts. RNGclass generates cryptographically-strong random numbers. CaptchaImageclass creates the image. Encryptorclass is used for encryption and decryption. We will discuss some of them later in this article. The main method in the control code is SetCaptcha() which is executed whenever we need to change the picture or load it. private void SetCaptcha() { // Set image string s = RandomText.Generate(); // Encrypt string ens = Encryptor.Encrypt(s, "srgerg$%^bg", Convert.FromBase64String("srfjuoxp")); // Save to session Session["captcha"] = s.ToLower(); // Set URL imgCaptcha.ImageUrl = "~/Captcha.ashx?w=305&h=92&c=" + ens + "&bc=" + color; } This encrypts a random text using an encryption key which is hard-coded in this code. To prevent hard coding, you can store this information in the database and retrieve it when needed. This method also saves text to the session for comparison with user input. To make the control style match with the page style, there are two properties used: The Style property sets the control style and the background color sets the background color for the generated image. Two event handlers handle the Success and Failure events. We use a delegate for these handlers. public delegate void CaptchaEventHandler(); When the user submits the form, btnSubmit_Click() validates the user input. protected void btnSubmit_Click(object s, EventArgs e) { if (Session["captcha"] != null && txtCaptcha.Text.ToLower() == Session["captcha"].ToString()) { if (success != null) { success(); } } else { txtCaptcha.Text = ""; SetCaptcha(); if (failure != null) { failure(); } } } The RNG class generates cryptographically-strong random numbers, using the RNGCryptoServiceProvider class. public static class RNG { private static byte[] randb = new byte[4]; private static RNGCryptoServiceProvider rand = new RNGCryptoServiceProvider(); public static int Next() { rand.GetBytes(randb); int value = BitConverter.ToInt32(randb, 0); if (value < 0) value = -value; return value; } public static int Next(int max) { // ... } public static int Next(int min, int max) { // ... } } To create a cryptographically-strong random text, we use the RNG class to randomly select each character from the array of characters. This is a very useful technique I first saw in the CryptoPasswordGenerator. public static class RandomText { public static string Generate() { // Generate random text string s = ""; char[] chars = "abcdefghijklmnopqrstuvw".ToCharArray() + "xyzABCDEFGHIJKLMNOPQRSTUV".ToCharArray() + "WXYZ0123456789".ToCharArray(); int index; int lenght = RNG.Next(4, 6); for (int i = 0; i < lenght; i++) { index = RNG.Next(chars.Length - 1); s += chars[index].ToString(); } return s; } } This is the heart of our control. It gets the image text, dimensions, and background color, and generates the image. The main method is GenerateImage() which generates the image using the information we provide. private void GenerateImage() { // Create a new 32-bit bitmap image. Bitmap bitmap = new Bitmap(this.width, this.height, PixelFormat.Format32bppArgb); // Create a graphics object for drawing. Graphics g = Graphics.FromImage(bitmap); Rectangle rect = new Rectangle(0, 0, this.width, this.height); g.SmoothingMode = SmoothingMode.AntiAlias; // Fill background using (SolidBrush b = new SolidBrush(bc)) { g.FillRectangle(b, rect); } First, we declare Bitmap and Graphics objects, and a Rectangle whose dimensions are the same as the Bitmap object. Then, using a SolidBrush, we fill the background. Now, we need to set the font size to fit within the image. The font family is chosen randomly from the fonts family collection. // Set up the text font. int emSize = (int)(this.width * 2 / text.Length); FontFamily family = fonts[RNG.Next(fonts.Length - 1)]; Font font = new Font(family, emSize); // Adjust the font size until // the text fits within the image. SizeF measured = new SizeF(0, 0); SizeF workingSize = new SizeF(this.width, this.height); while (emSize > 2 && (measured = g.MeasureString(text, font)).Width > workingSize.Width || measured.Height > workingSize.Height) { font.Dispose(); font = new Font(family, emSize -= 2); } We calculate a size for the font by multiplying the image width by 2 and then dividing it by the text length. It works well in most cases; for example, when the text length is 4 and the width is 8 pixels, the font size would be 4. But if the text length is 1, the font size would be 16. Also, when the image height is too short, the text will not fit within the image. When the calculated size is less than 2, we can be sure that it fits within the image except when the image height is very short, which we don't pay attention to. But when it is bigger than 2, we must make sure that the text fits within the image. We do that by getting the width and height the text needs for the selected font and size. If the width or height doesn't fit, then we reduce the size and check again and again till it fits. The next step would be adding the text. It is done using a GraphicsPath object. GraphicsPath path = new GraphicsPath(); path.AddString(this.text, font.FontFamily, (int)font.Style, font.Size, rect, format); The most important part is colorizing and distorting the text. We set the text color using RGB codes, each one using a random value between 0 and 255. A random color is then generated successfully. Now, we must check if the color is visible within the background color. It's done by calculating the difference between the text color R channel and the background color R channel. If it is less than 20, we regenerate the R channel value. // Set font color to a color that is visible within background color int bcR = Convert.ToInt32(bc.R); int red = random.Next(256), green = random.Next(256), blue = random.Next(256); // This prevents font color from being near the bg color while (red >= bcR && red - 20 < bcR || red < bcR && red + 20 > bcR) { red = random.Next(0, 255); } SolidBrush sBrush = new SolidBrush(Color.FromArgb(red, green, blue)); g.FillPath(sBrush, path); Lastly, we distort the image by changing the pixel colors. For each pixel, we select a random picture from the original picture (the Bitmap object which we don't change) and set a random pixel color for it. Since distort is random, we see different distortions. // Iterate over every pixel double distort = random.Next(5, 20) * (random.Next(10) == 1 ? 1 : -1); // Copy the image so that we're always using the original for // source color using (Bitmap copy = (Bitmap)bitmap.Clone()) { for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { // Adds a simple wave int newX = (int)(x + (distort * Math.Sin(Math.PI * y / 84.0))); int newY = (int)(y + (distort * Math.Cos(Math.PI * x / 44.0))); if (newX < 0 || newX >= width) newX = 0; if (newY < 0 || newY >= height) newY = 0; bitmap.SetPixel(x, y, copy.GetPixel(newX, newY)); } } } This HTTP handler gets the information needed to create a CAPTCHA image, and returns one. Note that this handler receives the encrypted text and has the key to decrypt it. public class Captcha : IHttpHandler { public void ProcessRequest (HttpContext context) { context.Response.ContentType = "image/jpeg"; context.Response.Cache.SetCacheability(HttpCacheability.NoCache); context.Response.BufferOutput = false; // Get text string s = "No Text"; if (context.Request.QueryString["c"] != null && context.Request.QueryString["c"] != "") { string enc = context.Request.QueryString["c"].ToString(); // space was replaced with + to prevent error enc = enc.Replace(" ", "+"); try { s = Encryptor.Decrypt(enc, "srgerg$%^bg", Convert.FromBase64String("srfjuoxp")); } catch { } } // Get dimensions int w = 120; int h = 50; // Width if (context.Request.QueryString["w"] != null && context.Request.QueryString["w"] != "") { try { w = Convert.ToInt32(context.Request.QueryString["w"]); } catch { } } // Height if (context.Request.QueryString["h"] != null && context.Request.QueryString["h"] != "") { try { h = Convert.ToInt32(context.Request.QueryString["h"]); } catch { } } // Color Color Bc = Color.White; if (context.Request.QueryString["bc"] != null && context.Request.QueryString["bc"] != "") { try { string bc = context.Request.QueryString["bc"]. ToString().Insert(0, "#"); Bc = ColorTranslator.FromHtml(bc); } catch { } } // Generate image CaptchaImage ci = new CaptchaImage(s, Bc, w, h); // Return ci.Image.Save(context.Response.OutputStream, ImageFormat.Jpeg); // Dispose ci.Dispose(); } public bool IsReusable { get { return true; } } } There are only two points to be noted about this class: When the control loads, it executes the SetCaptcha() method. The RandomText class generates a random text, and we save the text to the Session object and encrypt it. This method then generates the image URL using the dimensions, the encrypted text, and the background color information. You may see an example of using this control in the source code. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/custom-controls/CaptchaNET_2.aspx
crawl-002
refinedweb
1,542
60.41
If you’ve ever tried Golang, you know that writing services in Go are very easy. We need a few lines of code so that you can start the http service. But what needs to be added if we want to prepare such a production application? Let’s look at this using an example of service ready to run in Kubernetes. All steps in this article can be found in a single tag, or you can follow the examples of the article commit by commit. Step 1. The simplest Golang web service So, we have a very simple application: package main import ( "fmt" "net/http" ) func main() { http.HandleFunc("/home", func(w http.ResponseWriter, _ *http.Request) { fmt.Fprint(w, "Hello! Your request was processed.") }, ) http.ListenAndServe(":8000", nil) } If we want to try running it, go run main.go will suffice. We can check how this service works with curl -i. But when we run this application, we see no information about its state in the terminal. Step 2. Adding logging with Golang First of all, let’s add logging to understand what’s going on with the service and to be able to log errors or other critical situations. We will use the most straightforward logger from the Go standard library in this example. Still, more complex solutions may be of interest for an accurate service running in production, such as glog or logrus. We may be interested in 3 situations: when the service starts, when the service is ready to process requests, and when http.ListenAndServe returns an error. The result will be something like this: func main() { log.Print("Starting the service...") http.HandleFunc("/home", func(w http.ResponseWriter, _ *http.Request) { fmt.Fprint(w, " Hello! Your request was processed.") }, ) log.Print("The service is ready to listen and serve.") log.Fatal(http.ListenAndServe(":8000", nil)) } Better! Step 3: Adding a Router For this application, we’ll most likely want to use a router to make it easier to handle different URIs, HTTP methods, or other rules. There is no router in the Go standard library, so let’s try gorilla/mux, which is entirely compatible with the net/http standard library. It makes sense to put everything related to routing into a separate package. Let’s move the initialization and setting of the routing rules and the handler functions to the handlers package (you can see the complete changes here). Let’s add a Router function that will return the configured router, and a home function that will handle the rule for the /home path. I prefer to separate such functions into separate files: handlers/handlers.go: package handlers import ( "github.com/gorilla/mux" ) // Router register necessary routes and returns an instance of a router. func Router() *mux.Router { r := mux.NewRouter() r.HandleFunc("/home", home).Methods("GET") return r } handlers/home.go: package handlers import ( "fmt" "net/http" ) // home is a simple HTTP handler function which writes a response. func home(w http.ResponseWriter, _ *http.Request) { fmt.Fprint(w, "Hello! Your request was processed.") } We also need a small change to main.go:
https://golang.ch/how-to-write-a-golang-micros-service-for-kubernetes/
CC-MAIN-2022-40
refinedweb
520
68.47
'Improved' Nickie's solution + Commenting solution in Clear category for Find Sequence by shasa.gimblett from itertools import groupby from numpy import diagonal, array, fliplr C = 4 ex = lambda l: any(len(list(c)) >= C for x, c in groupby(l)) """ This expression returns true if there is a sequence of 4+ in the 'line' (row,col,diag) list(c) in groupby(l) returns a list of consecutive elements in the 'line'. Directly from groupby docs... [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D any() simply returns TRUE is any of the expressions inside it evaluate to be TRUE, FALSE otherwise """ def checkio(A): """ I used another users solution (Nickie) to create this solution that I thought was a little clearer. In each sub expression in the return expression the grid is searched for a sequence in the rows, cols, '/' diangonal and then the '\' diagonal. The searching in the rows an columns is quite simple. list(*zip(A)) simply transposes the matrix. For the diagonals I reccomend inspecting the documentation for numpy.diagonal as there is a lot to explain. """ n = len(A) # Search rows, cols, diagonal '\', diagonal '/' return any(ex(row) for row in A) \ or any(ex(col) for col in list(zip(*A))) \ or any(ex(array(A).diagonal(diag).tolist()) for diag in range(-(n-4),n-3)) \ or any(ex(fliplr(A).diagonal(diag).tolist()) for diag in range(-(n-4),n-3)) Nov. 14, 2016 Forum Price Global Activity ClassRoom Manager Leaderboard Coding games Python programming for beginners
https://py.checkio.org/mission/find-sequence/publications/shasa.gimblett/python-3/improved-nickies-solution-commenting/share/abf13a7a712d81ed9afaa5d378df0b22/
CC-MAIN-2020-05
refinedweb
258
54.73
- Code: Select all import getopt import sys version = '1.0' verbose = False output_filename = 'default.out' print 'ARGV :', sys.argv[1:] options, remainder = getopt.getopt(sys.argv[1:], 'o:v', ['output=', 'verbose', 'version=', ]) print 'OPTIONS :', options for opt, arg in options: if opt in ('-o', '--output'): output_filename = arg elif opt in ('-v', '--verbose'): verbose = True elif opt == '--version': version = arg print 'VERSION :', version print 'VERBOSE :', verbose print 'OUTPUT :', output_filename print 'REMAINING :', remainder This code works perfectly on my Win 7 full admin rights laptop, but on my laptop where I have limited permissions, testing the above code gives me zero command line options. Would my windows permissions be causing this issue? I thought I'd tested this on that laptop successfully in the past when I also had admin rights, so the permissions is the only thing I can think of that would cause it to fail. Has anyone run into this? Thanks, Steve
http://python-forum.org/viewtopic.php?f=10&t=2856&p=3776
CC-MAIN-2014-42
refinedweb
153
61.77
This is the mail archive of the [email protected] mailing list for the libstdc++ project. > According to section 17.4.1.2 clause 5 of ISO 14882:1998 "Names which > are defined as macros in C shall be defined as macros in the C++ > Standard Library, even if C grants license for implementation as > functions." then there is a bit where it specifies, exactly, what names are macros. setjmp is one of them. > Thus <csetjmp> [i.e. ./std/csetjmp or the file it directly includes > (Aside, is it: ./c/bits/std_csetjmp.h or ./c_std/bits/std_csetjmp.h? ;-)] c_std/bits/std_csetjmp > This code should go in one of those files (I personally think it > should go right after the ``#include <setjmp.h>''): > > #ifndef setjmp > #define setjmp(env) setjmp (env) > #endif ok. but do all the required names. You'll see a set of 17_intro/*.cc files that test for this stuff: do them all please -benjamin
http://gcc.gnu.org/ml/libstdc++/2001-03/msg00244.html
crawl-001
refinedweb
157
77.74
#include <ktimezone.h> Inherited by KSystemTimeZonesPrivate. Detailed Description The KTimeZones class represents a time zone database which consists of a collection of individual time zone definitions. Each individual time zone is defined in a KTimeZone instance, which provides generic support for private or system time zones. The time zones in the collection are indexed by name, which must be unique within the collection. Different time zone sources could define the same time zone differently. (For example, a calendar file originating from another system might hold its own time zone definitions, which may not necessarily be identical to your own system's definitions.) In order to keep conflicting definitions separate, it will often be necessary when dealing with multiple time zone sources to create a separate KTimeZones instance for each source collection. If you want to access system time zones, use the KSystemTimeZones class. Represents a time zone database or collection Definition at line 308 of file ktimezone.h. Member Typedef Documentation Map of KTimeZone instances, indexed by time zone name. Definition at line 323 of file ktimezone.h. Member Function Documentation Adds a time zone to the collection. The time zone's name must be unique within the collection. - Parameters - - Returns trueif successful, falseif zone's name duplicates one already in the collection Definition at line 67 of file ktimezone.cpp. Clears the collection. Definition at line 105 of file ktimezone.cpp. Removes a time zone from the collection. - Parameters - - Returns - the time zone which was removed, or invalid if not found Definition at line 79 of file ktimezone.cpp. Removes a time zone from the collection. - Parameters - - Returns - the time zone which was removed, or invalid if not found Definition at line 92 of file ktimezone.cpp. Returns the time zone with the given name. - Parameters - - Returns - time zone, or 0 if not found Definition at line 110 of file ktimezone.cpp. Returns all the time zones defined in this collection. - Returns - time zone collection, indexed by time zone name Definition at line 62 of file ktimezone.cpp. The documentation for this class was generated from the following files: Documentation copyright © 1996-2020 The KDE developers. Generated on Sun Feb 16 2020 04:49:17 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
https://api.kde.org/frameworks/kdelibs4support/html/classKTimeZones.html
CC-MAIN-2020-10
refinedweb
385
57.67
cookielib and ClientCookie Handling Cookies in Python Note There is a French translation of this article - cookielib et ClientCookie. Follow my exploration of living a spiritual life and finding the kingdom at Unpolished Musings. Contents Introduction Some websites can't be browsed without cookies enabled. They are used for storing session information or confirming a users identity. They are sometimes used as an alternative to a scheme like basic authentication. The right Python module for fetching webpages, or other resources across the internet, is usually urllib2. It offers a simple interface for fetching resources using a variety of protocols. For a good introuduction to urllib2, browse over to the urllib2 tutorial. By default it doesn't handle cookies though. You need to use an additional library to do this. In Python 2.4 this is called cookielib and is part of the standard library. Prior to Python 2.4 it existed as ClientCookie, but it's not a drop in replacement. In Python 2.4 some of the function of ClientCookie has been moved into urllib2. It is possible to write code that will work the same in these situations. This article illustrates how to use cookielib/ClientCookie and shows code for fetching URIs that will work unchanged on : - a machine with Python 2.4 (and cookielib) - a machine with ClientCookie - a machine with neither Where either cookielib or ClientCookie is available the cookies will be saved in a file. On a machine with neither, URLs will still be fetched - but any cookies sent won't be handled or saved. Cookies When a website sends a page to a client, it sends a set of headers first that describe that http transaction. One of those headers can contain a line of text known as the cookie. If you fetch another page from the same server [1] then the cookie should be sent back to the server as one of the request headers. This allows the cookie to store information that the server can use to identify you, or the session you are engaged in. Obviously for some processes, this information is essential. This was first supported by the netscape browser, and so the first spec was called the Netscape cookie protocol. It was described in RFC 2109, and then there was an attempt to extend this in the form of RFC 2965 - which has never been widely used. In reality, the protocol implemented by all the major browsers is still based on the Netscape protocol. By now it only bears a passing resemblance to the protocol sketched out in the original document [2]. Conditional Imports A version of this code has been printed in the second edition of the Python Cookbook. One of the reasons it was included in the cookbook is that it illustrates an interesting programming idiom called conditional import. It's particularly important in this recipe because we need the behaviour of the underlying code to be slightly different depending on which library is available. The interface we present to the programmer is the same in all three cases though. Pay attention to the first chunk of code which attempts to import the cookie libraries. This has to setup different behaviour depending on which library it imports. The pattern it uses is : try: import library except ImportError: # library not available setup alternate behaviour .... else: # library is available establish normal behaviour .... We use the name of the library we are importing as a marker, by setting it to None at the start. Later in the code we can tell if the library is available by using code like the following : # we know library is not available provide different or reduced function .... else: # library is available ..... The Code The code shown in this article can be downloaded from the Voidspace Recipebook. In this first section we attempt to import a cookie handling library. We first try cookielib, then ClientCookie. If neither is available then we default to objects from urllib2. import urllib2 We've now imported the relevant library. Whichever library is being used the name``urlopen`` is bound to the right function for retrieving URLs. The name Request is bound to the right class for creating Request objects. If we successfully managed to import a cookie handling library then the name cj is bound to a CookieJar instance. Installing the CookieJar Now we need to get our CookieJar installed in the default opener for fetching URLs. This means that all calls to urlopen will have their cookies handled. The actual handling is done by an object called an HTTPCookieProcessor. If the terms opener and handler are new to you, then read either the urllib2 docs or my urllib2 tutorial. All this is either done in ClientCookie or in urllib2, depending on which module we successfully imported. #) If one of the cookie libraries is available, any call to urlopen will now handle cookies using the CookieJar instance we've created. Fetching Webpages So having done all the dirty work, we're ready to fetch our webpages. Any cookies sent will be handled. THis means they will be stored in the CookieJar, returned to the server when appropriate, and expired correctly as well. Because we may want to restart the same session next time, we save the cookies when we've finished. # an example url that sets a cookie, # try different urls here and see the cookie collection you can make ! If you want to adapt this code for yourself it is worth noting the following things : We can always tell which import was successful. : - If we are using cookielib then cookielib is not None. - If we are using ClientCookie then ClientCookie is not None. - If we are using neither then cj is None. Request is the name bound to the appropriate function for creating Request objects and urlopen is the name bound to the appropriate function for opening URLs whichever library we have used !! For buying techie books, science fiction, computer hardware or the latest gadgets: visit The Voidspace Amazon Store. Last edited Tue Aug 2 00:51:34 2011. Counter...
http://www.voidspace.org.uk/python/articles/cookielib.shtml
CC-MAIN-2017-04
refinedweb
1,012
64.61
This forum has migrated to Microsoft Q&A. Visit Microsoft Q&A to post new questions. Hello! I'm developing a smart device application (written with Visual Basic Language) for a device who has Win Ce.NET 4.2 Operating System. This application: 1. runs well on the emulator of Visual Studio 2003 2. take data from a database installed on sql Server 2000 (infact I frequently use function of System.Data.SqlClient namespace like connect , command ,etc ) when I connect the device to my pc and I try to launch the debug , this fails. I have copied the .exe on device and when I launch it it gives me the following message : "Could not find resource assembly" Note : I have yet installed on device .Net Compact Framework. Please help me because On this application I'm doing my degree thesis !!! I'm sorry for my weak english but I'm Italian ! Thank you
https://social.msdn.microsoft.com/Forums/en-US/46b7caaa-47b7-43c6-8a0e-7b8242309e96/question-about-smart-device-application?forum=netfxcompact
CC-MAIN-2021-49
refinedweb
154
68.67
eventy 0.2.5 Easy to use event-loop dispatcher mechanism To use this package, run the following command in your project's root directory: Manual usage Put the following dependency into your project's dependences section: Eventy Easy-to-use event-loop dispatcher framework for D-based applications Getting started The engine The first thing every Eventy-based application will need is an instance of the Engine. This provides the user with the basic event-loop functionality that eventy provides. It's the core of the whole framework that exists to have event-triggers ingested into its queues, checking those queues and one by one dispatching each signal handler that is associated with each queue on each item in the queue. The simplest way to get a new engine up and running is as follow: Engine engine = new Engine(); engine.start(); This will create a new engine initializing all of its internals and then start it as well. Queues Queues are as they sound, a list containing items. Each queue has a unique ID which we can choose. The items of each queue will be the events that are pushed into the engine. An event has an ID associated with it which tells the engine which queue it must be added to! Let's create two queues, with IDs 1 and 2: engine.addQueue(1); engine.addQueue(2); This will tell the engine to create two new queues with tags 1 and 2 respectively. Event handlers We're almost done. So far we have created a new engine for handling our queues and the triggering of events. What is missing is something to handle those queues when they have something added to them, we call this an "event handler" in computer science but this is Eventy, and in Eventy this is known as a Signal. We're going to create a signal that can handle both the queues and perform the same task for both of them. We do this by creating a class that inherits from the Signal base type: class SignalHandler1 : Signal { this() { super([1,2]); } public override void handler(Event e) { import std.stdio; writeln("Running event", e.id); } } We need to tell the Signal class two things: - What queue IDs it will handle - What to run for said queues The first of these two is very easy, this is what you see in the constructor this(): this() { super([1,2]); } The super([1,2]) call tells the Signal class that this signal handler handles those two IDs, namely 1 and 2. As for what to run, that is specified by overriding the void handler(Event) method in the Signal class. In our case we make it write to the console the ID of the event (which would end up either being 1 or 2 seeing as this handler is only registered for those queue IDs). import std.stdio; writeln("Running event", e.id); We're almost there, trust me. The last thing to do is to register this signal handler with the engine, we do so as follows: Signal j = new SignalHandler1(); engine.addSignalHandler(j); Triggering events Now comes the fun part, you can add events into the system by pushing them to the core as follows: Event eTest = new Event(1); engine.push(eTest); eTest = new Event(2); engine.push(eTest); You will then see something like this: Running event1 Running event2 or: Running event1 Running event2 The reason is it depends on which process gets shceduled by the Linux kernel first, this is because new threads (special types of processes) are spanwed on the dispatch of each event. - Registered by Tristan B. Kildaire - 0.2.5 released 29 days ago - deavmi/eventy - LGPL v3 - Authors: - - Dependencies: - none - Versions: - Show all 17 versions - Download Stats: 0 downloads today 0 downloads this week 1 downloads this month 17 downloads total - Score: - 0.6 - Short URL: - eventy.dub.pm
https://code.dlang.org/packages/eventy
CC-MAIN-2022-27
refinedweb
653
67.28
Hitscotty + 65 comments With my solution I used modular arithmetic to calculate the position of the each element and placed them as I read from input. for(int i = 0; i < lengthOfArray; i++){ int newLocation = (i + (lengthOfArray - shiftAmount)) % lengthOfArray; a[newLocation] = in.nextInt(); } antrikshverma2 + 1 comment Neat code , thanks Hitscotty !! michellerodri247 + 0 comments The array is a part of the programming field. There are different topics related to this array destination wedding . The left rotation indicates the rotation of elements in an array. The rotation takes place in left wise. The rotation happens a single element at a time. manishdas + 5 comments hmm.. I'm surprised that worked for you. This one worked for me: str = '' length_of_array.times do |i| new_index = (i + no_of_left_rotation) % length_of_array str += "#{array[new_index]} " end puts str.strip darwin57721 + 2 comments what is the starting value of your i? (i dont know ruby). d=2, n = 10. Because if it is 0, it would be (0+2)%10 = 2. What am I getting wrong? manishdas + 1 comment The starting value of the i is 0. Looks like correct calculation to me. What result are you expecting? darwin57721 + 0 comments ha, yeah i wasn't understanding right! I made it this way, that's why I was confused. rotated[(n+i-d)%n] = a[i]. Which is analogous to yours, but calculating the index in destination. Yours is more clear I think. Thanks! Usernamer89 + 1 comment are you a mathematician? because i came out with a bit similar answer jambekardhanash1 + 2 comments why do we need i? Can you please explain? manishdas + 33 comments Based on current index (i), you need to generate new index. For example: let's say array = [1, 2, 3, 4] and k = 2, then after 2 left rotation it should be [3, 4, 1, 2] => 3 4 1 2 (space separated string output) Now let's walk through my algorithm: # Initial assignments: # array = [1, 2, 3, 4] # length_of_array = array.length = 4 # no_of_left_rotation = k = 2 # new_arr = Arra.new(length_of_array) # new_arr: [nil, nil, nil, nil] # NOTE: # length_of_array.times do |i| # is equivalent to # for(i = 0; i < length_of_array; i++) # Algorithm to calculate new index and update new array for each index (i): # new_index = (i + no_of_left_rotation) % length_of_array # new_arr[i] = array[new_index] # LOOP1: # i = 0 # new_index = (0 + 2) % 4 = 2 # new_arr[i = 0] = array[new_index = 2] = 3 # new_arr: [3, nil, nil, nil] # LOOP2: # i = 1 # new_index = (1 + 2) % 4 = 3 # new_arr[i = 1] = array[new_index = 3] = 4 # new_arr: [3, 4, nil, nil] # LOOP3: # i = 2 # new_index = (2 + 2) % 4 = 0 # new_arr[i = 2] = array[new_index = 0] = 1 # new_arr: [3, 4, 1, nil] # LOOP4: # i = 3 # new_index = (3 + 2) % 4 = 1 # new_arr[i = 3] = array[new_index = 1] = 2 # new_arr: [3, 4, 1, 2] # After final loop our new roated array is [3, 4, 1, 2] # You can return the output: # new_arr.join(' ') => 3 4 1 2 Hope that's clear. MobilityWins + 0 comments I am trying to understand this, but this is the first time I have seen value assignments that involve a val= val= anotherVal I am not quite understanding how that is supposed to work, also what is "nil" and its purpose for an array mzancanella + 1 comment if the length of the array is = 3 then it seems it won't work. p_callebat + 2 comments new_index = (i + no_of_left_rotation) % length_of_array; seems incorrect. You will see the problem if you test, for example [1,2,3,4,5] and k = 2 . I guess would be better: new_index = (i + (lengthOfArray - no_of_left_rotation)) % lengthOfArray; supertrens + 3 comments Seems like this algorith only works for small number because when the array is big enough due to long looping period u will have system "timeout" 2017A7PS0931G + 2 comments I was facing the same problem.I gave several attempts but the issue couldn't be solved. Can you please tell me how to define a loop for a set of array with so many elements as such... :) pawel_jozkow + 5 comments In java8 the problem was in String; You have to use more efficient StringBuilder instead; And of couse use only one loop to iterate over array; here is my code snippet: StringBuilder output = new StringBuilder(); for(int i = 0; i<n; i++) { b[i] = a[(i+k) % n]; output = output.append(b[i]).append(" "); } d_p_sergeev + 0 comments Better to use linked list, so no need to LOOP fully: val z = LinkedList(a.toList()) for(i in 0 until n) z.addLast(z.pollFirst()) jaya170199 + 0 comments why it is not working if we are using same array to store modified array i.e. a[i]=a[i+k)%n] sreetejayatam + 0 comments include void reverse(int *str,int length) { int start,end; for(start=0,end=length-1;start } } int main(){ int size,nor; scanf("%d %d",&size,&nor); int *str=(int *)malloc(size*sizeof(int)); for(int i=0;i<size;scanf("%d",&str[i++])); reverse(str,size); reverse(str,size-nor); reverse(str+size-nor,nor); for(int i=0;i<size;printf("%d ",str[i++])); return 0; } __raviraj__ + 3 comments include using namespace std; int main() { long int a[1000000],n,d,i,f; cin>>n>>d; for(i=0;i>a[i]; for(int j=0;j<d;j++) { f=a[0]; for(i=0;i<n;i++) { a[i]=a[i+1]; } a[n-1]=f; } for(i=0;i<n;i++) cout<<a[i]<<" "; } //this is my code and im getting time out could u please solve nsaikaly12 + 2 comments its because your solution is O(n^2) with the inner loop. Try and find an O(xn) solution and iterate over the whole array only once. __raviraj__ + 1 comment i didnt get u reddychintu + 1 comment O(n^2) means you have 2 for loops causing a greater time complexity monica_marlene_1 + 0 comments an inner loop will not cause his program to time out. I don't believe the variable n was ever initialized, so the loop is approaching a value of n that isn't defined. SBU3411348 + 0 comments static int[] rotLeft(int[] a, int d) { int j,i,p; for(j=0;j Check with this you will get what is the mistake ypu did. joelvanpatten + 0 comments I was facing the same issue in PHP. My solution worked for 9 out of 10 test cases but timed out on one of them every time. You have to re-write the solution to be less memory intensive. In my case I was using array_shift() which re-indexes the arrays, so for large arrays it uses too much memory. My solution was to use array_reverse() and then array_pop() instead, because those methods don't re-index. haroon_1993 + 0 comments This Does not suits for all entries if you make the rotation to more than 4 its fails lakshman1055 + 1 comment How to think like this ? Once the code is there I know its easy to understand.I want to know how did you know to use modulous and how did you come up thinking that logic ? thanks in advance. amrelbehairy88 + 1 comment Have you ever heard about Data Structure ? because if you do , you would probably heard about circular array. I was able to solve the question because I'm knew about circular arrays , we use % + size of array to create a cirural array , then all you need to do is to complete the puzzle to solve the problem. check this video, sasuke_10 + 1 comment Great solution. Any tips on how to know if you need to use modulus in your algorithm? I solved this problem using 2 for loops... mikehow1005 + 3 comments I figured it out by saying, I don't need to loop through this array over and over to know what the final state of the array should be. What I need to figure out is what the first element of the new array will be after I've rotated X amount of times. So if I divide the number of rotations (X) by the length of the array (lenArr) I should get the amount of times the array has been fully rotated. I don't need that, I need what the first element will be after this division operation. For that I need the remainder of that divison (the modulus). This is because after all of the full array loops are done, the remaining rotations determine what the first element in the new array will be. So you take that remainder (modulus) and that's the first element's index in the old array. For example, 24 rotations in a 5 element long array means that the first element in the new array is in the 4th index of the old array. (24 % 5 = 4) So rotate through [3, 4, 5, 6, 7] 24 times and the first element will be 7. So just take that and put it before the other elements. ([7. 3, 4, 5, 6]) Another good tip is always look for repeating patterns. It's a sign that you can simplify your code. The for loop method is just repeating the state of the array over and over: [3, 4, 5, 6, 7] [4, 5, 6, 7, 3,] [5, 6, 7, 3, 4,] [6, 7, 3, 4, 5,] [7, 3, 4, 5, 6,] [3, 4, 5, 6, 7] [4, 5, 6, 7, 3,] [5, 6, 7, 3, 4,]... You only really need to know what's happening in the final few rotations, after the last full loop. anisharya16 + 0 comments thankyou so much, it helped a lot. but can you please tell how did you think about the new index position. what did you think? rakeshreddy5566 + 1 comment simple is peace return arr[d:] +arr[0:d] morrisontech + 0 comments i is a variable used to iterate through the loop, it generally represents the index of the array that is being referenced on a particular iteration of the loop. abhash24oct + 0 comments Your code if for right rotation, and the explanation gave you right answer as the size was 4 and k =2 , so no matters you do left/right you will get same. For left it will be int newLoc= (n +(i-k))%n; zenmasterchris + 4 comments The question asks to shift a fully formed array, not to shift elements to their position as they're read in. Start with a fully formed array, then this solution does not work. cmshiyas007 + 0 comments thats what me too thinking of..was wondering why the logic writte here was arranging the array on read... andritogv + 1 comment That's exactly the point of the exercise. You have to rotate an already existing array. Turings_Ghost + 0 comments I noticed that right away. If the point was to produce printed output, then this is fine (and a lot of analysis works backward from output). But, as stated, one is supposed to shift an array, so this missed it. aubreylolandt + 0 comments this could easily be modified though by creating another array of the same size: vector b(n); for(int i = 0; i < n; i++) { b[i] = a[(i+k) % n]; } return b; buzzaldrin + 0 comments I had the same idea! Just find the starting point of the array with the shift and count on from there, taking modulo and the size of the array into account. denis_ariel + 1 comment (i + shift) % lenght Should be enough robertgbjones + 0 comments Except that describes a right shift, and specification says a left shift. You might consider left shift to be negative shift, in which case you are correct mathematically, but I'd feel much more comfortable keeping the whole calculation in positive territory. chrislucas + 3 comments modular arithmetic is cool. I solved that way too for idx in range(0, _size): indexes[(idx - shift + _size) % _size] = _list[idx] marwinko19 + 1 comment Can you please explain how that works? jericogantuangc1 + 0 comments Hello, where did this solution from? what should I study to be able to come up with solutions like this? jattilah + 6 comments Looks a lot like my C# solution: static int[] Rotate(int[] a, int n) { n %= a.Length; var ret = new int[a.Length]; for(int i = 0; i < a.Length; ++i) { ret[i] = a[(i + n) % a.Length]; } return ret; } purshottamV + 1 comment This line usefull when n >= a.Length caiocapasso + 0 comments Here's another slightly different solution. I'm assuming it would be less performant, since it uses List and then converts it to Array, but I'm not sure how much more so. static int[] rotLeft(int[] a, int d) { var result = new List<int>(); for (int i = d; i < (a.Length + d); i++) { result.Add(a[i%a.Length]); } return result.ToArray(); } sidnext2none + 3 comments I agree modular arithmetic is awesome. But, simple list slicing as follows solves too ;) def rotLeft(a, d): return a[d:]+a[:d] Turings_Ghost + 0 comments The modulus operation always returns positive. If, as in Java, it really does remainder, rather than the mathematical modulus, it can return negative. So, depends on which language. marakhakim + 2 comments What if lengthOfArray < shiftAmount? I think you should use abs value jattilah + 0 comments You deal with lengthOfArray < shiftAmount by using: shiftAmount = shiftAmount % lengthOfArray; If the array length is 4, and you're shifting 6, then you really just want to shift 2. The constraints say that shiftAmount will always be >= 1, so you don't have to worry about negative numbers. vovchuck_bogdan + 10 comments pretty simple in js: a.splice(k).concat(a.slice(0, k)).join(' ') amezolma + 2 comments Did something similar in C#.. using System; using System.Collections.Generic; using System.IO; using System.Linq; class Solution { static string rotate(int rot, int[] arr) { string left = string.Join( " ", arr.Take(rot).ToArray() ); string right = string.Join( " ", arr.Skip(rot).ToArray() ); return right + ' ' + left; } static void Main(String[] args) { string[] tokens_n = Console.ReadLine().Split(' '); int n = Convert.ToInt32(tokens_n[0]); int k = Convert.ToInt32(tokens_n[1]); string[] a_temp = Console.ReadLine().Split(' '); int[] a = Array.ConvertAll(a_temp,Int32.Parse); // rotate and return as string string result = Solution.rotate(k, a); // print result Console.WriteLine(result); } } merkman + 2 comments Or you can one line it with LINQ Console.Write(string.Join(" ", a.Skip(k).Concat(a.Take(k)).ToArray())); rahulbhansali + 1 comment While it is definitely elegant looking with a single line of code, how many times will this iterate over the array when performing 'skip', 'take' and 'concating' them? In other words, what's the complexity of this algorithm? jordandamman + 0 comments Any resources that explain how this works? I definitely see that it works, but say k is 5 in the first example and the array is 12345, it looks like we're skipping the whole array, then concatenating that whole array back to it with Take(5). What am I missing? Thank you for your time. avi_roychow + 4 comments Can any one please tell me why the below code is timing out for large data set: for(int j=0;j<k;j++) { for(int current=n-1;current>=0;current--) { if(current!=0) { if(temp!=0) { a[current-1]= a[current-1]+temp; temp= a[current-1]-temp; a[current-1]=a[current-1]-temp; } else { temp=a[current-1]; a[current-1]=a[current];//for the first time } } else//when current reaches the first element { a[n-1]=temp; } } } Console.WriteLine(string.Join(" ",a)); rishabh10 + 2 comments mine is also a brute force approach but it worked check it out if it helps you import java.io.*; import java.util.*; import java.text.*; import java.math.*; import java.util.regex.*; public class Solution { public static int[] arrayLeftRotation(int[] a, int n, int k) { int temp,i,j; for(i=0;i<k;i++){ temp=a[0]; for(j=1;j<n;j++){ a[j-1]=a[j]; } a[n-1]=temp; } return a; } public static void main(String[] args) { Scanner in = new Scanner(System.in); int n = in.nextInt(); int k = in.nextInt(); int a[] = new int[n]; for(int a_i=0; a_i < n; a_i++){ a[a_i] = in.nextInt(); } int[] output = new int[n]; output = arrayLeftRotation(a, n, k); for(int i = 0; i < n; i++) System.out.print(output[i] + " "); System.out.println(); } } pateldeep18 + 0 comments int main(){ int n; int k; int temp1, temp2; scanf("%d %d",&n,&k); int *a = malloc(sizeof(int) * n); for(int a_i = 0; a_i < n; a_i++){ scanf("%d",&a[a_i]); } k = k %n; for(int a_i = 0; a_i < k; a_i++){ temp1 = a[0]; for(int i = 1; i < n; i++){ a[i-1] = a[i]; } a[n-1] = temp1; } for(int a_i = 0; a_i < n; a_i++){ printf("%d ", a[a_i]); } return 0; } my code is the same as yours but i still time in test case 8, why is that? not_nigel + 0 comments You're not wrong but this solution is inefficient. You're solving it in O(((n-1) * k) + 2n). The solution below is in O(2n). private static void solution(int size, int shift, int[] arr) { int count = 0; for (int i = shift; i < size; i++) { System.out.print(arr[i]); System.out.print(" "); count++; } count = 0; for (int i = size - shift; i < size; i++) { System.out.print(arr[count]); if (i != size - 1) System.out.print(" "); count++; } } ash_jo4444 + 2 comments I got a timeout error for TC#8 and #9 for the same logic in Python :( Muthukumar_T + 1 comment i got time out for tc#8 in c why?????? russelljuma + 0 comments No loops. Just split and reconnect. def rotLeft(a, d): b = [] b = a[d:len(a)] + a[0:d] return b gdahis + 1 comment Because it is O(n*k), if you have a big n and a big k, it could timeout. See if you can think of an algorithm that would visit each array element only once and make it o(n). Also, is there any optimization you can make? For example: if k is bigger than n, then you don't need to do k rotations you just need to do k % n rotations and k will be much smaller, smaller than n. Example: [ 1, 2, 3, 4, 5 ] K=2, K=7=(1*5)+2, K=12=(2*5)+2, they are all equivant, leading the array to be: [3, 4, 5, 1, 2] Nitin304 + 1 comment My Solution : public static int[] arrayLeftRotation(int[] a, int n, int k) { int[] b = new int[n]; for(int i=0;i<n-k;i++){ b[i] = a[k+i]; } int l = 0; for(int i=n-k;i<n;i++){ b[i] = a[l++]; } return b; } hatem_ali64 + 1 comment[deleted] amit_feb06 + 1 comment with one for loop i have subitted the code thefonso + 1 comment in an actual interview they will ask you not to use splice or slice. had that happen ti me. _e_popov + 1 comment indeed, forgot that endgoes through the end of a sequence, so here is my solution function rotLeft(a, d) { const index = d % a.length; return [...a.slice(index), ...a.slice(0, index)]; } Paul_Denton + 1 comment Spoiler! You can do it even simpler: rotated[i] = a[(i + k) % n]. Also spoilers should be removed from the discussion or the discussion should only be available after solving. I will complain about this until its changed :P gurdeeps158 + 0 comments your solution is cool but if you have an array as input then you are in trouble bcoz in that case you have space complexity of O(n) as you need an another array to store element in new place.. think.. [DELETED] + 1 comment[deleted] theshishodia + 2 comments alexzaitsev + 0 comments Hey, guys Here is a solution based on modular arithmetic for the case when k > n: new_index = (n + i - abs(k-n)) % n (note: n - abs(k-n) can be collapsed to a single number) milindmehtamsc + 0 comments This will also fail when my shiftAmount = 7 and lengthOfArray = 3, in short lengthOfArray is less than shiftAmount. In this case we can use Math.abs(). for(int a_i=0; a_i < n; a_i++){ int new_index = Math.abs((a_i + (lengthOfArray - shiftAmount))) % lengthOfArray ; a[new_index] = in.nextInt(); } mihir7759 + 1 comment It's not cheating exactly. Using the same method you can even rotate the array, instead of printing the array just give the values of the array to a new array. codextj + 0 comments I was nitpicking, I thought of the same soln at first but then changed my mind; As the question was GIVEN an array ..so if this was an interview there is this constraint that your array is already populated with the elements. btw r u 14 ? its great to see my young indian frnds indulging in programing 96rishu_nidhi + 1 comment can u please elaborate some more about your code as i dont have much knowledge about modular maths greengalaxy2016 + 0 comments the requirement is to take an array and left rotate the array d times. Your solution returns the correct result, but takes an integer one at a time. c00301223 + 0 comments Thanks for sharing this code it really helpped. I felt the constraints were to be includes by ifstatements but after viewing your code I was able get it. I have a small suggestion, would it improve the code if one were to seperate the (LengthOfArray - ShiftAmount) part into a variable and then reuse it since its kind of a constant value. Once again kudos. riyaz_rayyan07 + 0 comments what is in.nextInt() which language is that did you create another scanner object of in can you be more specific? ZeoNeo + 0 comments It's easy when you directly read them from system input. Try to make it work on already stored array. That's what problem statement says. It gets tricky and interesting after that to solve it in o(n) without extra memory. i.e. // Complete roLeft function My solution private static int getIncomingIndex(int index, int rotations, int length) { if(index < (length - rotations)) { return index + rotations; } return index + rotations - length; } // Complete the rotLeft function below. static int[] rotLeft(int[] a, int d) { int rotations = d % a.length; if(a.length == 0 || a.length == 1 || rotations == 0) { return a; } if( a.length % 2 == 0 && a.length / 2 == rotations) { for(int i =0 ; i < a.length / 2 ; i++) { swap(a, i, i + rotations); } } else { int count = 0; int i = 0; while(true) { int dstIndex = getIncomingIndex(i, rotations, a.length); swap(a, i, dstIndex); i = dstIndex; count++; if(count == a.length - 1) { break; } } } return a; } madhanmohansure + 1 comment nice code tq scweiss1 + 1 comment The part I'm missing here is why use a loop (O(n)). Can't you take the array and find the effective rotation based on the shift amount (using the same modular arithemetic you're doing? (Which is now O(1) since the length of the array is a property) function rotLeft(a, d) { //calculate effective rotation (d % a.length) let effectiveRotation = d % a.length; // split a at index of effective rotation into left and right let leftPortion = a.slice(0, effectiveRotation); let rightPortion = a.slice(effectiveRotation); // concat left to right return rightPortion.concat(leftPortion) } silverdust2695 + 0 comments Why would you loop for every element when in essence the rotation operation is nothing but just a rearrangement of the array elements in a specified fashion? LeHarkunwar + 0 comments Tried a different approach def rotLeft(a, d): return reversed(list(reversed(a[:d])) + list(reversed(a[d:]))) mortal_geek + 0 comments But isn't the whole point that you are not placing them as they come, the array is pre-populated and then rotate it. My solution is O(dn), not sure if there is anything better. Clearly I am not an algorithm guy (anymore)! for (int i = 0; i < d; i++) { int pop=a[0]; //shift left for (int j = 1; j < a.length; j++) { a[j-1] = a[j]; } //push a[a.length-1]=pop; } fakirchand + 1 comment Excellent !!! I am new to problem solving. I had solved it via normal shifting using one for loop and one while loop. How did you arrive at this kind of solution?? Little bit of explanation as what you thought while solving this would help a lot. Thanks. judith_herrera22 + 0 comments I don't see my submission in the discussion board. Are you reviewing my solutions? mine0nlinux + 0 comments If the number of rotations are greater than array length (I know it's less than array length which is given in the question, let us assume), then how would this formula change? BTW That's a great way to get the array indices without having to traverse the whole array jc_imbeault + 0 comments Interesting take on the problem! I'm just mentionning this for completeness' sake but not actually solving the problem as asked, which is to write a separate function :) Also, a follow-up question might be "improve your function so that it rotates the array in-place" vikas_nadahalli + 0 comments How do you people come up with such optimization? my mind doesn't seem to work :( ecoworld007 + 0 comments I was thinking to do the same but thought not gonna do this with arithmetic so I just looped twice. let result = []; for(let i = shiftAmount; i < array.length; i++){ result.push(array[i]); } for(let i = 0; i < shiftAmount; i++){ result.push(array[i]); } return result; qzhang63 + 15 comments Python 3 It is way easier if you choose python since you can play with indices. def array_left_rotation(a, n, k): alist = list(a) b = alist[k:]+alist[:k] return b kevinmathis08 + 10 comments Yeah index slicing FTW, here was my 1-liner in definition, lol: def array_left_rotation(a, n, k): return a[k:]+a[:k] n, k = map(int, input().strip().split(' ')) a = list(map(int, input().strip().split(' '))) answer = array_left_rotation(a, n, k); print(*answer, sep=' ') Lord_krishna + 1 comment is that scala? aniket_vartak + 1 comment you dont need to pass n to your function, right.. michael_bubb + 2 comments I agree - I ended up not using 'n' (Python): def left_shift(n,k,a): for _ in range(k): a.append(a.pop(0)) print(*a) unitraxx + 1 comment Obviously this solves the problem, but is a terrible solution. Pop is an O(N) operation. So your solution becomes O(K*N). This should be done in O(N) total time complexity. You do have the space requirement of O(1) correct. All the standard solutions have a O(N) space complexity. asfaltboy + 1 comment True. However, it becomes an elegant solution if we use collections.dequeue instead of list. Double ended queues have a popleft method, which is an O(1) operation: def array_left_rotation(a, n, k): for _ in range(k): a.append(a.popleft()) return a More info: AffineStructure + 1 comment They have rotate built into the deque def array_left_rotation(a, n, k): a = deque(a) for i in range(k): a.rotate(-1) return a josegabriel_st + 0 comments You have O(n) in: a = deque(a) In order to avoid this, you should use a deque from the beginning like: from collections import deque def array_left_rotation(a, n, k): a.rotate(-k) n, k = map(int, input().strip().split(' ')) a = deque(map(int, input().strip().split(' '))) array_left_rotation(a, n, k); print(*a, sep=' ') And array_left_rotation only takes O(k) instead of O(n). Note that a is pased by reference, so there is no need to return anything, but this could be an issue for some user cases, for this particular problem, it works. ansimionescu + 1 comment k -> k%n domar + 1 comment Pretty smart, but are you sure you are not copying the kth eleement, in ruby it would be: def array_left_rotation(a, k) a[k..-1] + a[0...k] end In ruby, ...means excluding the right number. kevinmathis08 + 1 comment Yes both qzangs and my answer is correct. In python index slicing (indices[start:stop:step]), works like so... We will begin with the index specified at start and traverse to the next index by our step amount (i.e. if step = 2, then we jump over every other element, if step = 3, we jump over 2 elements at a time). If step is not specified it is defaulted to 1. We continue steping from our start point until we come to or exeed our stop point. We do NOT get the stop point, it simply represents the point at which we stop. I love Python :) burakozdemir32 + 0 comments What about if 'k' is greater than 'n'? You should use modular arithmetic to get actual rotate count. actual_rotate_count = k % n Then your solution would work for every k values. jhaaditya14 + 2 comments I am getting request timeout for test case 8... anyone with same problem?? or anyone knows the solution?? vabanagas + 1 comment The test case is a large array with a large amount of shits. If your algorithim is not effecient than it will time out. Array size: 73642 Left shifts: 60581 shilpaJayashekar + 1 comment If u are using javascript, this will work var b = a.splice(0, d); a = a.concat(b); belolapotkov_v + 0 comments Super weird but checked twice function rotLeft(a, d) { const headIndex = d % a.length const head = a.splice(0, headIndex) return a.concat(head) // fails test 9 as it creates a new array } function rotLeft(a, d) { const headIndex = d % a.length const head = a.splice(0, headIndex) a.push(...head) return a // passes test 9 as it modifies initial array } chenyu_zhu86 + 1 comment Python index slicing makes this trivial :D def array_left_rotation(a, n, k): return a[k:] + a[:k] darkOverLord + 6 comments I did it this way, in Java public static int[] arrayLeftRotation(int[] a, int n, int k) { if (k >= n) { k = k % n; } if (k == 0) return a; int[] temp = new int[n]; for (int i = 0; i < n; i++) { if (i + k < n) { temp[i] = a[i + k]; } else { temp[i] = a[(i + k) - n]; } } return temp; } vinaysh + 1 comment instead of if-else statement temp[i] = a[(i+k)%n]; would be enough. Also this solution would take up extra memory(for temp). shortcut2alireza + 2 comments Would you mind sharing your solution that does it in-place? Thanks jacob0306 + 1 comment in case you need it! hope it help![((n- (k%n))+a_i)%n] = in.nextInt(); } for(int i = 0; i < n; i++) System.out.print(a[i] + " "); System.out.println(); } } hackerrank_com23 + 3 comments Here's my in-place function implementation: public static int[] arrayLeftRotation(int[] a, int n, int k) { // Rotate in-place int[] temp = new int[k]; System.arraycopy(a, 0, temp, 0, k); System.arraycopy(a, k, a, 0, n - k); System.arraycopy(temp, 0, a, n - k, k); return a; } cc_insp + 0 comments I used the System.arraycopy() method which was used in the video tutorial. I'm wondering if this solution is more efficient or mine?[a_i] = in.nextInt(); } a = leftRotation(n, k, a); for (int i=0; i<a.length; i++) { System.out.print(a[i]+" "); } } public static int[] leftRotation(int n, int k, int[] a){ int[] copy = new int[n]; System.arraycopy(a, k, copy, 0, (n - k)); System.arraycopy(a, 0, copy, (n - k), (n - (n - k))); return copy; } } yash_97373 + 0 comments Is calculation really required ? for(int i=k; i < a.length; i++){ System.out.print(a[i] + " "); } for(int i=0; i < k; i++){ System.out.print(a[i] + " "); } HeinousTugboat + 4 comments One line of JS, no looping: console.log(a.concat(a.splice(0,k).join(' '))); tienle_dalat + 0 comments One step forward with spread operator: return [...a.splice(d, a.length - 1), ...a]; mmicael + 2 comments In PHP: $list = array_merge(array_slice($a, $k), array_slice($a, 0, $k)); echo implode(" ", $list); quliyev_rustam + 0 comments Hi! You are in right way, but in wrong code. You need use this: a, a)-a, 0, $k)); kuttumiah + 1 comment Hi, For cases including shifts more than the array size this should work. $actual_shift = $d % count($a); $list = array_merge(array_slice($a, $actual_shift), array_slice($a, 0, $actual_shift)); quliyev_rustam + 1 comment It is not necessary. By the hypothesis of the task shifts cant bo more than array size kuttumiah + 1 comment Yeah, Thanks for the response. I missed that hypothesis. In that case this shouldn't this be just fine? array_merge(array_slice($a, $k), array_slice($a, 0, $k)); quliyev_rustam + 1 comment In this test, for "Tester 8", you need to use third argument for array_slice. Otherwise, an error is returned. This is because for "Tester 8" uses a large array of transmitted values Sort 2429 Discussions, By: Please Login in order to post a comment
https://www.hackerrank.com/challenges/ctci-array-left-rotation/forum
CC-MAIN-2019-43
refinedweb
5,435
64.2
Some test text! Welcome to PDFTron. Currently, .NET Framework for the PDFTron SDK is only supported on Windows. This guide will help you run PDFTron samples and integrate a free trial of the PDFTron SDK into .NET Framework applications on Windows. Your free trial includes unlimited trial usage and support from solution engineers. Make sure that the .NET Desktop Development workload is part of your installation. This guide will use Visual Studio 2017. Extract the folder from the .zip file. This article uses PDFNET_BASE as the path into the PDFNetDotNet4 folder that you extracted. PDFNET_BASE = path/to/extraction/folder/PDFNetDotNet4/ Navigate to the location of extracted contents. Find and enter the Samples folder ( PDFNET_BASE/Samples). Here you can find sample code for a large number of features supported by the PDFTron SDK. Samples_20XX.slnin Visual Studio. Choose an appropriate version for your Visual Studio installation. This is called the "PDFTron Hello World" application. It is easy to integrate the rest of PDFTron SDK if you are able to open, save and close a PDFDoc. Visual C#or Visual Basiccategory. Navigate into your project's folder. By default, the path should be similar to: C:/Users/User_Name/source/repos/myApp Copy the Lib folder from PDFNET_BASE to your project folder (the folder which contains your .csproj or .vbproj file). Find the Solution Explorer to the right. Right-click on References and select the Add reference option. This opens a Reference Manager dialog. Click on Browse... at the bottom of the dialog. Navigate to the copied Lib folder and add PDFNetLoader.dll to the references. Also add the appropriate version of PDFNet.dll from the x86 folder as another reference ( path/to/your/project/folder/Lib/PDFNet/x86/PDFNet.dll). This version will allow the application to run on both 32-bit and 64-bit OS. PDFNet.dll and set its Copy Local property to False. Right click on your project and select Properties. In the left pane, select the Build Events tab. Under Post-Build Events, add the following code snippet: xcopy $(ProjectDir)Lib\PDFNet $(TargetDir)PDFNet /S /I /Y Replace the contents of Program.cs or Module1.vb with: // Default namespaces using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; // Majority of PDFTron SDK can be used with these namespaces using pdftron; using pdftron.Common; using pdftron.SDF; using pdftron.PDF; namespace myAppCS { class Program { // Required for AnyCPU implementation. private static PDFNetLoader loader = PDFNetLoader.Instance(); static void Main(string[] args) { // Initialize PDFNet before using any PDFTron related // classes and methods (some exceptions can be found in API) PDFNet.Initialize(); // Using PDFNet related classes and methods, must catch or throw PDFNetException try { using (PDFDoc doc = new PDFDoc()) { doc.InitSecurityHandler(); // An example of creating a new page and adding it to // doc's sequence of pages Page newPg = doc.PageCreate(); doc.PagePushBack(newPg); // Save as a linearized file which is most popular // and effective format for quick PDF Viewing. doc.Save("linearized_output.pdf", SDFDoc.SaveOptions.e_linearized); System.Console.WriteLine("Done. Results saved in linearized_output.pdf"); } } catch (PDFNetException e) { System.Console.WriteLine(e.); } } } } Build and run the project using the Start button in Visual Studio. You should find the "linearized_output.pdf" in your project folder with a blank page. Setting a startup project Find out how to set a startup
https://www.pdftron.com/documentation/dotnet/get-started/integration/
CC-MAIN-2019-43
refinedweb
554
53.98