url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://archive.sap.com/discussions/thread/1921503
code
Lock Table Overflow SAP 4.7 6.20 - Windows Server 2008 - Oracle 10g I have many error messages in tx SM21: GEG Lock Table Overflow GEA Internal lock administration error GZZ EqSetCluster() ; rsc=16 enxxmenq1905 The error happens continuously for 8-10 minutes and then everything returns to normal. No dumps in ST22, but some jobs lose their schedule; I need to reschedule some of them. I was informed by some consultants that the transaction VT02N is using an average of 28500 lock entries per user. The value of the parameter enque/table_size is 30000. But the consultants say that we should not increase this value; because VT02N has a lot of customizations here, we should investigate the program instead. I had never seen this error before, so I need some help to understand the situation better. I've already read a lot of documentation, and the more I read the more confused I get. 1) What kind of lock is that? There are not that many lock messages in tx SM12, and there is nothing in tx DB01. 2) What kind of mistake could someone make in a program / exit / include to produce these errors? 3) How could I discover whether a program is really producing so many locks? Thanks in advance. Sunny Pahuja replied: When you face this problem, you can go to SM12 and then go to Extras --> Top Capacity Used. There you can find out which user is trying to lock which table and holds the most locks, and then you can troubleshoot further. You can check SAP Note 746138 for troubleshooting lock table overflows. Also, check the thread below:
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823255.12/warc/CC-MAIN-20171019065335-20171019085335-00806.warc.gz
CC-MAIN-2017-43
1,520
21
http://www.blakepell.com/Blog/?p=451
code
You might see the following error after upgrading from Windows 2003 Server to 2008 (or another Windows platform). - Error: Invalid Syntax, Code 800401E4, Source: (null) - Microsoft VBScript runtime error: The remote server machine does not exist or is unavailable: 'GetObject' The likely solution will be that the server is missing some components (see item #2 below). If you are trying to run an ADSI/WMI script remotely from a client machine which gets information from the IIS web server about websites etc., you may see this: 1. Ensure the firewall is not blocking remote access to your server from the client (if you are running the script remotely). 2. The ADSI provider should be installed on the client as well as on the IIS machine for the above script to work. To run an ADSI script against IIS, we need to ensure the ADSI provider is installed as below. Windows Server 2003/XP: - Add/Remove Windows Components –> Application Server –> Internet Information Services (IIS) –> Common Files. Windows Vista/Windows 7: - Control Panel –> Programs and Features –> Turn Windows Features on or off –> Internet Information Services –> Web Management Tools –> IIS 6 Management Compatibility –> IIS Metabase and IIS 6 configuration compatibility Windows Server 2008: - Server Manager –> Roles –> Web Server (IIS) –> Add Role Services –> Management Tools –> IIS 6 Management Compatibility –> Select IIS 6 Metabase Compatibility, IIS 6 WMI Compatibility and IIS 6 Scripting Tools.
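To make the scenario concrete, here is a small sketch of the kind of ADSI query such a script performs, written in Python with the pywin32 package rather than VBScript (the original script is not quoted above; the server name here is a stand-in, and the IIS 6 compatibility components listed above must be installed on both ends for the IIS:// provider to resolve):

```python
# Hypothetical sketch: enumerate IIS websites over ADSI using pywin32.
import win32com.client

# Bind to the IIS metabase; "webserver01" is a placeholder server name.
# Use "localhost" when running directly on the IIS machine.
w3svc = win32com.client.GetObject("IIS://webserver01/W3SVC")

# Children of W3SVC whose class is IIsWebServer are individual websites.
for node in w3svc:
    if node.Class == "IIsWebServer":
        print(node.Name, node.ServerComment)
```

If the ADSI provider is missing on either end, a bind like this fails in much the same way as the GetObject error quoted above.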
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00603-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
1,510
13
https://www.kasperonbi.com/update-your-report-based-on-a-website/
code
A quick hack today. Got this question from someone who needed to be able to update a report and show users that something had changed. This is easy when you have access to a database and can add data to it, but in this case that was not possible. So I came up with a hacky (and great :P) way to do this, and wanted to share it in case it comes in handy in your box of tricks :). Instead of using a database, I am going to make use of Power BI Desktop's wide range of data sources and connect to a web page that is under my control. This can be a page on SharePoint, the web, the intranet, WordPress; it doesn't matter. In this example I am connecting to my own blog page: https://www.kasperonbi.com/on-demand-webinar-strengthen-your-data-modeling-skills-with-power-bi/ On this page I added some text and colored it white. No one will see it :). The color part is optional though. Next I load this page into Power BI Desktop. I am using a new M function called "Web.BrowserContents" to get just the raw content. Next I am adding a new column that searches for the existence of the string "version1" using Text.Contains([Column1], "version1"). Now that I have this I can remove the original column containing all the HTML text (no need to store that in memory) and write a measure using the data I just got: VersionText = IF ( VALUES ( containsversion[ContainsV1] ), "No update", "Update" ) Now the text in my report changes if I update the text in my blog post. Simple but effective. You can take this much further of course, and even have the text shown come from the page, or have it trigger conditional formatting and so on.
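If you want to prototype the same check outside Power BI, the logic is only a few lines. Here is a hedged Python sketch of the idea (standard library only; the URL and marker string are stand-ins, and this illustrates the logic rather than the Power BI implementation):

```python
# Poor man's feature flag: fetch a page you control and map the presence
# of a marker string to an "update" message, mirroring the M/DAX steps above.
from urllib.request import urlopen

URL = "https://example.com/my-flag-page"  # stand-in for your own page

html = urlopen(URL).read().decode("utf-8", errors="replace")
print("No update" if "version1" in html else "Update")
```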
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00044.warc.gz
CC-MAIN-2024-10
1,628
10
https://mono.github.io/mail-archives/mono-osx/2011-December/004682.html
code
[Mono-osx] How to close a window. chris.waldron at booktrack.co Tue Dec 20 05:30:41 EST 2011 I'm closing the window from the window class, so I'm using this. What I noticed is that it crashes if you use null but it doesn't crash if you use new NSNull(). I think there are still funky conversion issues, because IMO null should be implicitly converted to NSNull, and likewise string to NSString. However I've run into several inconsistencies. On Tue, Dec 20, 2011 at 1:59 PM, Uli Hertlein <uli at xdt.com.au> wrote: > Hi Chris, > On 20/12/2011 11:52, Chris Waldron wrote: > > I just submitted a question about closing down a window, and Uli's description > > of his issue helped me resolve mine. The OrderOut method closes the > > window. > Did you by any chance manage to call it with sender==null? This is what > I've found in various Cocoa tutorials but it crashes in MonoMac > (NullReferenceException) - not a big deal but I was wondering if it's > obsolete data on the Internet or MonoMac-specific. > (Unfortunately I haven't had any luck with my issue so far.) > > On Mon, Dec 19, 2011 at 3:05 PM, Uli Hertlein <uli at xdt.com.au > > <mailto:uli at xdt.com.au>> wrote: > > Hi guys, > > this might be more of a pure Cocoa question than anything, but since > > I'm writing this in C#/MonoMac it applies here as well and I'm hoping for > > some expert insight. > > I have a MainWindowController (with an NSWindow from .xib) and an > > EditWindowController (with an NSPanel from another .xib) that's > > supposed to slide over the main window as a sheet/modal dialog. > > To begin/show the edit sheet I'm calling: > > NSApplication.SharedApplication.BeginSheet(editPanelWindow, > > mainDocWindow, selector, 0); > > and close it with: > > editPanelWindow.OrderOut(sender); > > NSApplication.SharedApplication.EndSheet(editPanelWindow, > > where editPanelWindow is the NSPanel and mainDocWindow is the main > > window. > > The first time, the editPanelWindow pops up almost randomly on the > > screen, not attached to the mainWindow. Oddly enough, after closing and > > re-opening the NSPanel it nicely slides out of the mainWindow title bar > > and is in the right location. > > It feels like there's some setup taking place as part of the > > BeginSheet/EndSheet that I'm missing, but the docs/tutorials/etc haven't been > > helpful... > > Cheers, > > /uli > Mono-osx mailing list > Mono-osx at lists.ximian.com
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510225.44/warc/CC-MAIN-20230926211344-20230927001344-00250.warc.gz
CC-MAIN-2023-40
2,476
50
https://forums.fogproject.org/topic/6012/fog-tftp-boot-issue/3?lang=en-US
code
SOLVED Fog TFTP boot issue I get the error in the attachment. http://<fog-server>//service/ipxe/bg.png. The issue is the "//service"; it should be "/fog/service". I have tried switching files to fix this and I cannot figure out where I need to make the edit to fix this issue (bzImage files and kpxe files). Where do I make the necessary change? Thanks for any help and guidance. Thank you Uncle Frank, that was the answer. Instead of being /fog/ there was nothing there. I reckon it's got to do with a wrong setting within the FOG web interface. Access the web GUI and go to FOG Configuration -> FOG Settings -> Web Server and see what the option 'FOG_WEB_ROOT' is set to. I guess it is '/' but should be '/fog/'. Thank you, I will look into this tomorrow. I assumed there was just a document where I could edit the path to the correct location, like a PHP file. Wayne Workman last edited by Wayne Workman I think your problem is that FOG installed the fogweb folder in the wrong spot. Ubuntu has changed since 1.2.0's release. It's probably here: /var/www/fog and it should be here: /var/www/html/fog. Just a guess, I'm not totally sure because you didn't specify what version of Ubuntu Server you are running. A symbolic link is likely the simplest fix. Something like ln -s /var/www/fog /var/www/html Fog 1.2.0, on Ubuntu server.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039508673.81/warc/CC-MAIN-20210421035139-20210421065139-00258.warc.gz
CC-MAIN-2021-17
1,356
14
https://digicoll.lib.berkeley.edu/record/139418
code
The study of user analysis behavior is of interest to the designers of analysis tools. Specific questions studied include: What types of tasks do users perform using this analysis tool? What approaches do users take to gain insights? What interface features help or hinder users in their work? What are the distinguishing characteristics of different types of users? These questions are often investigated through controlled experiments, observational studies, user interviews, or surveys. An alternative avenue of investigation is to analyze the logs – the records of user activity – generated by analysis tools themselves. In this dissertation we present two case studies using log analysis to understand user behavior. In the first, we analyze records of user queries from Splunk, a system for log analysis, as well as a survey of Splunk users. In the second, we analyze detailed event logs and application state from Tableau, a system for visualizing relational data. We focus in particular on methods of identifying higher-level units of activity, which we refer to as tasks. We include a discussion of the particular challenges associated with collecting and analyzing log data from analysis systems. In addition to this discussion, our contributions include the description of two different approaches for identifying higher-level analysis activity from logs and a summary of the tasks represented in our datasets.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100568.68/warc/CC-MAIN-20231205204654-20231205234654-00778.warc.gz
CC-MAIN-2023-50
1,429
1
https://forums.athemes.com/t/plugin-query/6229
code
While building my site using the Sydney theme, I came across this stylish thing built on the home page. Please check the screen shot. I saw this in the video tutorial. Can you please let me know which plugin was used, and give a short note on how to implement it on my home page?
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00012.warc.gz
CC-MAIN-2020-40
266
4
http://thestamfordhcc.org/opi-nail-hardener.py
code
It's also highly recommended to be aware of opi nail hardener design movements and lifestyle. The tasks and products featured right here make available a limitless supply of modern interior design and style tips for our viewers. You are able to also choose low-cost opi nail hardener interior design tricks for a well-decorated home. If you need us to get the business finished, then be sure to get in contact with us, DM In House Studio. If you'd like to discover considerably more about the most up-to-date in property design, it'd be far better to talk to different home builders. You could possibly have a look at these opi nail hardener pics for additional inspiration. Accordingly, if you are wondering how I could design my little household, then you're in luck. The interior style plan may likewise comprise a lawn within the house. Color comes in opi nail hardener in a choice of distinct tones. In most situations, the interiors of the partitions aren't colored and the wall color isn't transformed to offer a fashionable presence to the homes. A convenient and coordinating color scheme can be utilized in virtually all areas too, creating a straightforward solution for opi nail hardener property style thoughts. Be sure to pick out a compact opi nail hardener for a decent structure as far as practical. You can easily even secure classic beautiful loving styles with exhilarating tones. Your polyurethane timber surface finish isn't likely to appear fantastic on the very first coating. Although the outlay of building products and pieces of furniture crafted from all-natural supplies will be costlier than their man-made opi nail hardener.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999783.49/warc/CC-MAIN-20190625011649-20190625033649-00253.warc.gz
CC-MAIN-2019-26
1,749
5
https://www.infographics.group/jobs/front-end-developer/
code
We are looking for a full-time Frontend Developer (m/f/d) with interdisciplinary experience in data journalism and/or data visualization and an agile mindset to join our fast-growing team in Berlin. This position has the potential to make a life-changing impact on the issues that affect our lives the most today, starting with an environmental data project. The task at hand: - Develop microsites and interactive stories for a range of technologies and platforms. - Design and test products that change the way we think of stories, including with virtual and augmented reality as well as multi-touch tablets. - Collect and develop your own story ideas and turn them into reality with a team. - Work with an interdisciplinary team of renowned data scientists, graphic artists and editors to build data viz projects (small to medium scale) for our wide range of clients. - Grow the agency's national and international business in line with our future strategies. Ideally you bring: - A portfolio of experience in d3.js - A fluent understanding of react.js, as well as the ability to build a microsite without relying on hundreds of modules - In-depth knowledge of creating concepts for and executing complex web applications. - Know-how in further data viz libraries and technologies (turf, leaflet, webgl) is a plus - An open mind and enjoyment of working in a team toward a common solution. - The ability to work independently and the desire to take ownership of your projects. - Ideally, a keen interest in making an impact in the environmental sector and an eye on current events, especially in political and economic matters. Our business language is German, so a working level of fluency is required, with strong English language skills a bonus. If this opportunity excites you, send your CV to [email protected]. The INFOGRAPHICS GROUP uses storytelling and data visualizations to develop award-winning infographics that enable knowledge. Our multidisciplinary team includes graphic artists, data journalists, developers and researchers who work together to build impactful solutions for our wide range of clients. From our home in Berlin, we build customized content and standardized infographic toolsets for business, and design engagement-driven products for museums, researchers, publishers and government institutions. The agency was founded by infographic evangelist Jan Schwochow in 2007 and has stayed close to its roots in journalism while exploring new techniques and technologies, including augmented and virtual reality.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257847.56/warc/CC-MAIN-20190525024710-20190525050710-00294.warc.gz
CC-MAIN-2019-22
2,534
18
https://www.websupergoo.com/security.aspx
code
Systems for Ensuring Safety and Security Security considerations are baked into all aspects of our software lifecycle. As part of this we are currently undergoing accreditation for ISO 27001. Our software is: - tested repeatedly and relentlessly. Over more than two decades we have amassed vast test suites which push the boundaries of our software to their limits to ensure they perform as expected. For products like ABCpdf this type of QA is performed by our build servers as soon as any changes are made. If something breaks or a regression error occurs we know about it immediately. - penetration tested. We use third-party professional pen testers for independent validation of the security of our major products. Our white hat experts attempt to find ways to compromise the software so that we can all be confident that hackers will not. - digitally signed. This means that you can be confident that the software you get from us is the same software which we released for you. As different parts of the software are loaded we perform our own signature and validity checks to ensure that files have not been tampered with after installation. - obfuscated. This makes it exceptionally difficult to reverse engineer, to understand how it might be manipulated and what attack surfaces might exist. - designed to be run in a limited permission environment. By doing this, you sandbox it away from your other systems. You can harden the environment as much as you like to ensure that it is completely isolated from anything else. - based on defense in depth. In addition to external checks performed by Windows we perform our own checks and isolation. For example the ABCChrome and WebKit HTML rendering engines operate inside sandboxes which limit access using our cutting edge FireShield technology. The kernels of products like ABCpdf - the most critical parts - are closed source. This offers a number of advantages over open source solutions, notably in the area of security. Open source solutions are vulnerable to exploitation precisely because they are so visible. Large numbers of people actively look through the code to try and find exploits which can be applied to any dependent product. The industry of open source exploit-mining results in the production of a variety of reports. Some are distributed in open source databases and can require that you update immediately to avoid a well-publicized crack being used against you. Others are held in private collections so that nefarious actors can take advantage of them. Closed source has an advantage here because the only way to look through the code is by reading the raw assembly instructions or the obfuscated code. This is extremely challenging and makes finding exploits many orders of magnitude harder. There are easier alternatives. The result is that few if any exploits are located. Even if an exploit is found, this type of closed source code has advantages. With open source there is no record of people using a particular product. If a vulnerability is found the only option is to make it public so that everyone knows they need to upgrade. The problem here is that this knowledge becomes public both for those that need to avoid the exploit and also those who may wish to take advantage of it. With closed source licensed code, one can inform all clients without advertising anything to the wider world. Our Open Source Our open source extensions are generally quite simple. They are designed to demonstrate how to do things rather than provide complex code.
This simplicity means that you can understand them and modify them yourself. There is little room for vulnerabilities to creep in because they are so straightforward. These things are not like engineering documents - more like recipes. While there might be a hidden flaw in a design for an aircraft, it is much more difficult to hide a flaw in something so simple. A recipe for tomato soup which contains apples instead of tomatoes is fairly obviously wrong. Third Party Open Source Libraries We use a variety of small open source libraries for particular areas of functionality. The versions we use are tried and tested. They are neither so new that their provenance is uncertain, nor so old that they contain significant security flaws. There is a sweet spot when it comes to these things and that is the point we try to hit. Think little bear's porridge. Libraries are specific to particular modules. So you may find that a legacy HTML engine contains a legacy JPEG library. The solution here is not to update the library - it is to use a more up to date HTML engine. The libraries themselves are well embedded, so inputs are generally vetted fairly carefully before presentation. This means that there is a limited attack surface exposed, simply because it is difficult to get unvetted content through to the final insertion point. Major PDF Exploits The most major PDF exploits reported so far were uncovered by a team from Ruhr University in February 2019. This was reported on channels such as ZDNet: "Researchers faked signatures on 21 of 22 desktop PDF viewer apps and 5 out of 7 online PDF digital signing services." These flaws exist primarily in the area of digital signatures - an obviously sensitive realm. The team demonstrated how you could use simple structures to make a signed document appear valid when it was not, or different from the originally signed copy. Since then, they have released a sequence of other ingenious exploits and found similar levels of vulnerability each time. As the exploits have been reported we have tested the current version of ABCpdf against them. ABCpdf has always passed without modification - it just does the right thing. We do not make use of static security scans for a number of reasons. The main one is that we have the source code, so we know what elements are in place. However there are other more general reasons too. The biggest is a lack of context. A static scan can only report what may be done - not what actually is done. This is a little abstract, but is a critical point. Take a parallel example - your static scan reports someone with a knife. Yes this sounds alarming, but is it? If they are on the street then yes, perhaps; but if they are in a kitchen then probably not. In terms of software, a call to a function that deletes a file is absolutely normal and will be contained in any non-trivial application. However such a call is potentially a massive problem if misused. Similarly a call to read a file and calls to transfer data over the internet are again subject to massive misuse. Yet such calls are part and parcel of any non-trivial service. That is the level of context which is missing. It can be large. One might even call it a flaw. So you still want to perform static scans? If so… Firstly, you must be aware of these major caveats regarding the results. Secondly, you need to ensure you are scanning the right parts. Our software is modular so you should only scan the pieces you want to use.
For example, ABCpdf contains an HTML engine called WebKit which can be useful in very specific situations. WebKit is based on Qt 4, which ended support in 2015; WebKit itself was last updated in 2012. As such, if you scan it you will find that it contains legacy code. However if you are not using it then obviously it is not relevant. The Manual Installation section of the documentation contains details about the different parts of ABCpdf and what they do, and from that you can work out which parts you should scan. There are no known vulnerabilities in the ABCpdf source. No client has ever reported that ABCpdf was used as part of an exploit. No client has ever found a bug which could be used as a security exploit in any significant way. So perhaps you are asking yourself: surely something must have gone wrong at some point? In 2017 a professional penetration tester found a flaw in our PostScript code which could in theory be exploited. It would have taken a very specific set of code in place on the client side, combined with a lack of security, to produce a rather limited vulnerability. We fixed it immediately and notified all affected parties. In 2004 Microsoft reported a buffer overflow vulnerability in GDI+ which could affect any applications which might use it. In some places and in specific situations our products might have been affected. This was resolved by a Microsoft update. That is it, over a period of more than twenty years and probably millions of installations - as used by almost half of the world's top 100 global brands. Use our products in accordance with the documentation. As long as you do that, they are bulletproof, secure solutions.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00365.warc.gz
CC-MAIN-2023-50
8,721
53
https://bensherlock.co.uk/2018/11/23/how-i-wrote-my-thesis/
code
I wrote the article below just after I had submitted the softbound copy of my thesis for examination back in 2017. It is more of a summary and overview of the tools that I used, and the approach I took with the writing. It may have some useful links to tools for other writers, but the approach is just what worked for me in the end. I highly recommend the book "The Clockwork Muse: A Practical Guide to Writing Theses, Dissertations and Books", 1999, by Eviatar Zerubavel. I could have done with following it closer in terms of time management, but the approach of writing the whole thesis as one piece of work ensured that the narrative and writing style flowed consistently throughout. At this moment (July 2017), the thesis has been submitted and is awaiting examination (and all the process that this entails). I have summarised very, very briefly the tools and general approach taken to writing this thesis, whilst it is still raw in my memory. Feeling the finished professionally printed and soft-bound copy in my hands was quite a moment after so much hard work and emotional labour. I have had a short time (1 week) to rest and recharge since the submission, and felt the time was right to get this on paper (screen). The thesis was written using the LaTeX markup language to ensure that it could be saved as plain text in a version control system, it was future-proofed by not being tied to any particular editor or operating system, and it compiled to correctly produce precisely what was required in the final document. The version control system used was Git (git-scm.com), with a bare repository on a unix server used as the central store/origin. The editor and compiler used in the end was the cloud-based Overleaf (https://www.overleaf.com/register). This allowed Git to pull and push both from Overleaf and also to the unix-based origin repository. Not requiring a complex build environment to be replicated across multiple machines meant that the thesis could be worked on with ease from anywhere so long as I had a browser and internet connection. Reducing friction meant that work happened when otherwise faff may have been sufficient excuse not to write. Images were produced using yEd (www.yworks.com/products/yed) for block and flow diagrams, which were exported to SVG format. Fonts were set to use Times New Roman to ensure consistency across platforms. Vector images were created using Inkscape (inkscape.org), and the SVGs from yEd were also imported into Inkscape before the final output PDFs were created for upload to Overleaf. Graphs were produced using Python scripts and Matplotlib (matplotlib.org) with a customised colour and marker scheme to produce professional, clear and consistent graphs across the entire thesis. Data was produced using bespoke C++ analysis tools or MATLAB and saved as CSV files for future-proofing. The scripts then processed these files accordingly to produce the graphs. References were incorporated into a BibTeX text file. The key names used were [author][year][first non-fillword of title] (see the short sketch at the end of this piece). The PDFs were stored alongside in the folder structure, with filenames formatted as below:
references/
    references-all.bib
    papers/
        jones2013analysis-Analysis of Something of Interest.pdf
Google Scholar was mainly used to search for and download the BibTeX citation data. The LaTeX structure used for the thesis originated from the template created by Krishna Kumar (github.com/kks32/phd-thesis-template), designed to meet the requirements of Cambridge University Engineering Department.
Modifications were added and the structure tweaked slightly to meet the specific formatting requirements of the author's university. The structure or skeleton of the thesis followed much the same as all other engineering discipline theses. The decisions were mainly regarding the breakdown and ordering of the technical chapters, to ensure a sensible division of themes whilst maintaining a consistent narrative throughout the entire thesis. The Writing Process With the template tweaked, the skeleton of the thesis now in place, and consistent figures now possible to produce, the writing process began for real. The methods and results were the first items to be added to the technical chapters. Starting with the first technical chapter and working through them, the first pass included the methods text and block diagrams, results figures and tables. The background, motivation and state-of-the-art chapter was then tackled in depth next. Basing it on the technical chapters helped to stress the relevant areas of background and state-of-the-art. The state-of-the-art then informed the next part of the process. The discussions for each of the technical chapters were the next major part. This process involved printing off all technical chapters (2-up, double-sided) and then sitting away from computers and distractions, hand-writing on a lined A4 pad the discussions for each and every results figure and table. The flow achieved by working in this way allowed for a complete discussion to be produced in the morning and typed up in the afternoon for each technical chapter. Although, as the week wore on, it was clear that tiredness was slowing the thinking and hand-writing process. Once the discussions were complete, the introduction and summary of every technical chapter could be produced, again in the same way as the discussions: handwriting on A4 lined paper before typing up later in the day. The conclusions chapter was largely a case of drawing these chapter summaries together along with an overarching discussion of the thesis as a whole. The introductions chapter again took each of the technical chapter introductions and presented the work as a whole. The thesis in its entirety was then printed and proof-read much as before, without the distraction of computers and the internet. For each of the sections of work, there were a number of iterations of printing, reading, editing, typing, and repeat. But largely this is the process it went through. Every draft chapter at each stage was also made available to supervisors for feedback, and towards the end, the document as a whole was made available to engineers in other disciplines for broader feedback, mainly on formatting. Printing and Binding A full week was allowed for when having the thesis printed and soft-bound. This is worth taking some time over; having a professional-looking and -feeling document demonstrates the same care that has been taken in writing the content, and the research work that led to this singular entity that encapsulates it all. This is all that the examiner will see of all that hard work. The above summarises the approach that I took to completing the thesis. This is by no means the only, or even necessarily a good, approach. However, it got me to the end point having something worthwhile to submit for examination that I believe reasonably reflects the work that has gone into the research over the last four years.
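The citation-key convention described above ([author][year][first non-fillword of title]) is easy to mechanize. As a toy illustration, and not a tool from the original workflow, a few lines of Python can derive such keys (the fillword list is an assumption):

```python
# Toy sketch of the [author][year][first non-fillword] key scheme; the
# fillword list is assumed, not taken from the original post.
FILLWORDS = {"a", "an", "the", "on", "of", "for", "and", "in", "to"}

def bibtex_key(surname: str, year: int, title: str) -> str:
    words = (w.strip(",:;.()").lower() for w in title.split())
    first = next(w for w in words if w and w not in FILLWORDS)
    return f"{surname.lower()}{year}{first}"

print(bibtex_key("Jones", 2013, "Analysis of Something of Interest"))
# -> jones2013analysis, matching the example filename above
```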
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00163.warc.gz
CC-MAIN-2023-50
6,975
25
https://mail.python.org/pipermail/python-list/2001-June/090267.html
code
time module - is strptime available under NT? ruediger.maehl at web.de Wed Jun 27 14:24:32 CEST 2001 Dan Tropp wrote: >Is this function available in the windows (NT) version? >If not is there a commonly used alternative? Or do people just parse things >up themselves with re? >Dept Psychology, University of Melbourne, Australia No, it is not available. I am using the mxDateTime package from
# in the latest version you must use:
# import mx.DateTime
t = '2001-06-26 09:00:00'
dt = DateTime.ISO.ParseDateTime(t)
dt is a DateTime object and you can then apply all methods from the DateTime package.
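For anyone finding this thread today: the standard library has long since closed this gap. Python 2.3 shipped a portable pure-Python time.strptime that works on Windows, and datetime.strptime (Python 2.5+) covers the same parse without any third-party package:

```python
# Modern standard-library equivalent of the mxDateTime parse above.
from datetime import datetime

t = "2001-06-26 09:00:00"
dt = datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
print(dt.year, dt.month, dt.day)  # 2001 6 26
```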
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195524522.18/warc/CC-MAIN-20190716095720-20190716121720-00260.warc.gz
CC-MAIN-2019-30
636
16
https://community.talktalk.co.uk/t5/Email/Mailbox-problems-with-Webmail/td-p/2658835
code
Today I've somehow messed up my Webmail for two of my email addresses. I've been updating my passwords, which I've done OK. I can access my emails using my email client, but when I try to access my mailboxes with Webmail I am getting a warning: "Your e-mail account ********@tiscali.co.uk was disabled due to invalid credentials. Please edit the account and enter correct credentials to enable it again." When I log in to email address 1 I get the warning for email address 2, and vice versa. If I edit the accounts to match up their details I get a warning: "Validation of server imap.tiscali.co.uk failed due to invalid credentials." What can I do? Thanks for any advice.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488519735.70/warc/CC-MAIN-20210622190124-20210622220124-00523.warc.gz
CC-MAIN-2021-25
710
6
https://www.nationalgeographic.com/animals/article/150609-snakes-house-maryland-infestation-animals-science
code
Snakes Infest House in Maryland—How Did It Happen? The invasion of the black rat snakes, which may have been seeking out a winter refuge, is a "freak occurrence," one expert says. Everything seemed normal when the Brooks family moved into their dream home near Annapolis, Maryland, last December. That is, until spring came—and the snakes arrived. Snakeskins and droppings became commonplace in the house. As did actual black rat snakes (Pantherophis obsoletus), which ranged from small hatchlings to full-grown seven-footers. By April, the family had evacuated the premises and filed a $2 million lawsuit against their real estate agent, according to a local newspaper, the Capital Gazette. The good news for other homeowners is such snake infestations are "really unusual. It's not something most people should expect or fear," says David Steen, a wildlife ecologist at Auburn University in Alabama. For some reason, the non-venomous reptiles had "settled on this house as a congregation point, maybe for denning in the winter," Steen said. Snakes are ectothermic, which means their body temperature fluctuates with their environment. When it gets cold, the serpents find a cozy place to curl up and wait out the winter. That could be anything from a tree stump to a rocky crevice to—you guessed it—a warm house. Even if such instances are rare, the house in Maryland isn't the first report of a domicile that would make Indiana Jones squirm. In one house in Idaho, "garter snakes were crawling through cracks in the house's foundation and using it as a hibernaculum [a place to spend the winter]," says Michael Dorcas, a snake expert at Davidson College in North Carolina. It's likely the snakes had always used the area as a hibernaculum, and the house was simply built on top of the animals' territory. (Related: "Year of the Snake: The Serpent Behind the Horoscope.") Scientists have also found wild places where snakes congregate, including the Narcisse Snake Dens of Manitoba, Canada, where 75,000 garter snakes mate in great, writhing masses. It's the world's largest known gathering of snakes. But rat snakes, says Steen, aren't really known for doing this. So the case of the Maryland house, he says, is likely a "freak occurrence." How Do You Prevent a Snake Infestation? While rat snakes are generally shy around people, their prey can draw them close to humans. In fact, because their diet primarily consists of pest animals like rats and mice, they are usually looked upon as a handy animal to have around. "They like farms, barns, and even neighborhoods," says Dorcas, who has studied the species. "And they climb really well." Though the rat snake is clearly capable of Mission Impossible-style maneuvers, a tightly sealed home should be all it takes to prevent the animals from becoming unwanted houseguests. Cracks in doorframes and windowpanes, holes in walls, and crevices in a home's foundation can all provide access to these nimble creatures. Know Your Reptiles Black rat snakes, one of the most commonly encountered species across the eastern U.S., are often confused with venomous snakes, Steen adds. This can lead people to harass or kill reptiles that pose little to no threat. In fact, Steen scours social media each day for these false IDs and engages with the public about the actual species in their backyards. (Hint: It's usually #NotACottonmouth.) But even snake experts say they'd prefer not to live in a house full of rat snakes. (Read about the world's most venomous snakes.)
"If you go to the bathroom at night and you step on a snake in the hallway, it's going to startle you," says Dorcas. "Even if you like snakes, you don't want that."
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945317.85/warc/CC-MAIN-20230325064253-20230325094253-00340.warc.gz
CC-MAIN-2023-14
3,690
25
https://greenmask.io/built_in_transformers/standard_transformers/random_title_male/
code
The RandomTitleMale transformer is developed to populate specified database columns with random male titles. This tool is essential for applications that require the simulation of user profiles, testing gender-specific features, or anonymizing user data in datasets. Parameters: supported DB types; the name of the column to be affected; and keep_null, which indicates whether NULL values should be preserved. The RandomTitleMale transformer utilizes a predefined list of male titles (e.g., Mr., Dr., Prof.) to inject random male titles into the designated database column. This feature allows for the creation of diverse and realistic user profiles by simulating a variety of male titles without using real user data. Example: populate random male titles for the title column. This example outlines configuring the RandomTitleMale transformer to populate the title column in a user_profiles table with random male titles. It is a straightforward method for simulating a variety of user profiles with male titles. - schema: "public" - name: "RandomTitleMale" In this configuration, the title column will be updated with random male titles for each user profile entry, replacing any existing non-NULL values. If the keep_null parameter is set to true, existing NULL values in the column will be preserved, ensuring the integrity of records where title information is not applicable or provided.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817398.21/warc/CC-MAIN-20240419110125-20240419140125-00292.warc.gz
CC-MAIN-2024-18
1,339
16
http://superuser.com/questions/318817/disable-task-manager-for-restricted-user-on-windows-7-home-premium
code
This was my original answer: Log in as the restricted user, then go to c:\windows and right click on regedit and click "Run as Administrator" (you will be asked for the credentials), and make the same registry changes. OK, to be honest, after I wrote the answer, I decided to test it and double-check my answer as I often do. What I found blew up a long-held thought that I had, and I am going to mention it now because I am sure that I am not the only one who will be surprised. Despite being logged in as my user "Test", when I ran regedit.exe as administrator (or even as a second administrative user), instead of ONLY running the program with elevated privileges, it also changed the HKEY_CURRENT_USER hive to that same administrator account. I was extremely surprised. So I would do it this way for simplicity: Elevate the user to administrative level temporarily in Control Panel > Users. Log in as that user, and make the registry change exactly as you had above. You can test it immediately by right-clicking the taskbar. Log out, and back in as administrator, and demote the user back to standard. I tested this and it worked. An even easier way: While logged in as your administrative user, elevate the standard user to administrative level temporarily in Control Panel > Users, then follow the original answer. Now running regedit as that user, it will load their hive, and you can edit it. Then demote them again. This way, it is all done without logging in and out. I think this is clear, but if I did make something a bit confusing, just ask in the comments, and I will try to clarify.
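For reference, the kind of per-user policy edit being discussed can also be scripted. The exact registry change is not quoted in this excerpt, so the value below is an assumption: DisableTaskMgr is the usual policy value for hiding Task Manager. A sketch in Python's winreg module, run as the user whose hive you mean to change (which is exactly why the HKEY_CURRENT_USER surprise above matters):

```python
# Assumed policy edit: disable Task Manager for the *current* user.
# HKEY_CURRENT_USER resolves to the hive of whoever runs this process,
# which is the pitfall described in the answer above.
import winreg

key_path = r"Software\Microsoft\Windows\CurrentVersion\Policies\System"

with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key_path) as key:
    winreg.SetValueEx(key, "DisableTaskMgr", 0, winreg.REG_DWORD, 1)
```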
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274191.57/warc/CC-MAIN-20160524002114-00005-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
1,597
9
https://community.infosecinstitute.com/discussion/66256/skillsoft-skillport
code
I am A+ certified and have been in the tech field for about 2 years. I have been using the Skillport software developed by SkillSoft to prepare for the Network+ exam. I am pleased with it; however, it seems to dive a lot deeper into certain aspects, such as the 802.11 standards, than what I feel would probably be on the exam. For example, it talks a lot about DSSS, FHSS, and PHY. Are these topics that I am actually going to see on the exam? It just seems a little unnecessary and I was trying to save time if at all possible. Any help would be greatly appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057687.51/warc/CC-MAIN-20210925142524-20210925172524-00018.warc.gz
CC-MAIN-2021-39
555
1
https://saveenr.gitbooks.io/visioautomation_docs/content/
code
VisioAutomation is a set of .NET libraries designed to make it simpler to automate Microsoft Visio. VisioAutomation helps you - Write Visio Add-ins - Control the Visio application from other applications or scripting languages - Visio Automation 2010 NuGet package Projects that Use VisioAutomation - Visio PowerShell: a PowerShell module for Automating Visio 2010. - Power Tools for Visio 2010: Semi-useful tools (in active development) - VisioAutomation.VDX: a library to generate simple Visio XML (VDX) files without even having Visio installed. Git Clone url: Watch the Gource visualization of the Git commits from 2011 to 2015: https://vimeo.com/128739957 Older versions and tools that support Visio 2007 are still available but are NOT actively maintained.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154127.53/warc/CC-MAIN-20210731234924-20210801024924-00131.warc.gz
CC-MAIN-2021-31
762
12
http://catalin-festila.blogspot.com/2011_07_03_archive.html
code
Today I looked on the internet for a laptop for me. I've seen some good models in terms of price/quality. Then I remembered that someone had compatibility problems with a Dell laptop, which accepted only Vista drivers. I found the link, but unfortunately the list is small and outdated. I'm sure most of us have installed Fedora on a laptop. When installing Fedora, we can send a hardware profile to the Fedora Team. If data on computers that can run Fedora were available, users would receive real help. On the other hand, I saw several laptops that are available in stores with various operating systems. Unfortunately it's just a fantasy. Why? Here is a clear example of a store: a turnover of 102 million euro in 2010, see image:
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607731.0/warc/CC-MAIN-20170524020456-20170524040456-00084.warc.gz
CC-MAIN-2017-22
723
10
https://listmaster.pepperfish.net/hyperkitty/list/[email protected]/message/6Z3GC45YNSEIPIWZHYJWF2WX5LFWS7G7/
code
On Tue, May 28, 2019 at 14:37:17 +0100, Richard Ipsum wrote: > I'm unsure that it's worthwhile blocking the signals in > though I suppose it saves either losing signals or having to queue them > in luxio rather than in the kernel. I've been thinking about this and it seems absolutely necessary. If you don't block them while you're handling them, then there's nothing to stop another signal coming in and clobbering everything while you're still trying to handle the first one. You can queue them yourself, but nothing stops a signal being delivered while you're performing operations on your queue; it seems safer to let the OS do the queueing. Fair, I accept your reasoning. I've been implementing a possible solution to avoid the one-VM limit but I have to say that I don't like it; it seems a lot more fragile to me, and so I'm wondering whether there is really any compelling reason that Luxio would worry about supporting handlers set by multiple VMs? If not then maybe we just document this limitation? As (I think) I said before - I'm okay with the limitation as long as it is clearly documented, and ideally protected against (i.e. if you attempt to register a signal handler from a Lua state which isn't the one cached in the global context, then you get an error return from the function call). Does that sound okay to you? Daniel Silverstone http://www.digital-scurf.org/ PGP mail accepted and encouraged. Key Id: 3CCE BABE 206C 3B69
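As a side note for readers, the block-then-let-the-kernel-queue strategy argued for above can be sketched in a few lines of Python on Unix (this illustrates the general pattern, not luxio's implementation):

```python
# Block SIGUSR1 around a critical section; the kernel keeps any arriving
# signal pending instead of delivering it mid-operation (Unix, Python 3.3+).
import signal

old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
try:
    pass  # ... operate on shared handler state without being clobbered ...
finally:
    # Restore the mask; a pending SIGUSR1 is delivered at this point.
    signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
```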
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337855.83/warc/CC-MAIN-20221006191305-20221006221305-00750.warc.gz
CC-MAIN-2022-40
1,433
23
https://cmp.scalr.com/blog/aws-vs-azure-5-differences-you-need-to-know/
code
Recently, Ron Harnik, our Product Marketing Manager, and Sebastian Stadil, Scalr's CEO, hosted a webinar on AWS and Azure – breaking down exactly what the difference is between the world's leading cloud providers. It was one of our highest-performing webinars – letting us know that developers and administrators are still searching for the right direction in building their cloud solutions. As the number of multi-cloud enterprises continues to grow, we thought it would be essential to boil down the details of that webinar into the following summary. If you want to watch the webinar in its entirety (it's worth it), visit the link below. Otherwise, read on. Ron started the webinar off with an excellent statement – enterprises tend to listen to analysts and other enterprises. While the developers and front-line IT engineers experiment with cloud providers in small pockets, it's the top-down mandate from the CIO that defines what big companies use across the board. In other words: what are all the other kids doing, and should I be doing it? In searching for the answer to the above question, we talked to Scalr customers, analysts and enterprises we have relationships with. There are three core facts that we learned. - More often than not there is no cloud adoption policy; multi-cloud tends to just 'happen' - This also doesn't happen at the team or application level; it's typically at a business unit/department level - Phase 1 is on-premise/VMware - Phase 2 is when pockets of developers use AWS, Azure - Phase 3 is discovering multiple pockets of cloud usage across business units, and central IT has to find a solution to this - Most companies use AWS or AWS + Azure - Single-cloud companies are exploring to try other clouds - Azure is gaining ground not because of technical superiority, but through an aggressive discounting campaign - Microsoft Enterprise Agreement customers will get significant discounts. It's the lowest-cost bidder concept. - AWS isn't necessarily interested in playing the ground game now that they've proved themselves. - Huge ecosystem of services, tools, and relationships with vendors - Strong IaaS and PaaS products - Aggressive release schedule. Every other day they're making improvements - Because of all of this, AWS/Amazon knowledge is easy to find in administrators and developers - Easy to use, hard to master - AWS puts a lot of work into the most popular services (which is obvious), but the less popular ones fade away - Complex pricing (discussed in depth below) - Complex support – it's not relationship based, it's more tier based. More premium support. I.e. big company size doesn't matter – everyone pays at the tiered levels - Choice paralysis – it's a full-time job keeping up to date with everything being released and with best practices - Cost management tools are often needed to supplement Cost Center/Billing - Through 2013-2015 Azure has been rapidly rolling out releases and support. Customers say that the overall infrastructure is 'good enough' for now - Support for Linux and other OSes has been released; you're not just forced to use Windows Server - Azure Stack, now in Technical Preview 2, is an on-premises deployment that works great with cloud providers.
It’s an extension of the cloud environment - In efforts to race to market, not all features are finished, or have complete API support - Documentation can’t keep up with releases – as an example as Security Groups have evolved on the dev side over the past few months, the public documentation hasn’t kept up - In contrast to AWS, there’s a limited number of Azure experts - Vendors are reporting problems with secure authentication, slowing down deployment. But this is relatively minor – Azure is still extremely focused on What Customers Are Saying As you may have guessed, most companies are more inclined towards AWS because of their broad range of services. There’s tons of cloud-based solutions for services companies would be interested in using (like Service Catalog) or simple solutions to products they’ve hobbled together (like data warehousing with RedShift). On the other hand, Azure is meeting the baseline requirements of most companies as an IaaS. In other words, most companies don’t need all the bells and whistles – give us VMs, storage, load balancers and databases and we can figure out the rest. Azure Stack (now in Technical Preview 2) has been generating a little bit of buzz, and will be released on dedicated on-premise infrastructure. The precursor to this, Azure Pack was an interim solution, a free collection of Microsoft Azure technologies available to Microsoft customers. It integrates with Windows Server, System Center, and SQL Server to offer a self-service portal and cloud services such as virtual machine hosting (IaaS), database as a services (DBaaS), scalable web app hosting (PaaS), and more. On the Azure side, customers also mentioned that even if the Support system isn’t too helpful at the low level, you’ll almost always be put in direct contact with Engineering so you’ll get solutions. Here’s the analog of the individual services provided by both clouds. A quick summary on the less popular services above: Direct Connect/ExpressRoute helps companies establish a dedicated network connection between the public cloud provider and their on-prem network. The benefit is consistent network performance and a reduction in bandwidth costs by funneling data directly through the provider. AWS GovCloud/Azure Government is for the most compliancy-driven organizations – governments. By running on dedicated hardware separate from other cloud customers, 24/7 monitoring, multiple backups across data centers, and all servers held on country soil (i.e. clients using Azure Government would have all servers on continental U.S. soil), these solutions are engineered for security. AWS Directory Service for Microsoft Active Directory (Enterprise Edition)/AWS Microsoft AD/Azure AD is a directory and identity management service. The big benefit is integrated into your directories that already exist (single sign-on access to SaaS applications that a company may already be using like Salesforce or DropBox. Azure AD also includes a full suite of identity management capabilities including multi-factor authentication, device registration, self-service password management, self-service group management, privileged account management, role based access control, application usage monitoring, rich auditing and security monitoring and alerting. As for the naming scheme – they all integrate with Microsoft AD which AWS conceded that most enterprises wouldn’t migrate from. Everyone’s biggest concern at scale is typically security. 
As it rightly should be – not only do we have to make sure our applications and information are secure at the network level, we also have to create systems that ensure that user error won't be the downfall of our infrastructure. In both AWS and Azure, Security Groups are the main way to manage network security. One of the big challenges in cloud is trying to enforce old security paradigms on cloud providers. The issue? There's no single enforcement point. AWS Security Groups - Can secure EC2, RDS, ELB - Security Groups are applied to the primary Elastic Network Interface (ENI) by default - You can apply multiple security groups to an instance - Whitelist only – only allow rules (inbound/outbound). I.e. allow HTTPS for your public-facing sites, SSH only from your IP for email servers Azure Network Security Groups (NSGs) - Can secure VMs and Subnets - Applied to the primary NIC on servers, or all VMs in a subnet - Both Allow & Deny rules. For more detail on how NSGs work, visit this blog post - Unlike AWS, you can't attach multiple NSGs to a VM or Subnet. The difference is only from the administrative perspective – you can easily get out of control with multiple security groups on instances. This NSG structure dictates better user habits. - All rules are stateful Generally, the consensus is that AWS pricing gets confusing, especially once you start to use more and more services, across regions. Cost visibility isn't stellar, and while you can tag resources to track billing, the process tends to fall apart as more developers and end users use accounts. There are three types of AWS pricing. - On Demand – Services billed by the hour. A typical example would be when selecting instances and you can see the difference between instance families (i.e. t3.large instances charge $0.019 an hour) - Spot – Bid for instances. When the price goes over your bid, the instance is terminated. - Reserved – You are able to reserve instances and compute capacity for 1-3 years. You can receive up to a 75% discount for reserved instances when you pay up front. There are two types – Standard and Scheduled. Standard means you'll be reserving this instance full-time for the next year. Scheduled means you'll be reserving these instances at specific times. Examples: we need more processing power at night-time as we turn over data; or during the holidays, e-commerce sites need boosted services over a stretch of time when their infrastructure gets overwhelmed. In addition to pricing, here's how support ties into AWS. Here's how Amazon defines what AWS Support you may need: Developer Support – Experimenting with AWS Individual developers exploring the potential of AWS, looking for access to technical support resources to help quickly and effectively get started. Business Support – Production Use of AWS Businesses looking for guidance and best practices to enable availability, scalability, and security of production workloads – reducing the need for reactive support. Enterprise Support – Business Critical Use of AWS Businesses whose success is directly linked with the performance of workloads and applications, benefiting from high-touch proactive/preventive services. - On Demand – Billed by the minute. There are two options: Standard or Basic, which defines the support you have around those instances. - Pre-Pay: Companies are able to reserve VMs at a 5% discount when purchasing over a year-long period.
And how support ties into Azure: Like we mentioned, Azure really is the MVP of cloud providers in a sense – when you really just need IaaS and already have a Microsoft Enterprise Agreement, the choice is pretty much made for a lot of big companies. You're not getting the best product per se, but it's straightforward. AWS won the lowest-cost-bidder game back when they first came out, so they're no longer interested in the ground game. Pricing has become more convoluted and complex. Following Ron's analysis of pricing and security, Sebastian Stadil, our CEO, spoke on the legal complexity behind both providers. Curiously enough, Amazon makes you sign an IP Non-Assert Clause when using their products. If you're a company that has an IP (intellectual property) based business, be wary, because it's a pass-through agreement. There are two troubling issues: duration and scope. Duration states that this clause of the contract lasts in perpetuity. So if one of your employees has ever signed the agreement to use an AWS product and Amazon starts infringing on your IP, they'd have great grounds to keep you from taking them to court. Scope also means that if several developers have been using AWS and signing these pass-through agreements, any case you would bring against the company wouldn't stand too much of a chance. On the flip side, Microsoft has more leniency in the legal department. There are no clauses like this in the Azure agreements. After the brutal IP wars Microsoft has gone through over the years, and especially in the 90s, Azure doesn't have any of the 'gotcha' agreements written into AWS's terms. Again, all of this isn't worrisome to most companies, but if you have an IP-centered company, it would be essential to work out an agreement beforehand. There was also a fantastic Q&A from the audience (check out the webinar for that, starting at the 40-minute mark; a sample question we got: as a startup, what are the pivotal decisions in deciding between AWS and Azure?). Access & Permissions AWS and Azure handle access and permissions a little bit differently. AWS is centered around users and groups via Identity and Access Management (IAM) Policies. Policies define what resources can be accessed or what actions can be performed. In other words, policies define the WHAT and WHERE. Groups are the collection of users those policies are applied to. On the other hand, Azure uses Role-Based Access Control (RBAC), which associates Users with Roles. Roles grant hierarchical permissions to resources, meaning that they define WHO, WHAT, and WHERE. If you've used Google Cloud Platform, the organizational structure is similar. Here's how that workflow usually works: - Create an IAM Group - Add Users to that Group. An example would be the Q&A team, developers, sales engineers, and so on. - Create a Policy: You have three options in creating policies. You can copy and edit existing policies. AWS has a ton of them that cover the basics you need, like access to S3, EC2, and so on. There's also an IAM policy generator. Lastly, you can also write your own JSON policies (see the policy sketch at the end of this piece). - Associate Users with Roles - That's it. With resources (instances, load balancers, etc.) organized into resource groups, you assign your users to roles in the scope of these resource groups. At the end of the webinar, Ron & Sebastian had a great freeform discussion on which cloud provider companies should use.
The general consensus is that you should start with AWS – going back to that concept of 'easy to use, hard to master' – and then expand into other clouds as you look for better solutions. Most enterprises end up with a multi-cloud solution, as different business units and teams will use a particular public cloud for a specific solution. That's not an issue though – with cloud management platforms you can take the concepts we discussed above and easily abstract them to a higher level, meaning that visibility into pricing, security, role-based access, and services is easily handled through one interface. This is just an overview of the webinar, so if you'd like to learn more, and specifically how Scalr solves the issues we mentioned above, watch our on-demand webinar – AWS vs Azure – 5 Differences You Need To Know. The first half covers an overview of how enterprises manage multiple AWS accounts; at the thirty-five minute mark we jump into solutions for the challenges discussed above.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585321.65/warc/CC-MAIN-20211020121220-20211020151220-00558.warc.gz
CC-MAIN-2021-43
14,676
83
http://forums.linuxmint.com/viewtopic.php?p=445698
code
I've been using Linux Mint for almost two years now, love it. Thought I'd mention an open source project I've just published called gAddressBookTool. The package can be downloaded here: https://sourceforge.net/projects/gaddressbktool/files/ This software can manipulate VCard and CSV address books: merging records, distinguishing mobile numbers from landlines using a database of national dial plans, and reconfiguring the fields and text formatting of phone numbers. I wrote it because I wanted to collect and merge all my address books – from Thunderbird and my old Nokia phone – so that I could export them onto my Android smartphone. It is written in Python with a GTK interface. I wrote it on a laptop with Linux Mint, using the Geany IDE.
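As an illustration of the dial-plan idea, classifying a number by prefix might look like the sketch below. The prefix table is a made-up example, not gAddressBookTool's actual database:

    # Toy classifier: decide mobile vs landline from a national dial plan.
    DIAL_PLAN = {
        "mobile": ("07",),        # e.g. UK-style mobile prefixes (assumed)
        "landline": ("01", "02"),
    }

    def classify(number: str) -> str:
        digits = number.replace(" ", "").replace("-", "")
        for kind, prefixes in DIAL_PLAN.items():
            if digits.startswith(prefixes):
                return kind
        return "unknown"

    print(classify("07700 900123"))   # -> mobile
    print(classify("020 7946 0958"))  # -> landline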
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701166222.10/warc/CC-MAIN-20160205193926-00271-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
737
6
http://poker.stackexchange.com/questions/tagged/drawing+bubble
code
Tricky Tournament Bubble Situation
How (and why) do you play this hand as big stack on the bubble of a short-handed SNG? All players are reasonably skilled. BB has played pretty tight in the previous ~100 hands this tournament, and in ...
Jul 17 '12 at 16:35
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011131391/warc/CC-MAIN-20140305091851-00047-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,347
52
https://utilizewindows.com/mac-address/
code
Before you start
Objectives: learn what a MAC address is, how and when it is used, and which protocols are associated with it. Prerequisites: none.
Key terms: address, mac, destination, ip, frame, source, layer, packet, adapter, broadcast, arp, rarp, ffff, card
An Ethernet MAC address is a 12-character (48-bit) number that uniquely identifies each Network Adapter. The digits are hexadecimal, meaning each one ranges from 0-9 or A-F, giving 16 possible values for each position: zero through nine (10 values), plus A, B, C, D, E and F. An example MAC address would be: 00-1A-4B-59-95-6D
The MAC address is a 48-bit number of 12 digits, often separated by dashes, periods or colons. It uniquely identifies each Network Adapter card and is physically burned into the card. For this reason, it is often called a burned-in address, since it's hard-coded into the hardware.
MAC Address Parts
The MAC address is guaranteed unique through design. It identifies both the Network Adapter and the manufacturer of the network adapter card. Each manufacturer is assigned a range of MAC addresses. The first half (first 6 digits) of the MAC address is assigned to the manufacturer. The manufacturer determines the rest of the address, assigning a unique value which identifies the host address. A manufacturer that uses all the addresses in the original assignment can apply for a new MAC address assignment. Some network cards allow us to change the MAC address through jumpers, switches, or software, but there's really not much reason to do so.
Devices use the MAC address to send Frames to other devices on the same subnet – the same network segment. MAC addresses exist at layer two, the Data Link layer, of the OSI model. Before a device can communicate with another host, it needs to know the MAC address of the destination device on that segment.
In our example we have two computers connected through a simple network. PC 1 has a message that it needs to send to PC 2. On the bottom we have listed the seven OSI model layers.
Image 220.1 – Simple Network
Remember that at layer 3 a Packet is created. The Packet includes the destination IP address and the source IP address. So in this case the packet information includes the data, the IP address of PC 2 as the destination, and the IP address of PC 1 as the source. The Packet is then converted into a Frame for layer 2. Within the Frame we need the MAC address of the destination device, the MAC address of the source device, and the packet information.
Image 220.2 – First 3 Layers
In our example let's say that PC 1 has an address of 10.10.10.1 and PC 2 has an address of 10.10.10.2, and let's assume that PC 1 already knows the IP address of PC 2. As it creates the Frame it uses its own MAC address as the source address within the Frame. But how does it learn what the destination MAC address is? Before two devices can communicate, they must know each other's MAC addresses. In this case the computer uses a protocol called the Address Resolution Protocol (ARP) to discover the MAC address of a device from a known IP address. The source device creates a special frame with its own address as the source, and it uses the broadcast MAC address as the destination address.
The broadcast address is an address where all positions are set to F. The frame is then sent across the wire.
Image 220.3 – MAC Broadcast
PC 2 receives the Frame and looks at the destination MAC address. It realizes that it's a broadcast frame, so it needs to process the Frame to see what's inside. It strips off the Frame headers, leaving a Packet with a destination IP address that matches its own. The host then knows that it needs to process the Packet and respond to it. The Packet basically asks: what is your MAC address? To supply the MAC address to the original device, PC 2 creates a new packet using its own address as the source address, and the original sending device's IP address as the destination address. It gets this information from the original Packet. It then creates a Frame using the original source MAC address as the destination MAC address and its own MAC address as the source MAC address, and it sends that information back to PC 1.
Image 220.4 – Response
PC 1 receives this special Frame, realizes that the Frame is addressed to itself, and then obtains the MAC address of the destination device. Once this original device knows the MAC address of the destination device, it can create the Packet with the destination IP address and its own source address, create a Frame using the destination MAC address as the destination and its own MAC address as the source address, and send those packets to the destination device.
Image 220.5 – Sending Data
So hosts use ARP to find the MAC address of a host when they know the IP address. To find the MAC address of the recipient, the sending device sends out a broadcast frame. This means that the destination MAC address is all F's (FFFF:FFFF:FFFF), the source MAC address is its own MAC address, the destination IP address is the known IP address of the destination host, and the source IP address is its own IP address. All hosts on the subnet process the broadcast frame, looking at the destination IP address. If the destination IP address matches its own address, the host responds with a Frame that includes its own MAC address as the sending MAC address. The original sender then reads the MAC address from the Frame, associates the IP address with the MAC address, and saves it in cache. Once the sender knows the MAC address of the receiver, it sends data in Frames addressed to the destination device. Another protocol called Reverse ARP or RARP is used to find the IP address when the MAC address is already known. Once a host learns the MAC address of a destination device, it puts that MAC address into a table so that the next time it needs that information it is available in its cache and the host does not have to go through the ARP process.
Network Adapters function at both layer 1 and layer 2 of the OSI model. At layer 1, the Network Adapter is responsible for sending the bits on the transmission medium. At layer 2, the Network Adapter uses the MAC address to create Frames. Transceivers and Media Converters operate only at layer 1, and both are responsible for taking the bits and converting them to electrical, light or other signals for transmission on the transmission medium. Frames include a Cyclic Redundancy Check (CRC) which is used to detect Frames that have been corrupted during transmission.
Image 220.6 – CRC
The MAC address is a 48-bit number of 12 digits, often separated by dashes, periods or colons. It uniquely identifies each network adapter card and is physically burned into the network adapter card.
The first half (first 6 digits) of the MAC address is assigned to each manufacturer. MAC addresses exist at layer two, the Data Link layer, of the OSI model. ARP is used to discover the MAC address of a device from a known IP address. The broadcast MAC address is an address where all positions are set to F (FFFF:FFFF:FFFF). RARP is used to find the IP address when the MAC address is already known.
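A small sketch of the "two halves" structure described above, splitting a MAC into its manufacturer (OUI) part and its device part:

    # Split a MAC address into its manufacturer (OUI) and device halves.
    # Accepts dash, colon, or dot separators.
    def split_mac(mac: str):
        digits = "".join(c for c in mac if c.isalnum()).upper()
        if len(digits) != 12:
            raise ValueError("a MAC address has 12 hex digits")
        return digits[:6], digits[6:]

    oui, device = split_mac("00-1A-4B-59-95-6D")
    print(oui)     # 001A4B -> identifies the manufacturer
    print(device)  # 59956D -> assigned by that manufacturer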
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100047.66/warc/CC-MAIN-20231129010302-20231129040302-00220.warc.gz
CC-MAIN-2023-50
7,503
24
https://en.opensuse.org/openSUSE:At_Flickr
code
So, we created openSUSE at Flickr as a place to contribute artwork and select pieces that will go into the next openSUSE release. Right now we are looking for wallpapers. Wallpaper in general means non-busy images with large areas of more or less monotonous coloring, so that desktop elements are not lost, like:
- macro photos
- coast and sea
- custom made graphics
Also, they should be viewable by anyone in the family, which eliminates some categories:
- sexually explicit
- depicting violence
- inciting hatred
You can post images to the group with any license, but to include images in an openSUSE release we need a license that will allow:
- changes in the artwork, like adding the openSUSE logo, the text openSUSE, and similar,
- changes in size, to fit different screen sizes,
- using parts to create openSUSE-branded splash screens for applications, and similar,
- using parts to illustrate documentation.
We consider that the above is possible with the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA 3.0).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662512229.26/warc/CC-MAIN-20220516172745-20220516202745-00061.warc.gz
CC-MAIN-2022-21
1,002
16
http://www.horizonsunlimited.com/hubb/yamaha-tech/name-that-tenere-21943-print
code
name that tenere
I'm kinda new to old Teneres. I would like to know if my import on a D plate (UK) is a Z model, so I can order the right parts to service it. It has a kickstart and a rear drum brake – does this make it the earliest model? Cheers.
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510270877.35/warc/CC-MAIN-20140728011750-00156-ip-10-146-231-18.ec2.internal.warc.gz
CC-MAIN-2014-23
288
3
https://forums.wincustomize.com/336414
code
Well, I'm really disappointed by the latest CursorFX (premium) version. Each time a piece of software or the system is busy, the cursor doesn't react anymore. It stays frozen on the screen until the software/application is no longer busy. In the end that's just too annoying, and I restore the regular cursors. Also, there's that stupid bug: if I press Apply in the CursorFX panel, the regular cursor comes back. I must double-click on the cursor I want, and then press the Close button to keep it. And finally, I must say I'm fed up with all the bugs that the Stardock applications have (WindowBlinds has annoying bugs too). I'm thinking now that an Object Desktop subscription is not worth it.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689411.82/warc/CC-MAIN-20170922235700-20170923015700-00650.warc.gz
CC-MAIN-2017-39
694
6
https://gis.meta.stackexchange.com/users/142375/georgy
code
Top network posts
- 86 Pythonic type hints with pandas?
- 34 Find coordinate of the closest point on polygon in Shapely
- 27 Inverse transform sampling
- 19 How to select all non-black pixels in a NumPy array?
- 17 Asking the user for input until they give a valid response
- 17 Efficiently selecting spatially distributed weighted points
- 16 Calculate overlapped area between two rectangles
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00236.warc.gz
CC-MAIN-2020-50
474
11
https://soopertramp.com/projects_blogs/1.%20Python/1.%20basic_python/tuple.html
code
Dictionary Manipulation: Mode Calculation, Tuple Modification, Stop Words, and Key Checking
In this article, we will explore various Python code snippets that focus on dictionary manipulation tasks. We will cover topics such as calculating the mode of a list, modifying tuples within a list, updating stop words in natural language processing, and checking for the existence of keys in a dictionary. These examples will highlight the flexibility and functionality of Python in working with dictionaries and performing common operations efficiently. Throughout the process, I gained insights into fundamental concepts such as tuples, sets, and dictionaries.
Problem 1: Calculating the Mode of a List
To start, we are given a list of temperatures and are asked to calculate the mode. The mode is defined as the value that appears most frequently in the data. We accomplish this by first obtaining the unique set of values from the list. Then, we create an empty dictionary to store the unique temperatures as keys and their corresponding counts as values. By iterating through the list and updating the dictionary accordingly, we can determine the mode temperature.
Problem 2: Modifying Tuples within a List
Next, we are presented with a list of tuples and tasked with modifying the last element of the last tuple. Since tuples are immutable, we convert the last tuple to a list, update the desired element, and convert it back to a tuple. Finally, we print the updated list of tuples.
Problem 3: Updating Stop Words in NLP
In this problem, we focus on natural language processing (NLP) and the concept of stop words. Stop words are common words that are typically removed in text processing tasks. We begin with a sample sentence and a default set of stop words. By updating the set of stop words with additional custom words, we ensure that our NLP tasks exclude those specific words during text processing.
Problem 4: Checking for Key Existence in a Dictionary
Lastly, we explore dictionary manipulation by checking for the existence of a key. We start with a dictionary and verify whether a given key is present, storing the result in a boolean variable. If the key exists, we remove it from the dictionary; if it does not, we add the key along with its corresponding value. This demonstrates how Python allows for easy checking and manipulation of keys in a dictionary.
In this article, we have covered various Python code snippets for dictionary manipulation. We learned how to calculate the mode of a list by updating a dictionary with key-value pairs representing unique elements and their counts. Additionally, we explored how to modify tuples within a list and update stop words in NLP tasks by manipulating sets. Lastly, we checked for the existence of keys in a dictionary and demonstrated how to add or remove them as needed. These examples highlight the flexibility and efficiency of Python when working with dictionaries, making it a valuable tool for data manipulation tasks.
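The snippets themselves are not reproduced in this excerpt; minimal versions consistent with the four descriptions might look like this (the temperatures, sentence, and key names are illustrative):

    # Problem 1: mode of a list via a dict of counts.
    temperatures = [21, 23, 23, 25, 23, 21]
    counts = {}
    for t in set(temperatures):
        counts[t] = temperatures.count(t)
    mode = max(counts, key=counts.get)
    print(mode)  # 23

    # Problem 2: modify the last element of the last tuple in a list.
    pairs = [(1, 2), (3, 4), (5, 6)]
    last = list(pairs[-1])      # tuples are immutable, so go via a list
    last[-1] = 60
    pairs[-1] = tuple(last)
    print(pairs)  # [(1, 2), (3, 4), (5, 60)]

    # Problem 3: extend a default stop-word set with custom words.
    stop_words = {"the", "a", "is"}
    stop_words.update({"said", "mr"})
    sentence = "the cat is on the mat said mr smith"
    print([w for w in sentence.split() if w not in stop_words])

    # Problem 4: remove a key if present, otherwise add it.
    inventory = {"apples": 3, "pears": 1}
    key = "apples"
    exists = key in inventory
    if exists:
        del inventory[key]
    else:
        inventory[key] = 0
    print(inventory)  # {'pears': 1}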
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00345.warc.gz
CC-MAIN-2023-50
2,956
11
https://gaming.stackexchange.com/questions/285074/how-can-i-execute-armorstands-around-me
code
I would like to execute from the red ArmorStand to the blue ArmorStands, and the blue ArmorStands should say "hi". My problem is that I can't use tags. So if I use "r=2", only the ArmorStands next to the red one will say "hi". How can I select all eight blue ArmorStands without tags?
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143963.79/warc/CC-MAIN-20200219000604-20200219030604-00130.warc.gz
CC-MAIN-2020-10
284
3
http://fidees.ru/technology-that-doesnt-rely-on-updating-signatures-or-urls-23317.html
code
Using a shared access signature (SAS) is a powerful way to grant limited access to objects in your storage account to other clients, without having to expose your account key. A shared access signature provides delegated access to resources in your storage account. This means that you can grant a client limited permissions to objects in your storage account for a specified period of time and with a specified set of permissions, without having to share your account access keys. The SAS is a URI that encompasses in its query parameters all of the information necessary for authenticated access to a storage resource. To access storage resources with the SAS, the client only needs to pass the SAS to the appropriate constructor or method. You can use a SAS when you want to provide access to resources in your storage account to a client that can't be trusted with the account key. Your storage account keys include both a primary and a secondary key, both of which grant administrative access to your account and all of the resources in it. Exposing either of your account keys opens your account to the possibility of malicious or negligent use. Shared access signatures provide a safe alternative that allows other clients to read, write, and delete data in your storage account according to the permissions you've granted, without need for the account key. A common scenario where a SAS is useful is a service where users read and write their own data to your storage account. In a scenario where a storage account stores user data, there are two typical design patterns:
1. Clients upload and download data via a front-end proxy service, which performs authentication. This front-end proxy service has the advantage of allowing validation of business rules, but for large amounts of data or high-volume transactions, creating a service that can scale to match demand may be expensive or difficult.
2. A lightweight service authenticates the client as needed and then generates a SAS.
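As a sketch of the second pattern with the azure-storage-blob v12 Python SDK (the account name, key, container, and blob are placeholders, and older SDK versions expose a different API):

    from datetime import datetime, timedelta
    from azure.storage.blob import generate_blob_sas, BlobSasPermissions

    # Placeholders - substitute your own account details.
    sas_token = generate_blob_sas(
        account_name="myaccount",
        container_name="userdata",
        blob_name="alice/profile.json",
        account_key="<account key>",
        permission=BlobSasPermissions(read=True, write=True),
        expiry=datetime.utcnow() + timedelta(hours=1),  # limited lifetime
    )

    # The client appends the token to the blob URL - no account key needed.
    url = ("https://myaccount.blob.core.windows.net/"
           "userdata/alice/profile.json?" + sas_token)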
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00228.warc.gz
CC-MAIN-2019-30
2,139
10
http://stackoverflow.com/questions/961449/a-problem-in-the-field-of-blend-of-opengl?answertab=oldest
code
I draw an image as a background on the screen first, then draw a mask for a picture like this: it is a circle with white in the middle, and all of the rest is black. I use glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); to make it display only the white circle on the background image. Then I need to draw another image at the same position as the mask. My aim is to make this image be drawn only where it corresponds with the white circle. In fact what I want to draw is a moon, and I must make it opaque. What should I do? I hope to receive your help. You can email me at [email protected]. Thanks very much!
s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644064538.31/warc/CC-MAIN-20150827025424-00090-ip-10-171-96-226.ec2.internal.warc.gz
CC-MAIN-2015-35
625
3
https://moz.com/community/q/best-to-create-a-new-blog-as-a-subsection-or-as-a-new-site
code
Our company has a blog "feature" or "theme" that we'd like to publish every week at least. Should we create a subsection of our current company blog? Or should we register a new domain specifically for the feature? The latter would allow us to do some linking back to the main website, and would probably be good in terms of future career flexibility / personal branding for whoever writes it. Things to keep in mind? Anyone have any suggestions? Best practices? What should I do?
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038118762.49/warc/CC-MAIN-20210417071833-20210417101833-00564.warc.gz
CC-MAIN-2021-17
480
3
https://forum.openframeworks.cc/t/ipad-2-9-3-5-sdk-9-3-ofxosc-not-working/28033
code
I have successfully compiled and tested the ofxOscSender example. Working in the simulator, it sends OSC messages properly – another OF-based application running on the same computer receives the messages. I could compile and upload the ofxOscSender example to an iPad 2 (iOS 9.3.5) – as a developer test, not from the App Store – and open the application on the device, but it doesn't seem to be sending any messages. The application doesn't receive anything, nor does OSCulator detect any message. Ports are the same. I am wondering if there is any option that I should check when compiling for my iPad 2 test. Any help is highly appreciated!
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104054564.59/warc/CC-MAIN-20220702101738-20220702131738-00016.warc.gz
CC-MAIN-2022-27
658
4
http://stackoverflow.com/questions/15809873/tortoisehg-how-refresh-list-of-files-from-api
code
I have in my repo some files with a specific binary format, and I want to see their contents. For each such binary file I put in the repo a specific txt file, containing an md5 and a nicely formatted rendering of the binary content. To minimize manual steps I wrote a precommit hook in Python that sees changes in the binary file and checks whether the txt-formatted file matches the new binary one. If the match fails, the hook automatically refreshes the txt content and does not allow the commit. At that point I need to manually press F5 to refresh the list of changes, because TortoiseHG does not include in the commit files that are Modified but not present in the list...
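A rough sketch of what such a hook can look like (the .bin/.txt naming convention and the hook name are invented for illustration; the wiring goes in .hg/hgrc under [hooks], e.g. "pretxncommit.sidecar = python:sidecar_hook.check", and a truthy return value from a pretxncommit hook aborts the commit):

    import hashlib
    import os

    def md5_of(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    def check(ui, repo, node=None, **kwargs):
        ctx = repo[node]  # the changeset being committed
        for path in ctx.files():
            if not path.endswith(".bin"):  # binary extension is an assumption
                continue
            sidecar = os.path.join(repo.root, path + ".txt")
            want = md5_of(os.path.join(repo.root, path))
            have = None
            if os.path.exists(sidecar):
                with open(sidecar) as f:
                    have = f.readline().strip()
            if have != want:
                with open(sidecar, "w") as f:
                    f.write(want + "\n")  # refresh the sidecar file...
                ui.warn("refreshed %s; review and commit again\n" % sidecar)
                return True               # ...and abort this commit
        return False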
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246639414.6/warc/CC-MAIN-20150417045719-00166-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
560
4
https://www.fi.freelancer.com/projects/software-architecture/point-sale-system-15776490/
code
I need you to develop a complete Point of Sale system for a restaurant using MVC and a SQL Server database.
34 freelancers are bidding on average for this job.
Designed and developed many systems and solutions. Contact me with more details. I am interested. Relevant Skills and Experience: 4+ years of professional experience. Proposed Milestones: $1250 USD.
Hi, I am very interested in your project. I am very good at ASP.NET MVC and MSSQL. Relevant Skills and Experience: MVC, SQL. Proposed Milestones: $1250 USD – milestone
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039748315.98/warc/CC-MAIN-20181121112832-20181121134832-00107.warc.gz
CC-MAIN-2018-47
594
4
https://community.smartbear.com/discussions/zephyrsquad/test-cycle-disappearing/236933/replies/236983
code
I have recently had a Test Cycle disappear from the cycle summary for a release, even though when I looked at Test Executions I can see that Test Cases are still showing as added to this Test Cycle. ... I've had this happen; I figured out that if there are too many folders under a release it starts "losing" them, just as you described. I ended up creating some "releases" for each year and archiving test cycles into them to take them out of the active rotation. Thanks for the response and good tip. I've just realised my description wasn't quite correct: it's the Test Cycle that disappeared and then reappeared. I've just checked and it has now disappeared again. I suspect that one of the managers running the tests is doing something in error, but I'm not sure what. Yes, sorry, and I was slightly unclear as well. I am also referring to the test cycle getting lost – due to how my dev team uses Jira we don't have releases set up in the way that Zephyr uses to create release folders, so I'll set up test cycles in the Unreleased > Unscheduled folder by default. When I had too many (15? 20?) test cycles in Unscheduled they started not appearing in the left-hand navigation tree, but I could use Search Test Executions, where I could see, and navigate to, the test cycle, so I knew they weren't actually missing. I created a fake release named "2021" for the Jira project, the folder showed up in the Cycle Summary tree under Unreleased, I moved last year's test cycles into it, and the "missing" test cycle popped back up in the Unscheduled folder. Thanks for the response – this sort of sounds related. I actually have set up releases in Jira, mainly for reporting purposes, and the Test Cycle that has gone missing is in a release of its own. I have raised a ticket which is being looked at now; they suggested the following: navigate to settings > apps > general configuration > clear server > perform project metadata index and execution index – which didn't work 🙂 I will update this thread if they come back.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818468.34/warc/CC-MAIN-20240423064231-20240423094231-00386.warc.gz
CC-MAIN-2024-18
2,009
6
http://agropolibj.cluster023.hosting.ovh.net/European-Coffee-Science-Master
code
We propose a workshop to study the conditions for the emergence of the "International Master on Coffee and Coffee Beverages Sciences". We think that this Master should have the following objectives: a) implement coffee culture and 'caféologie' techniques in production contexts, b) define production strategies and technical itineraries, in relation with marketing, c) conduct experimentation and research projects in the relevant fields. The contributions of the partners are as follows:
SupAgro: experience teaching specialties (wines and vines)
ZHAW: experience teaching and learning about roasted coffee and packaging
UMRs CIRAD and IRD: agronomies, research questions and training grounds
Industries: internships and professional experiences
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474686.54/warc/CC-MAIN-20240227184934-20240227214934-00014.warc.gz
CC-MAIN-2024-10
745
10
https://gamedev.stackexchange.com/questions/204257/calculating-lookat-position-from-rotation-and-translation-matrix
code
I was wondering if it was possible to calculate a look-at position from the translation and rotation matrices (aka the building blocks of my view matrix). I need the look-at position to implement a special type of frustum culling, but my program does not use a look-at position, so I was looking for a simple way to calculate it. Your view matrix is the inverse of the camera object's transformation, and the inverse of a rotation matrix is just its transpose, so the 3rd row is your camera's local forward vector. Add some multiple d of the third row of your view matrix to your camera position to get a point d units in front of your camera along its forward axis. (This is equivalent to multiplying the vector (0, 0, d, 1) by the camera's transformation matrix, or the inverse of the view matrix, just skipping redundant ops for the zero terms.) Using that as a look-at point will give the same result as your current camera orientation, since your camera is already looking at it. Some APIs treat negative d values as "in front" of the camera, while others say that's behind and positive is in front, so flip the sign on d if you initially get the wrong answer.
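A small numpy sketch of that recipe (it assumes a row-major 4x4 view matrix whose upper-left 3x3 is pure rotation; as noted above, negate d if your convention treats -z as "in front"):

    import numpy as np

    def look_at_point(view, cam_pos, d=10.0):
        # Row 2 of the view matrix's rotation block is the camera's
        # forward vector in world space (rotation inverse = transpose).
        forward = np.asarray(view)[2, :3]
        return np.asarray(cam_pos) + d * forward

    view = np.eye(4)  # identity view: camera at the origin
    print(look_at_point(view, [0.0, 0.0, 0.0]))  # -> [ 0.  0. 10.]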
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00037.warc.gz
CC-MAIN-2024-18
1,168
11
http://www.cricketweb.net/forum/2212030-post6.html
code
I'm in for this one as well. I vote that we pick just 11 players, as it makes balancing the side an interesting challenge. Also I vote that we just have a free-for-all on each round, without a keeper-specific round. If you either forget or don't plan well enough to draft a specialist keeper then it's your own bloody fault – just the same as if you deliberately nominate poor players and end up with them yourself. Basically I think the more freedom we have, the more interesting it is to take part in, and also the more interesting the strategies and teams we will end up with. Last edited by pskov; 30-04-2010 at 01:26 PM.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704655626/warc/CC-MAIN-20130516114415-00091-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
629
5
https://devpost.com/software/mind-media
code
We decided to use this hackathon to try to address contextual computing in the context of home entertainment. With support from Pebble and Emotiv, we started building a system to monitor the user's subconscious and use the data to generate and curate recommendations for film, music, and television. The Pebble watch acts as the primary display center and conscious input device for our platform, allowing the user to react to prompts from the cloud server (Azure) based on the emotional data harvested through the Emotiv EPOC+. Our system starts with the EPOC+. It monitors electrical activity within the brain. This data is processed on a computer (hopefully future iterations of the hardware will allow us to upload directly to the cloud). The computer pushes the relevant data to the cloud. In the cloud we pull data from the user's media accounts and run it through a service to generate an emotional profile for every song. We run those songs against the data pulled from the EPOC+ to create a list of recommendations. The cloud server (Azure) then pushes this list to the Pebble smartwatch, where the user is given the ability to choose between a variety of options. The same list is used to prime the content for playing on the media consumption device. The user then selects his/her choice from the options presented, and the cloud server sends the instructions to the television or stereo system. Although it may not be fully implemented as of now, compiling and documenting the outcome of each event has a foundational role in our system's ability to adjust to the user. Our team decided early on that Pebble was the optimal platform for user input and output for home media consumption. As a team, we think that for a smart watch to actually be part of your daily routine it has to be both unobtrusive and reliable. We spent a fair amount of time developing an application for the Pebble platform to handle all of the active input and output for the cloud application side of our project. The cloud emulator was extremely helpful in this venture. We intend to integrate our system with as many media services as possible, starting with Spotify and moving out (due to registration requirements and security tokens, implementing Spotify support, while very straightforward and well documented, is a significant time commitment). Currently we are limited to local audio and video files, although Spotify support is partially implemented. Unfortunately, Netflix stopped supporting their public API. We spent some time looking for a workaround and may eventually be able to build some sort of remote, but leveraging their content is much more difficult than Spotify's. We are working on adding elements of machine learning to our algorithm, allowing us to create unique profiles for our users, placing an emphasis on the algorithm's failures (spikes in the delta value of the Frustration graph generated by the EPOC+) to make our recommendations more useful. A lot of what we are trying to do varies significantly between users. One man's "sad" music may be another's "happy" music, not to mention the variation between genres and decades. Tracking reactions and decisions as well as Emotiv's quantified emotional data gives us an unprecedented opportunity to accurately adjust to a consumer's individuality and nuances. The core elements of this project have a diverse set of applications. The EPOC+ can essentially bypass the subject's consciousness and provide extremely insightful raw data.
Using our system to monitor psych patients or pilots to help quantify and thereby diminish risk would be invaluable to society. Even using it to help couples communicate by showing each member his/her partner’s mood (ideally with just a simple glance toward their wrist) would be incredibly socially rewarding. If your partner knew how you felt, arguably more accurately than you did, even before you said anything, much of the tension that plagues relationships in our fast paced lives could be alleviated. The one thing our entire group agrees on is that it would be a waste to limit the technology we have been developing to a single specific field.
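Going back to the recommendation step described earlier, a toy version of that emotion-distance ranking might look like this (the per-song emotional profiles and the headset reading are invented numbers, not the project's real data):

    import math

    # Hypothetical (valence, arousal) profiles per song, plus a current
    # reading derived from the headset - all values are made up.
    songs = {
        "Song A": (0.9, 0.8),
        "Song B": (0.2, 0.3),
        "Song C": (0.6, 0.5),
    }
    current_state = (0.25, 0.35)

    def distance(a, b):
        return math.dist(a, b)  # Euclidean distance, Python 3.8+

    recommendations = sorted(songs, key=lambda s: distance(songs[s], current_state))
    print(recommendations)  # closest emotional match first -> ['Song B', ...]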
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500356.92/warc/CC-MAIN-20230206145603-20230206175603-00442.warc.gz
CC-MAIN-2023-06
4,197
1
http://www.businessrefinery.com/b/15/59/
code
If You Were a Dog Page with Event Code

<%@ Page Explicit="True" Language="VB" Debug="True" %>
<html>
<script runat="server">
Sub OKButton_Click(Sender As Object, E As EventArgs)
    Dim UserAge, DogAge As Integer
    UserAge = Age.Text
    DogAge = UserAge / 7
    Message.Text = "If you were a dog, you'd be " & _
        DogAge & " years old"
End Sub
</script>
<body>
<form runat="server">
<h1>If You Were A Dog</h1>
How old are you?<br>
<asp:textbox id="Age" runat="server" />
<asp:button text="OK" onclick="OKButton_Click" runat="server" /><br>
<asp:label id="Message" runat="server" />
</form>
</body>
</html>

Part V: Creating Interactive Web Applications
This code includes three new items: a <script> tag and a subroutine within it; a new label control with its id attribute set to Message; and a new onclick attribute on the button control.
First, take a look at the new attribute on the button server control:
<asp:button text="OK" onclick="OKButton_Click" runat="server" /><br>
The onclick attribute captures the event that happens when the user clicks this button. How? By specifying the name of a subroutine to execute when the event happens. So, in this case, you write code in the OKButton_Click subroutine. Then when the user clicks the OK button, the subroutine is called, and the code there is executed.
If you're using Visual Web Developer 2005 Express, you can use a couple of different techniques to automatically create subroutines to handle an event. You can simply double-click the button in Design view to switch to Source view and automatically create the Click event subroutine. Or if you are already in Source view, you can use the drop-down list boxes at the top of the Editor window to select the button control and then select the Click event. This also automatically creates the new subroutine. For more information and step-by-step directions, see Chapter 4.
The subroutine in Listing 10-2 accepts two arguments: Sender and E. All subroutines that respond to system events receive these arguments. You have to specify them here because that's what the event requires. But you don't have to use them. For now, you can ignore them.
First, you create a couple of integer variables. Then, you assign a value to one of them. The value is Age.Text:
UserAge = Age.Text
As you might expect, Age is an object, and Text is a property of the Age object. But what does Age refer to? Look at the textbox server control:
<asp:textbox id="Age" runat="server" />
The id attribute is set to Age. That enables you to refer to Age as an object from your ASP.NET 2.0 code. When you work with server controls, the server creates the objects for you automatically so that you can immediately access the control's properties. In this case, the Age text box object's Text property holds the information in the textbox control. The text in the textbox control is then placed in the UserAge variable.

Chapter 10: Interfacing with Your Users
The DogAge variable is then calculated by dividing UserAge by 7. Finally, the result is displayed:
Message.Text = "If you were a dog, you'd be " & _
    DogAge & " years old"
This time, you refer to another object: Message. Message is another server control. It's the new label control in this listing:
<asp:label id="Message" runat="server" />
A label control is a lot like a textbox control, except it's designed only for showing text. You can't edit the text in a label. In this case, you use the label to display the result of the calculation. You simply assign a value, using & to stick the strings together with the variable DogAge.
Manipulating Server Control Properties
Properties and methods are the primary ways you work with server controls on your page. So, understanding which properties and methods are available for each control helps you understand what you can do with the controls. I spend the next few chapters helping you do just that. But before I dive into each control and its members, you should know a few things about properties and giving them values. Not all of them work as simply as the label control's Text property. But after you get the hang of these few variations, you can work with any property on any control without trouble! Don't skip these sections; otherwise, you might be baffled by the different ways properties are handled in subsequent chapters.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711005723/warc/CC-MAIN-20130516133005-00053-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
6,243
41
http://sourceforge.net/p/xbmc/mailman/xbmc-svn/thread/[email protected]/
code
Author: CrystalP <CrystalP@...>
Date: 2011-01-22 (Sat, 22 Jan 2011)
fixed: failed cd rips of tracks with names containing the character '/'
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776438940.80/warc/CC-MAIN-20140707234038-00032-ip-10-180-212-248.ec2.internal.warc.gz
CC-MAIN-2014-23
285
4
https://www-01.ibm.com/support/docview.wss?uid=swg21303383
code
Cannot find a free socket for the debugger error debugging Java on Windows Vista
This technote explains why attempts to debug a stand-alone Java™ program in IBM® Rational Application Developer v7 (7.0 through the 7.0.0.x fix packs) on Windows Vista™ result in the error Cannot find a free socket for the debugger. The problem may occur if you are using an affected 7.0.0.x version of RAD or older. This is a known problem with IBM JDK 1.5 SR4 on Windows Vista, which is used to launch RAD workspaces. The bug in IBM JDK 1.5 SR4 when running on Windows Vista causes the Java debugger in RAD to be unable to find a free TCP/IP socket.
Resolving the problem
Update the IBM JDK used by RAD to 1.5 SR5 or above. Since a later RAD 7.0.0.x fix pack is released with an updated IBM JDK 1.5 SR6, a simple way to resolve this issue is by updating RAD to that fix pack.
|Software Development||Rational Software Architect||Windows||7.0 and 7.0.0.x fix packs||Edition Independent|
More support for: Rational Application Developer for WebSphere Software
Software version: 7.0 and 7.0.0.x fix packs
Operating system(s): Windows
Reference #: 1303383
Modified date: 23 November 2010
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583658681.7/warc/CC-MAIN-20190117020806-20190117042806-00165.warc.gz
CC-MAIN-2019-04
1,275
14
https://opensourcelibs.com/lib/abstractmachineslab-zap
code
Zap is a simple, fast, and correct build system for any programming language, that uses Deno and is built in Rust. Learn more at zap.build. - Fast by default. Every build action is cached and reused, and all builds are reproducible. - Ships only a single executable. - Automatically manages language installations per project for you. You can download the zap binary release from the releases page.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00696.warc.gz
CC-MAIN-2022-27
398
6
https://www.bitchute.com/video/xIXAzA555xk/
code
There is only one spot available for the most private and secure phone you can get - and that spot has been taken by GrapheneOS.
Join my channel and become a member to enjoy perks: https://www.youtube.com/channel/UCjr2bPAyPV7t35MvcgT3W8Q/join
Support me through Patreon: https://www.patreon.com/thehatedone - or donate anonymously.
Donate to GrapheneOS: https://grapheneos.org/donate
GrapheneOS is a mobile operating system focused on user privacy and security. It is based on the Android Open Source Project but it is completely devoid of any and all Google apps. This video will give you a thorough introduction to GrapheneOS, an overview of the system, and a tutorial on how to install GrapheneOS on a Google Pixel phone. In this case, I am installing GrapheneOS on a Google Pixel 3a.
Official GrapheneOS tutorial: https://grapheneos.org/install
How to install fastboot (ADB, Android SDK platform-tools): https://www.xda-developers.com/install-adb-windows-macos-linux/
Download fastboot (ADB, Android SDK platform-tools): https://developer.android.com/studio/releases/platform-tools
Install F-Droid to get FOSS apps: https://f-droid.org/
First instrumental by CO.AG Music: https://www.youtube.com/channel/UCcavSftXHgxLBWwLDm_bNvA
Second instrumental by CHUKI BEATS: https://www.youtube.com/user/CHUKImusic
The footage and images featured in the video were used for critical analysis, commentary and parody, which are protected under the Fair Use laws of the United States Copyright Act of 1976.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739370.8/warc/CC-MAIN-20200814190500-20200814220500-00085.warc.gz
CC-MAIN-2020-34
2,327
26
https://www.theknowledgegroup.org/about/sponsor-webcast-cle-cpe/sponsor-profile/?sponsor=GrammaTech,%20Inc
code
GrammaTech is a leading developer of software-assurance tools and advanced cyber-security solutions. We help organizations develop and release high quality software – free of harmful defects and exploitable weaknesses that cause system failures, enable data breaches, and increase corporate liabilities in today’s connected world. With our security-first software design philosophy, you can rely on GrammaTech to help you design, develop, and deploy high-integrity device and application software, using advanced static analysis tools, and software hardening solutions. Unlike other traditional tools vendors, GrammaTech's mission balances a commercial business with a very strong research arm. Our staff, including over 20 PhDs, is focused on the most challenging software issues impacting the embedded, machine-to-machine (M2M), and Internet of Things (IoT) device markets, through an ongoing set of highly innovative research programs developing new techniques and technologies in software analysis, binary patching and transformation, software monitoring, and autonomic computing.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524743.61/warc/CC-MAIN-20210121101406-20210121131406-00098.warc.gz
CC-MAIN-2021-04
1,088
2
http://www.indiana.edu/~statmath/stat/spss/unix/1.html
code
How to use this document
This document is intended to introduce researchers to SPSS for the UNIX environment. University Information Technology Services (UITS) at Indiana University has AIX (IBM) Unix operating systems. To learn more about Unix systems you may use Getting Started with UNIX. You may also enroll in a UITS STEPS class by contacting IT Training & Education. Contact a consultant at the UITS Support Center, or at a UITS Student Technology Center (STC), if you need help. Consultants are on duty at most of the UITS sites. If you need help using SPSS from any UITS computers, contact the UITS Stat/Math Center (e-mail: [email protected]; phone: 812/855-4724 or 317/278-4740). UITS supports SPSS software under the IBM AIX Libra Cluster. If you want to set up an account on any of the timesharing computers, contact the UITS Support Center or visit the webpage:
For more information related to the availability of SPSS at IU, please visit the Availability Web page.
Features of SPSS
SPSS comes with a number of add-on modules along with its Base module. These include the Trends, Tables, and Categories modules. From Release 5 onwards, the Graphics module is incorporated into the Base module. Up to Release 5, the Base module also contains the Statistics module. With Release 5 and above, the Statistics module is separate from the Base module and is divided into Advanced Statistics and Professional Statistics. The Base, Trends, Advanced Statistics, Professional Statistics, Tables, and Graphics modules are available on all central Unix systems. Some features of SPSS are listed below.
Data management capabilities include:
- Detailed labeling of variables and data values; additional documentation of data sets; storage of data and documentation in system files.
- Flexible definition of missing data codes.
- Permanent and temporary transformation of existing variables and computation of new variables; conditional and looping structures for complex data transformations.
- Reading raw data files in a wide variety of formats (e.g., numeric, alphanumeric, binary, dollar, date, and time formats).
- Reading hierarchical and other non-rectangular raw data files.
- Reading, combining, and outputting multiple files.
- Reading matrices for input to procedures.
- The FLIP command to switch the columns and rows in a data set.
- A macro facility to build one's own blocks of SPSS syntax elements and to control the execution of these blocks.
- The ability to read and write compressed files.
Statistical procedures for data analysis include:
- The EXAMINE procedure to explore data sets before deciding on the course of data analysis to perform.
- Descriptive statistics, frequency distributions, and cross-tabulations; bar charts, histograms, and scatterplots.
- The RANK procedure, which produces ranks, normal scores, Savage scores, and percentiles for numeric variables.
- T-tests, univariate and multivariate analysis of variance and covariance, including repeated measures and nested designs.
- Multiple regression, nonlinear regression, constrained nonlinear regression.
- Loglinear models for discrete data; probit models.
- Factor and principal components analysis, discriminant analysis, cluster analysis, multidimensional scaling.
- Nonparametric tests.
Besides these capabilities, SPSS add-on modules feature:
- Tables to produce simple or complex tabulations formatted for presentation.
- Trends including time series plots, plots of autocorrelation, partial autocorrelation, and cross-correlation functions, smoothing, seasonal regression, Box-Jenkins methods, spectral methods, and forecasting.
- Categories for doing conjoint analysis and optimal scaling.
When working in a UNIX environment, you often hear about the C-shell (csh), Bourne shell (sh), and Korn shell (ksh). These are simply command language interpreters. They tell the system to act on the command you type in from a terminal. Each shell has some unique features. For SPSS computing, it makes no difference which shell you use. You access SPSS the same way whether you are in the K-shell or C-shell. Which shell is the default varies according to the system you're using. To change your local login shell, use the chsh command. You can also switch shells by typing ksh (from the C-shell) or csh (from the K-shell). The .login and .cshrc files are executed during login if you use the C-shell; the .login and .kshrc files are executed during login if you use the K-shell. For more on shells, see an introductory guide to UNIX, or The least you need to know about UNIX.
Helpful UNIX commands
Below are a few UNIX commands you may find useful. Italics denote a parameter that you must specify (e.g. filename, directory name, etc.).
ls – list files in directory
ls -l – list files in directory in detail
quota – display disk quota (if any)
history – see a list of commands executed so far
date – print date and time
who – see a list of all logged in users
whoami – who is logged on to this account
pwd – show current directory
passwd – change password
cat file – list the contents of the file
cat file1 file2 > file3 – concatenate file1 and file2 into file3
more file – list file page by page
cp file1 file2 – copy file1 to file2
mv file1 file2 – rename file1 to file2
rm file – delete the file
head file – show the first 10 lines of the file
tail file – show the last 10 lines of the file
diff file1 file2 – list the file differences
wc file – count the number of lines, words, and characters in the file
chmod mode file – change the protection mode of the file
finger username – give information on the user specified
chfn – change finger information
cd pathname – change to directory pathname
cd .. – move one directory up
cd – move to the login directory
mkdir pathname – create a new directory pathname
rmdir pathname – remove directory pathname
man command – display UNIX manual entry for command
logout – end terminal session
Refer to a UNIX commands document for further information.
Editors in UNIX
You may use one of the several editors (e.g., vi, pico, nano, emacs) available in UNIX. Refer to a user's manual or, at the UNIX prompt, type man followed by the editor name for the online manual. For beginning UNIX users, pico may be the easiest to use. If you're doing e-mail on Shakespeare, you're already using pico, the editor in Pine.
Next: Getting Started
Up: Table of Contents
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708835190/warc/CC-MAIN-20130516125355-00026-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
6,295
41
https://discourse.opentechschool.org/t/community-call-15-11/1674
code
The next edition of the OpenTechSchool Community Call is scheduled for November 16. I posted my findings about internal communication of other distributed organisations here. I can summarize it tonight if wanted. Reminder: This is happening today. @ben: I’ve put you on the agenda. Thank you all for coming to this edition of the Community Call. If you want to read everything about the fantastic work Berlin is doing with refugees, the revival of the Milan chapter, or how everything should just be a Discourse plugin — head directly to the meeting minutes.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303747.41/warc/CC-MAIN-20220122043216-20220122073216-00584.warc.gz
CC-MAIN-2022-05
562
7
https://physics.stackexchange.com/questions/131569/function-to-fit-solar-radiation-data
code
I have ground-level radiation data of solar incoming radiation from a radiometer (cosine collector) measured along the day. In the following plot you can see PAR irradiance (i.e. visible light) in Watts per square meter versus time of day (local time). As usual for this type of sensor, radiation intensity varies with the Sun's angle with respect to ground level. As you can see it doesn't look perfectly smooth; rather it has some 'chopped' sections (most notably after the maximum at noon) due to the presence of transient, passing-by clouds. I would like to use these data points to fit a model and obtain the ideal radiation curve as if it were a clear-sky day, i.e. a perfect, continuous curve. As you can see it is not a Gaussian curve... I've heard before that the appropriate model is the Rayleigh distribution, but I'm not sure... What do you think? Is that the correct one, or should I use another distribution? I don't want to fit just any equation; rather I want to use the appropriate one, suitable for investigating and testing the corresponding parameters. Thanks!
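Whichever functional form turns out to be physically appropriate, the fitting workflow itself is straightforward; here is a sketch using a generic truncated-cosine placeholder model (this is not a claim that it is the right physics, and the data below is synthetic):

    import numpy as np
    from scipy.optimize import curve_fit

    # t: local time in hours; irr: PAR irradiance in W/m^2 (fake data here).
    t = np.linspace(6, 18, 50)
    irr = 900 * np.clip(np.cos(0.26 * (t - 12.2)), 0, None) * (1 - 0.1 * np.random.rand(50))

    def model(t, amp, t_noon, w):
        # Placeholder clear-sky shape: amplitude, solar-noon time, angular width.
        return amp * np.clip(np.cos(w * (t - t_noon)), 0.0, None)

    params, _ = curve_fit(model, t, irr, p0=[900.0, 12.0, 0.25])
    print(dict(zip(["amp", "t_noon", "w"], params)))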
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00855.warc.gz
CC-MAIN-2023-40
1,080
4
https://www.jeffputz.com/blog/mix---day-2-and-3
code
The second day of the conference was very dense, and the last session wrapped at 6. They even split the after-lunch hour into two, so there was double the action there. It was getting difficult to assimilate anything toward the end of the day. The keynote went into a lot of details about the forthcoming updates for the phone, mostly from the developer perspective, as well as some stuff about Silverlight 5. They also did some interesting demos with Kinect, using the PC-based SDK. One demo had a motorized chair controlled by it. I slipped out just before the end, which is a bummer because that's when they announced that attendees would all receive a free Kinect (employees not included). I did hear people go ape shit from the hall. I talked to virtually all of the contacts I've made in the Windows Azure org, and made some new contacts with the guys who are covering the new AppFabric caching service, which is awesome. At some point I'm gonna adapt the forums to use that, and see how they roll in Azure. I think this was the most engaged I've been in any day of any of the four Mix conferences I've been to. The quality of the content this year has been completely awesome. It was nice to wind down the day seeing Lion King. On the third day I only went to morning stuff, because of my flight time. With NAB ending as well, I did not want to mess with long waits at the airport. I think my concern was well founded, because I just got through, and it was already a 20 minute wait. Lots of time to kill, but I just saw "Major Nelson" (the Xbox Live guy from Microsoft), so perhaps I'll make some conversation with him. Pretty solid conference. If I'm here next year, I suspect it might be to work in some fashion, instead of just absorb content, but that's OK.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104496688.78/warc/CC-MAIN-20220704202455-20220704232455-00226.warc.gz
CC-MAIN-2022-27
1,769
6
https://community.se.com/t5/Building-Automation-Knowledge/Attempts-to-change-Controllers-IP-address-fails-and-unable-to/ta-p/1065
code
The controller's IP address needs to be changed, but any attempt to do so results in the IP address reverting back to what it was previously configured for.
- BCX controllers
- NC2 controllers
- ACX57xx controllers
The controller has hit the Flash-writes circuit breaker and is no longer able to save any changes to Flash. This includes updating device settings such as the IP address and ACC Node ID. The controller error log may also contain an error similar to the one below when attempting to make any such changes.
10/08/2014 19:00:22.00 0x00004406 0x03b76100 0x0004cb68 0x003e2000 0x000007d1 DEVICE_ERR_ERASE_LIMIT (Flash circuit breaker tripped. CX97XX code base)
One of the two methods to reset the Flash circuit breaker listed below should then allow the controller to write to Flash again.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100779.51/warc/CC-MAIN-20231208212357-20231209002357-00651.warc.gz
CC-MAIN-2023-50
781
8
https://answers.microsoft.com/en-us/mobiledevices/forum/wpdp-wpupdate/display-tearing-on-lumia-920/78ba6f8a-d939-4d2c-b021-b3814ad210e9?tm=1412571834443
code
I have a Lumia 920, currently on the developer preview. It is now on Cyan, but the display has developed a tearing artifact - much like the tearing you get on PCs when the refresh rate isn't synchronized. There is a video on YouTube displaying this issue: The proposed solution is to use the recovery tool, which I believe is an unacceptable solution. Coupled with this, the phone now feels very slow, and I'm not sure if it's due to the "sync" issue, or unrelated.
My OS version is 8.10.14176.243
Firmware: 3051.50009.1424.0008
H/W revision: 184.108.40.206
Let me know if you need more information.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00306.warc.gz
CC-MAIN-2019-39
603
7
https://www.lesswrong.com/posts/nGExy2RtLi3YuitxC/about-me-background-and-biases
code
I've been a civil servant working in state government for almost ten years. I've worked at several regulatory agencies, I take lots of civil service tests to (among other things) manage my nagging impostor syndrome, and as a result I've also had the opportunity to interview for jobs in this leviathan I expect to be connected to for the foreseeable future. Even when I didn't get the jobs I interviewed for, it's always given me an opportunity to "pull back the curtain" on a lot of the under-appreciated things that government does (sometimes unbeknownst to average citizens). Currently, I work as a paralegal, but I dabble with actuarial science (I passed Exam P and I'm currently studying for Exam FM). Before I started in state government I temped at an insurance company. In what now seems like prehistoric times I pursued a PhD in math, managed to pass my qualifying written and oral exams, completed a master's degree, but never completed a dissertation. Cognoscenti refer to this as "ABD" ("all but dissertation"). I'm fortunate enough to be represented by a union that negotiated very good educational benefits, and that allows me to take courses that nourish my mind. This gives me some insight that others might not have, but inevitably it may also bias me somewhat. Specifically, I may have higher standards than most for what "good" government systems ought to look like, particularly those that leverage information technology. Thus, while I think we might all broadly agree on Gehm's contrapositive of Clarke's Third Law, to wit: Any technology that does not appear magical is insufficiently advanced. I am also quite willing to concede that some things that might appear magical to you, dear reader, may not seem quite as magical to me - and vice versa. In some cases the best thing we can do is let everyone else catch up. Hope this helps.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499831.97/warc/CC-MAIN-20230130232547-20230131022547-00128.warc.gz
CC-MAIN-2023-06
1,857
6
https://petri.com/start-threaded-conversation-slack/
code
How to Start a Threaded Conversation in Slack
In this Ask the Admin, I'll show you how to work with threaded conversations in Slack.
Slack has proved to be popular in the small business collaboration space, first with developers adopting the tool as a replacement for email, and then as it gradually filtered out to other teams, who saw the benefits Slack provided over email as a collaboration tool. The ability to make calls and share documents was also added, posing a threat to document management and collaboration solutions from Microsoft, which responded in late 2016 with Microsoft Teams, its Slack challenger that should reach general availability sometime in Q2 2017. For more information about Slack, see What is Slack and Is It Better Than Email? on the Petri IT Knowledgebase.
But the problem I've always had with Slack channels is that they get cluttered over time: while a channel is initially created to discuss a particular topic, at some point the conversation goes off on various tangents, making the thread hard to follow. That was the case until a recent update to Slack that provides threaded conversations, making it easier to keep channels in order. It's worth noting that the addition of threads is no doubt a response to Microsoft Teams, which supports threaded conversations, even though it's still in preview.
Start a New Thread
To complete the instructions below, you'll need to log into a Slack team using the desktop app, version 2.4.1 or later, or open Slack in a browser window.
- Open the Slack desktop app.
- Select a channel in the list of available channels on the left.
- If no conversation has been started in the channel, you'll need to add at least one message.
- Hover the mouse pointer over a message in the channel window, and click the speech bubble icon in the context menu that appears to the right.
- A new Thread panel will open to the right of the channel pane.
- Type one or more messages in the Thread panel.
Before committing a message to the thread, which you do by pressing ENTER, you can optionally check Also send to #channel to write the message to the parent channel as well as the thread. This is useful for copying important decisions that get made in threads into the main channel as well.
Unlike Microsoft Teams, threads are summarized in the channel window rather than expanded in full, so if you want to see the complete thread text, you'll need to click View thread, which appears to the bottom right when you hover the mouse pointer over the replies count. Additionally, all your active threads can be viewed by clicking All Threads at the top of the team pane.
In this article, I showed you how to work with threaded conversations in Slack.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00358.warc.gz
CC-MAIN-2023-06
3,391
30
https://next.wooting.io/post/this-is-how-wooting-analog-keyboard-magic-works
code
Amazing, amazing, amazing feedback we've received over the last weeks. We've been scavenging over the interwebs and read nearly all the comments, in so many languages, that people had posted about the Wooting one. There was one recurring theme: people wanted to know more about the analog magic that happens in games. As you know, we can't tell you yet what technology we're using to read analog values. But what we can tell you is how things work on the software side and just a little tiny bit on the hardware side. I'll do my best to make things as easy as possible, so you don't need to be an engineer to decipher what I'm talking about. coughimnotanengineercough
Hardware - The Wooting one is natively analog
To start with, let's clear one thing straight away; it's not force sensitive. Meaning that you don't have to bottom out the key and start applying an increasing amount of force to accelerate your car, for example. The keyboard works and feels exactly like any beloved mechanical keyboard with those sweet mechanical switches, and you don't notice anything strange or different about it. There are different methods to read analog values from a mechanical keyboard but, usually, this means you'll need to add extra hardware or devices to read this input. The Wooting one doesn't have this problem because it's entirely analog. Meaning that it inherently sends an analog signal, but it depends on other factors whether it outputs an analog or digital signal. So, you can still use the keyboard as any mechanical keyboard reading an on/off signal; even though it has analog inside, you won't notice it. Note: Remember from this blog, analog can send different input values ranging, for example, from 0 to 100, while digital can only send either 0 or 1, also known as an on/off signal. The current prototype only has 16 analog keys because we haven't found the most efficient method yet to read all 87/88 keys' (ANSI/ISO layout) analog signals without making the keyboard monster size and/or too expensive. But we have a very good lead that will make the entire keyboard read analog signals without the costly downsides. We'll let you know as soon as we know! Update 01-may-2016: We made it work without the extra costs... All keys analog!
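To make the 0-to-100 versus on/off distinction concrete, here is a toy sketch (purely illustrative, not Wooting firmware; the 0.6 actuation depth is an arbitrary assumption):

# Toy illustration: the same key travel, read as digital vs. analog.
ACTUATION_POINT = 0.6  # hypothetical actuation depth of a digital switch

def digital_read(depth):
    # Classic mechanical switch: on/off once past a fixed actuation point.
    return 1 if depth >= ACTUATION_POINT else 0

def analog_read(depth):
    # Analog switch: report the whole travel range as 0..100.
    return round(depth * 100)

for depth in (0.0, 0.3, 0.59, 0.6, 1.0):
    print(depth, digital_read(depth), analog_read(depth))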
Software - Where the magic happens
This is the truth: a PC doesn't understand or support an analog keyboard. It has no idea what to do with it but, thankfully, we have a way to make it understand everything without you installing all different kinds of software. To understand how things work, you first need to understand the basics of how the hell a PC knows what to do when you press a key on any input device. I'll talk in "Jip en Janneke" language. Education time!
[Image caption: Let's see if you can find out what language we used here ;)]
All input devices talk their own language, better known as a "protocol". When you press a button, the device starts talking to the computer in its own language. Unfortunately, the computer doesn't understand any of these device languages! Because it only understands bleep-bloops. That's where "drivers" on the computer jump in. A driver translates this device language into bleep-bloops that the computer understands, but for each different language you need a different driver. That's also why, especially in the past, you always had to download or install additional software (drivers) on your computer to support different devices. Thankfully, in this kind-of-modern age, we have standard drivers already installed on your computer. These standard drivers can already translate keyboard, mouse, and gamepad language. When a game developer implements controls in a game, it will use these drivers to help connect the dots between input from a device and output in a game. Not so crazy long ago, Microsoft made the gamepad language even easier on the PC and invented "Xinput". Basically, Xinput is like an API: it takes the gamepad language and simplifies it into a language with far less vocabulary and grammar, so that it's easier to implement in games! Another way of looking at it is that Xinput is exactly like an Xbox controller in the amount of buttons or "input". But if a game supports Xinput, it doesn't mean it supports the regular gamepad language. It's up to the game developer to support a certain gamepad, the gamepad to support Xinput, or a software emulator in between to include Xinput support. OK, take a breather, make some notes and draw some doodles. To put it another way, let's say you have this weird 4-button gamepad with the unique names U, I, O and P printed on the buttons, but when you plug in the gamepad and want to start playing, let's say, Burnout: Paradise, then you won't even be able to start your engine because the game is not compatible with such a weird 4-button device and you can't even map the keys in the menu! That's because the game developer didn't support gamepad language in the game. This is where Xinput jumps in. Basically, Xinput can read the U, I, O and P buttons and simplify them into Xbox controller buttons, like A, X, Y, B. Buttons that a car understands and knows what to do with, or well, that the developer basically used as mappable buttons. So, why don't all game developers just support gamepad language? Because it costs more work and time to implement, while Xinput has a limited amount of buttons, is much easier to implement, and is used by the majority of people. Woeh! Great! Hoerah-Hoerah for gaming, but wait, what about the analog input? Basically, the only analog input that Xinput (Xbox controller) can send comes from the left and right analog sticks, and the left and right trigger keys. It's up to the game developer if and how they make use of it. In nearly all games you'll see that it will use the left joystick for analog movement. Great.
How the Wooting one magic works
The Wooting one keyboard is recognized by the computer as a keyboard and gamepad at the same time. So it can not only send keyboard language but also gamepad language with Xinput support! We've made two modes, Gaming and Typing, that you can switch between for each specific use. Basically, in Typing mode, the keyboard will work exactly like any mechanical keyboard, and in Gaming mode, it will work as an (Xinput) gamepad. In reality, in Gaming mode, the left analog stick is mapped on the WASD keys, but this also means that you can't use WASD for typing anymore. So in Typing mode, the layout is like a regular keyboard without any gamepad buttons mapped. For now, you need to switch between the two, but with more development time it will be possible to type and use it as a gamepad in games, so even when you've mapped the left analog stick on WASD, you can still type like a regular keyboard.
[Image caption: Like this]
So, to recap how gaming mode works. For gaming mode, you can map the "keyboard" and the "Xinput" keys (remember, Xinput is basically the same as an Xbox controller) anywhere you like.
Meaning that you can, for instance, map the left analog stick on the WASD keys and keep the rest as regular keyboard keys. But you can also choose to map the other Xinput (Xbox controller) buttons (ABYX, triggers, etc.) on any key you like. You could also choose to map the left analog stick on both the WASD and the arrow keys at the same time, for whatever reason you see fit.
[Image caption: Or this]
Before going to the next chapter, let's talk a little about how exactly you control an analog stick with 4 keys. An example: you mapped the left analog stick to WASD. W is analog stick forward, A is analog stick left, S is analog stick backward, D is analog stick right. The further down you press a key, the further you press the analog stick towards that direction. So when you press W all the way down and D half way, you've effectively moved the analog stick forward at roughly a 27-degree angle.
[Image caption: Or whatever]
The same idea applies in a game. The further down you press a key, the faster your character starts moving towards that direction. We'll show this in detail in another video when we're not hacking, slashing and testing the prototype. Let's not forget: even though you use Xinput for analog movement etc. on the keyboard, you're still using a mouse for superior aiming! Unfortunately, life is not easy. To use analog movement in a PC game, the game needs to have gamepad or Xinput support. Thankfully, nearly all games have Xinput support, so you can basically plug in an (Xbox) controller and start playing the game with it. The real problem arises when a game developer decides to turn off the mouse and keyboard when you activate the gamepad in the menu, or when you plug in or use a gamepad in the game. In the latter case, the game keeps swapping between devices, possibly turning your mouse on and off, causing laggy mouse movements. That's why you won't be able to use analog movement in all games yet. A real party-pooping situation, because it's not too difficult for a game developer to add support. We've learned that we're not the only ones struggling with this problem. The recently released Steam controller from Valve, in essence, uses the same technique as we do. They use the keyboard, mouse and Xinput languages at the same time to support their analog triggers and touch-sensitive (mouse) movements. We're still in the process of finding alternatives and hackarounds as a temporary solution, but as we progress we're building a permanent solution. More about this in the next chapter. Furthermore, with the introduction of the Wooting one and the further development of the Steam controllers, game developers will increasingly recognize this issue and start providing more support. There's already a great amount of games in which it works smoothly, like Battlefield and GTA5. Note: GTA5 is a special topic; we will cover and talk about it specifically another time. Now, for the not-gaming side of all this, you can use the keyboard for so many other things too. In the end, it's an analog device that can run on existing platforms; just like how we used Xinput for games, other device languages can be used for other things!
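To pin down the WASD-to-stick arithmetic described above, here's a purely illustrative sketch (not Wooting's actual driver code); note that W fully down plus D at half travel gives a deflection of atan(0.5), about 27 degrees from straight forward:

import math

def wasd_to_stick(w, a, s, d):
    # Each argument is a key depth from 0.0 (released) to 1.0 (bottomed out).
    x = d - a                    # right minus left
    y = w - s                    # forward minus backward
    mag = math.hypot(x, y)
    if mag > 1.0:                # clamp to the stick's unit circle
        x, y = x / mag, y / mag
    angle = math.degrees(math.atan2(x, y))   # 0 degrees = straight forward
    return x, y, angle

# W fully down, D at half travel: forward-right at about 27 degrees.
print(wasd_to_stick(1.0, 0.0, 0.0, 0.5))  # (0.447..., 0.894..., 26.56...)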
The Wooting goal
Of course, we're not planning to give you a one-time solution. We've made an analog mechanical keyboard and we don't want it to end up as a gimmick; hell, what a waste that would be! This is the part for which we sincerely need your support to help us get further. In some cases, it means providing us feedback/input; in other cases it's making something awesome for it. Anything that increases the chance for people to start using analog input is a win for everybody. Our goals are as follows:
- We're focused on releasing the Wooting one - all analog keys - and building as much support as possible for the current solution, working on existing platforms. Including custom mapping of controller and keyboard buttons etc., tweaking the analog experience for different games, and funding mods for triple-A and cool games that don't support the use of simultaneous devices.
- Meanwhile, start developing an open-source analog keyboard driver, which will allow you to gain full access to the keyboard's potential. Enabling native support and custom applications.
- Create an easy platform, with an API in mind, for people with less or no coding knowledge to make their own custom applications.
- Let creative freedom flow, pushing for an industry standard.
Why you want this too
The keyboard is an extremely old input device that dates from before the computer (typewriter: you can spell it on the first row of a QWERTY keyboard). Its layout, functionality and application have barely changed over the multitude of years; even more ironic, we've gone from membrane keyboards back to the older technology of mechanical keyboards. In the meanwhile, your smartphone is using a mind-bottling amount of input methods and its applications are ever increasing. So why are we not increasing the input methods of the keyboard? Even if it's not a leap into the future, you'd at least want to see some kind of increments. Or you can just drop the above and realize that, finally, you have analog movement in games and the capability to play around with analog input on a keyboard. Especially with the entrance of VR, you can still use your keyboard and mouse and experience complete immersion. Awesome.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00486.warc.gz
CC-MAIN-2022-33
12,451
26
https://experienceleaguecommunities.adobe.com/t5/adobe-analytics-questions/how-to-calculate-bounce-rate-with-a-metric/td-p/191518
code
I want to calculate the bounce rate for a report. I realized that the "bounce rate" metric is not available among my metrics, therefore I tried to create a new entry from "Metrics" --> "Add", and there I set up the following configuration: Is this correct?
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510368.33/warc/CC-MAIN-20230928063033-20230928093033-00319.warc.gz
CC-MAIN-2023-40
276
3
https://keywords.oxus.net/archives/2009/03/06/dictionaries-suck-its-time-to-kill-them
code
When I look up a word I need for a lecture on political-economy or colonial theory, I know I will not find the word I need in my dictionary. And I have the largest Chinese-English dictionary I could buy: the ABC dictionary included in Wenlin. Even if I do find a phrase, I can be fairly certain that the one in my dictionary is not the one most commonly used by scholars, or the one used in Taiwan. Accordingly, I do the following. I suspect you do the same as well:
1. Look it up in Wikipedia. If I'm lucky there is a Chinese version of the same page and the title of the Chinese version is the word I want. Great for proper names.
2. Look it up in Google Translate. Can't really trust this, but sometimes it gives me a phrase which I can then plug into my own dictionary or a Google search to see if it is correct.
3. Do a search in Google restricted to Traditional Chinese results, but for the English word. I'll often find academic papers which use the English word in brackets after the Chinese phrase.
4. Do a Google search on Traditional Chinese pages for various possible translations to see the context in which they are used and the frequency with which they occur.
So the question is whether it is possible to create an application or web-app which harnesses these various tools and combines them together to make this process even easier? For instance, you put in an English word on top and get results for the Chinese Wikipedia, Google Translate, CDICT (a free Chinese-English dictionary), and straight Google search results limited to TC pages. In addition, all the most common phrases are pulled out and listed in an easy-to-use manner, showing both the frequency with which the phrase is used (Google hits) as well as samples of the phrase being used in context (Google search results). Perhaps even throw in Google Image search as well — for some things that can quickly show you if you've found the right phrase. An even better implementation would allow you to refine your search and narrow down various possibilities. Feel free to steal this idea and build something fantastic!
UPDATE: Here are three examples:
- "Subaltern Studies" (Click for Google search in Traditional Chinese. The first and third results are different, but a Google search for each will reveal that 底層研究 gets 2.7 million hits, while 賤民研究 gives only 83,400, suggesting that the third result — 底層研究 — offers the more appropriate translation.)
- "Slumdog Millionaire" (Click on the Wikipedia link, then click on 中文 to see the Chinese.)
- "檳榔西施" (A Google Image search tells you all you need. Warning: might not be safe for work.)
My idea is to combine these tools into a single interface, together with a traditional dictionary and some usage statistics, etc.
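As an aside, the Wikipedia step can be automated: the public MediaWiki API exposes inter-language links directly. A minimal sketch (the requests dependency and the demo term are illustrative additions, and there is no error handling):

# Sketch: look up the Chinese Wikipedia title for an English term
# via the MediaWiki langlinks API.
import requests

def zh_title(english_term):
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "titles": english_term,
            "prop": "langlinks",
            "lllang": "zh",
            "redirects": 1,
            "format": "json",
        },
        timeout=10,
    )
    for page in resp.json()["query"]["pages"].values():
        for link in page.get("langlinks", []):
            return link["*"]   # the linked Chinese title
    return None

print(zh_title("Slumdog Millionaire"))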
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475311.93/warc/CC-MAIN-20240301125520-20240301155520-00518.warc.gz
CC-MAIN-2024-10
2,781
13
https://foxgreat.com/python-programming-for-beginners-the-simplified-beginners-guide-to-mastering-python-programming-in-one-week-learn-python-quickly-with-no-prior-experience/
code
Python Programming for Beginners: The Simplified Beginner’s Guide to Mastering Python Programming in One Week. Learn Python Quickly with No Prior Experience
Want to level up your career and gain more freedom? Say goodbye to limitations and embrace new income opportunities! Ready to unlock the power of Python? Overcome the challenge and master this in-demand programming language! Discover the game-changer in the tech industry: a versatile and beginner-friendly programming language that opens doors to endless possibilities.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100583.13/warc/CC-MAIN-20231206031946-20231206061946-00141.warc.gz
CC-MAIN-2023-50
528
2
https://discuss.daml.com/t/last-daml-sdk-installation-triggered-the-antivirus/5593
code
Hello DA Team, I downloaded the latest SDK to my laptop, and when running daml studio my Norton Antivirus jumped in, indicating the following issue: The antivirus was triggered even before this message; I just didn't take a screenshot of it. Anything that I need to do in order for that not to happen? I am certainly very rusty with WINOS but I would remove your complete Daml folder from any future active scans. Especially if you created it, work with it, and are the only one who has access to it. Looking at the program path that allegedly triggered the alert, that's your Daml executable, which I'd assume you put there. Try that. If it works, great; else come back. Found this link to remove a folder or an extension from Norton Data Protector. Thanks @Ben_M and @rohitt, I am aware of how to remove a folder from future scans of the antivirus. However, it was surprising for me to receive this alert from the antivirus as I was not getting it when installing previous versions. I was thinking that it would be a good thing to raise this matter. Thanks for being understanding. I'll flag this with the Daml Language team for due diligence. Just for reference, here is another popup that was raised by the antivirus. The action seems to be triggered by an attempt to delete a file belonging to a VS Code extension. Can you check if by chance this popup is triggered by VS Code itself, rather than by running it with daml studio? Since we're discussing security here, it may be a good idea to put on a paranoid hat for a second. Can you post here the SHA256 of your daml.exe? This is most likely a false positive on Norton's part, but it does seem worth double-checking you didn't accidentally end up with something else masquerading as Daml. To get a SHA256 hash on Windows you can use this command: certutil -hashfile FILENAME SHA256 Can you please post the computed SHA256 for daml.exe and, if you still have it, the installer you downloaded? Also please clarify which version, specifically, this is happening with. Sorry for the late response. @stefanobaghino-da, well, it did happen when opening VS Code. I assume that something was blocked, because it does not happen again when opening VS Code again. But I am not 100% sure this is the reason. @Gary_Verhaegen I ran the SHA256 command you asked for and here is the result: SHA256 hash of daml.exe: CertUtil: -hashfile command completed successfully. It happened on version 2.4.0; it was downloaded by running daml install 2.4.0 directly from my command line. That's the expected value. Thanks for reporting!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711069.79/warc/CC-MAIN-20221206024911-20221206054911-00681.warc.gz
CC-MAIN-2022-49
2,556
31
https://nyuscholars.nyu.edu/en/publications/transparent-snarks-from-dark-compilers
code
We construct a new polynomial commitment scheme for univariate and multivariate polynomials over finite fields, with logarithmic size evaluation proofs and verification time, measured in the number of coefficients of the polynomial. The underlying technique is a Diophantine Argument of Knowledge (DARK), leveraging integer representations of polynomials and groups of unknown order. Security is shown under the strong RSA and adaptive root assumptions. Moreover, the scheme does not require a trusted setup if instantiated with class groups. We apply this new cryptographic compiler to a restricted class of algebraic linear IOPs, which we call Polynomial IOPs, to obtain doubly-efficient public-coin interactive arguments of knowledge for any NP relation with succinct communication. With linear preprocessing, the online verifier's work is logarithmic in the circuit complexity of the relation. There are many existing examples of Polynomial IOPs (PIOPs) dating back to the first PCP (BFLS, STOC'91). We present a generic compilation of any PIOP using our DARK polynomial commitment scheme. In particular, compiling the PIOP from PLONK (GWC, ePrint'19), an improvement on Sonic (MBKM, CCS'19), yields a public-coin interactive argument with quasi-linear preprocessing, quasi-linear (online) prover time, logarithmic communication, and logarithmic (online) verification time in the circuit size. Applying Fiat-Shamir results in a SNARK, which we call Supersonic. Supersonic is also concretely efficient, with 10 KB proofs and under 100 ms verification time for circuits with 1 million gates (estimated for 120-bit security). Most importantly, this SNARK is transparent: it does not require a trusted setup. We obtain zk-SNARKs by applying a hiding variant of our polynomial commitment scheme with zero-knowledge evaluations. Supersonic is the first complete zk-SNARK system that has both a practical prover time as well as asymptotically logarithmic proof size and verification time. The full version of the paper is available online.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00020.warc.gz
CC-MAIN-2023-50
2,036
1
https://club.myce.com/t/28-weeks-later/209576
code
I am having problems backing up 28 Weeks Later using DVDFab or DVD Shrink/DVD Decrypter. Any ideas? Try AnyDVD. Tried, didn't work. I found no problem (AnyDVD and CloneDVD); what region are you in and what is going wrong? Shrink could be too old, but DVDFab should be current.
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988927.95/warc/CC-MAIN-20210508211857-20210509001857-00560.warc.gz
CC-MAIN-2021-21
266
4
https://infinispan.org/blog/2012/11/19/infinispan-devoxx-2012-hackergarten
code
Infinispan @ Devoxx 2012 Hackergarten
Just came back from Devoxx, and once again it didn't let me down! Great conference, with awesome talks and fantastic networking opportunities (hackergarten, party... etc.). As far as Infinispan is concerned, I joined the Hackergarten on Tuesday morning to try to lure some contributors into our project :). One of the most promising opportunities came from Alex Soto, who's the founder and lead of NoSQLUnit, which is a JUnit extension that helps you write NoSQL unit tests. He already has support for a number of NoSQL engines and he was explaining to me the challenges of supporting multiple engines. We also discussed the possibility of supporting Infinispan as well :D. During the Hackergarten I also met Andrés Almiray, who is the Griffon founder, and we briefly discussed the possibility of integrating Infinispan's Hot Rod client with Griffon clients. Unfortunately we didn't have time to get into some coding, but since he lives close by and he organises Hackergartens in Basel, I might pop in next time around and sit down with him to work on this integration :). Can't wait for next Devoxx!!
Get it, Use it, Ask us! We're hard at work on new features, improvements and fixes, so watch this space for more announcements! Please download and test the latest release. If you have questions, are experiencing a bug or want advice on using Infinispan, you can use GitHub discussions. We will do our best to answer you as soon as we can.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474893.90/warc/CC-MAIN-20240229234355-20240301024355-00711.warc.gz
CC-MAIN-2024-10
1,490
8
https://forums.rpgmakerweb.com/index.php?tags/learn/
code
I would like to be able to change the position and size of the menus in my game, and now the buttons that are not selected have a background. I'm thinking of removing this, but I could not find the line that produces it. I'm using YEP_SkillLearnSystem and YEP_JobPoints. Here is the main issue: The amount of JP you have left for a class is not displayed in the price window for learning a skill of that class. For example, let's say your character's current main class is Thief, with 500 JP left, and you want... I am trying to make a skill/magic system similar in function to the "Mana Eggs" from Grandia 2. My version uses crystals. For example, equipping the Fire Crystal will only let you see, learn, and use Fire spells. If you unequip the Fire Crystal, the spells go with it. I can set the crystals to... Is there a way to make an actor's learnable skills cost more JP the more skills they learn? I got the idea from Octopath Traveller and it's something I'm thinking about adding to my game, as it adds a bit of freedom to the game. Is there a way to do this without editing the plugin itself? Firstly, I apologize if I am posting this thread on the wrong board! I am brand new to RPGMaker software, and I have decided to create a game using RMMV. I have solid ideas and a clear outline for a game that I want to create, with a pretty good idea of how things would conceptually work. The... Some cool snippets we absolutely need to put in favorites, with great examples. These basic code snippets allow you to manage basic mathematical movements in your game. Most essential and basic. This one is obviously very important in any sort of game development. What I find myself using... I have The Victory Aftermath, Aftermath LevelUp ( http://yanfly.moe/2016/01/16/yep-59-aftermath-level-up/ ) and am also using the Learn Skill System ( http://yanfly.moe/2015/11/14/yep-28-skill-learn-system/ ). I would really, really like the aftermath to show what skills are now learnable for each... Hello. I'm using Yanfly's skill learn core. I have a skill that I don't want available for learning to the class unless a specific weapon is equipped. <Learn Require Eval> value = false; var wpn = $game_actorId(this).weaponId().value; if (wpn === 7) value = true; </Learn Require Eval> Hello, I'm new to this forum, so I'm not sure if this is the right section, but here it is... I'm using Yanfly's Skill Learn System, but since I have a huge amount of skills to learn, I'd like to be able to 'hide' already learnt skills, instead of them staying on the list labeled as learnt. SEK_AttackCountAndFormula - v 3.3 - Counts uses of skills and attacks and adds a bonus damage formula. You can also learn skills after using a skill or a weapon x times. - You can see how many times an actor uses a skill, save the counter to a variable and... I'm not really sure if it belongs here... it is kinda like a request... but more a question about whether it is even possible to make^^ I use the Skill Learn System from Yanfly and wanted some skills accessible when you read a specific book. BUT... I wanted to make it... I'm not sure if it's possible, but I was wondering if there could be an addon to Yanfly's Skill Learn System where you don't have the Skill immediately at the start of the game; you have to find items or special people to teach you about them. But they only teach you ABOUT them.... not teaching the... Using the RPG MAKER MV trial and having a blast. Have been planning to buy this game for a while now, but for now still using the trial.
There is no difference except for the time limit as far as I'm aware? Anyway, I've been making a game and have developed a sort of ammo system using... When I first tried RPG Maker VXAce, I could not help but wonder why there was no option for fog in the events like in RPG Maker XP. This is a simple tutorial to show how you can create fog for your maps using RPG Maker VXAce. Let's get started. You will need two things. 1.) A fog graphic...
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00559.warc.gz
CC-MAIN-2021-31
4,460
40
https://discuss.pytorch.org/t/deal-with-variable-length-sequence-for-1d-convolution-batch-normalization/6748
code
Is there any example or related document which describes how to deal with variable-length sequences in a minibatch for 1D convolution? Here is my detailed situation. I have two sequences with size (#Channel, #Length). Each pair of sequences has the same length, but it differs within the dataset. For example, let's say (X1, Y1), (X2, Y2) are paired data with sizes
X1.size() --> [5, 10], Y1.size() --> [3, 10]
X2.size() --> [5, 20], Y2.size() --> [3, 20]
My goal is learning the model f such that Y = f(X). I am considering f as a 1D convolution with batch normalization, like
h1 = bn(relu(conv(input)))
h2 = bn(relu(conv(h1)))
However, I am confused about how to deal with multiple sequences in a minibatch. If we zero-pad the sequences when making a minibatch, there seems to be no example of excluding the zero-padded data from the computation.
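A common workaround, sketched below under assumptions (the layer sizes and the masked MSE are illustrative, and this is not an official PyTorch recipe): zero-pad to the longest sequence in the minibatch, carry a mask, re-zero the padded positions after each layer, and exclude them from the loss. Note that BatchNorm1d here still computes statistics over padded positions, which is an approximation; a fully correct masked batch norm needs custom statistics.

# Sketch: variable-length 1D conv batch with a padding mask.
import torch
import torch.nn as nn

def make_batch(seqs):
    # seqs: list of (C, L_i) tensors -> padded (N, C, L_max) batch plus mask.
    max_len = max(s.size(1) for s in seqs)
    batch = torch.zeros(len(seqs), seqs[0].size(0), max_len)
    mask = torch.zeros(len(seqs), 1, max_len)
    for i, s in enumerate(seqs):
        batch[i, :, :s.size(1)] = s
        mask[i, :, :s.size(1)] = 1.0
    return batch, mask

conv1 = nn.Conv1d(5, 16, kernel_size=3, padding=1)
bn1 = nn.BatchNorm1d(16)
head = nn.Conv1d(16, 3, kernel_size=1)

def forward(x, mask):
    h = bn1(torch.relu(conv1(x))) * mask   # re-zero padded positions
    return head(h) * mask

def masked_mse(pred, target, mask):
    # average only over valid (unpadded) positions and channels
    return ((pred - target) ** 2 * mask).sum() / (mask.sum() * pred.size(1))

x, mask = make_batch([torch.randn(5, 10), torch.randn(5, 20)])
y, _ = make_batch([torch.randn(3, 10), torch.randn(3, 20)])
loss = masked_mse(forward(x, mask), y, mask)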
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662594414.79/warc/CC-MAIN-20220525213545-20220526003545-00388.warc.gz
CC-MAIN-2022-21
807
12
https://lteboost.com/en/goods/info/324303-mobilnyy-proksi-ipv4-50tcpip-soedineniy-kazhdoe-s-otdelnym-ip-arenda-proksi-servera
code
50 TCP/IP connections Each connection has its own IP address Rent a proxy server - Rental period is 30 days. - Proxy type: 4G Arbitrage (Multiport). - Moscow, Volga region, Krasnodar, Central district IP addresses (MegaFon, Yota, Tele2, MTS, Beeline). - Every new HTTP connection has a different IP address. - 50 TCP/IP connections, every connection has a different IP address. - Maximum per client bandwidth is 2 Mbit/s. - IP address pool changes over time. - HTTP(S) and SOCKS5 protocols. - No refunds, please contact support on the site or in the Telegram bot to get a test before buying. - Unlimited data.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00786.warc.gz
CC-MAIN-2023-40
609
13
https://getcodesolution.com/r/create-a-function-to-check-if-a-point-falls-between-two-lines-in-r/
code
Maybe you can manage it easily with the sp::point.in.polygon() function. The idea is to test whether the point(s) fall inside the polygon made by your coordinates.

library(sp)
# first you have to get one column for the longitudes and one for the latitudes
new_df <- data.frame(long = c(long_1, long_2), lat = c(lat_1, lat_2))
# then you can put the points to test in vectors, like these, and use the function
point.in.polygon(c(-85.927, -84.2), c(18.7, 18.5), new_df$long, new_df$lat)
# [1] 1 0

One is in, one is out.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00725.warc.gz
CC-MAIN-2023-06
543
5
https://help.frontify.com/en/articles/570196-how-to-transfer-a-brand
code
Here is a short guide on how to transfer your fantastic work to your client. Please follow these steps. First, click the Frontify button in the Powerbar to get to the dashboard.
- On the brand tile, click the three dots
- Choose "Transfer to another Account"
- Now choose the right account and confirm the transfer.
Please remember: you have to be the creator of the brand to do a transfer, and both of you (the customer and you) must have a connection through a brand or project. That means both must have invited each other to projects or brands, for example.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147234.52/warc/CC-MAIN-20200228135132-20200228165132-00297.warc.gz
CC-MAIN-2020-10
542
8
http://www.couponfree.online/2018/10/learn-python-python-for-beginners.html
code
Learn Python: Python for Beginners, Python introduction for beginners. Learn complete Python from scratch! Created by Abrar Hussain Preview This Course - GET COUPON CODE What Will I Learn? - Create fully functional Python programs - Understand user input - Learn about loop structures and conditionals - Correctly execute operations in Python - Work with Python file handling - Create and modify data structures in Python - Manipulate strings and data - Internet Connection - Mac OSX or PC with Windows Vista or Newer or Linux
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255944.3/warc/CC-MAIN-20190520121941-20190520143941-00227.warc.gz
CC-MAIN-2019-22
526
13
http://houston.culturemap.com/eventdetail/film-screening-camouflage/
code
Film screening: Camouflage
An ironic and absurd comedy, Camouflage transports audiences to a university summer school camp. The shallowness and cynicism of the academic milieu become apparent through the relationship between a young linguistics professor, Jaroslaw, and his diabolical senior colleague, Jakub. Krzysztof Zanussi presents the deeply troubling premise of academic conformity with witty humor mocking the status quo. Not intended as a political film, Camouflage was nevertheless harshly received by the Polish government, immediately landing on the year's list of banned films.
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827175.38/warc/CC-MAIN-20181216003916-20181216025916-00300.warc.gz
CC-MAIN-2018-51
577
3
https://stackoverflow.com/questions/2736100/how-can-i-get-the-hibernate-configuration-object-from-spring
code
I am trying to obtain Spring-defined Hibernate Configuration and SessionFactory objects in my non-Spring code. The following is the definition in my applicationContext.xml file: <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.MySQLDialect</prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.hbm2ddl.auto">update</prop> <prop key="hibernate.cglib.use_reflection_optimizer">true</prop> <prop key="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</prop> </props> </property> <property name="dataSource"> <ref bean="dataSource"/> </property> </bean> If I now call getBean("sessionFactory"), I am returned a $Proxy0 object which appears to be a proxy for the Hibernate SessionFactory object. But that isn't what I want - I need the LocalSessionFactoryBean itself because I need access to the Configuration as well as the SessionFactory. The reason I need the Configuration object is that our framework is able to use Hibernate's dynamic model to automatically insert mappings at runtime; this requires that we change the Configuration and rebuild the SessionFactory. Really, all we're trying to do is obtain the Hibernate config that already exists in Spring so that those of our customers that already have that information in Spring don't need to duplicate it into a hibernate.cfg.xml file in order to use our Hibernate features.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104204514.62/warc/CC-MAIN-20220702192528-20220702222528-00399.warc.gz
CC-MAIN-2022-27
1,515
4
https://wm-help.net/lib/b/book/1513151505/9
code
Book: C# 2008 Programmer
This chapter provided a quick overview of the .NET Framework and the various versions that make up the latest .NET Framework (3.5). Regardless of the language you use, all .NET applications compile to a bytecode format known as MSIL. The MSIL is then JIT-compiled at runtime by the CLR to generate the native code to be executed by the processor. In the next chapter, you start your journey into C# programming by learning to use the development environment of Visual Studio 2008.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570765.6/warc/CC-MAIN-20220808031623-20220808061623-00404.warc.gz
CC-MAIN-2022-33
515
3
http://www.cl.eps.manchester.ac.uk/medialand/maths/archived-events/colloquia/www.mims.manchester.ac.uk/events/colloquia/fchung.html
code
Random Graphs and Internet Graphs
Professor Fan Chung Graham, University of California, San Diego
Thursday June 9, 2005, 2:00 pm. MSS Building, Room C019
We will discuss some recent developments on random graphs with given expected degree distributions. Such random graphs can be used to model various very large graphs arising in the internet and telecommunications. In turn, these "massive graphs" shed insights and lead to new directions for random graph theory. For example, it can be shown that the sizes of connected components depend primarily on the average degree and the second-order average degree, under mild conditions. Furthermore, the spectra of the adjacency matrices of some random power law graphs obey the power law, while the spectra of the Laplacian follow the semi-circle law. We will mention a number of related results and problems that are suggested by various applications of massive graphs.
Materials and further information
- Professor Fan Chung Graham's biography from the MacTutor History of Mathematics archive
- High resolution pictures of the lecture: 1 - 2.
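For readers unfamiliar with the terminology: the second-order average degree in this line of work is the degree average weighted by degree (stated here for convenience, following the usual definition in Chung and Lu's papers on random graphs with given expected degrees):

\tilde{d} = \frac{\sum_i d_i^2}{\sum_i d_i},
\qquad \text{whereas the average degree is} \qquad
\bar{d} = \frac{1}{n} \sum_i d_i .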
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867055.95/warc/CC-MAIN-20180624195735-20180624215735-00229.warc.gz
CC-MAIN-2018-26
1,053
23
https://mail.python.org/pipermail/ipython-dev/2016-July/016076.html
code
[IPython-dev] [IPython 5] [Docs] Custom Terminal Prompts
carl.input at gmail.com
Sun Jul 10 22:04:45 EDT 2016
It seems to be unrelated to the prompt customisation stuff. I removed the code from my last email, so now have the standard prompt again, but the execution count still fails to update, even though the prompt is working:
In [1]: import IPython
In [2]: Shell = IPython.terminal.interactiveshell.TerminalInteractiveShell
In [3]: shell = Shell()
In [4]: shell.execution_count
In [5]: shell.execution_count
Maybe the prompt class is supposed to increment the count whenever the `in_prompt_tokens` method is called, but I need to get to bed. It's 3am here.
-- Carl Smith carl.input at gmail.com
On 11 July 2016 at 02:27, Carl Smith <carl.input at gmail.com> wrote:
> Just to help move things along, this is what I have. It works, except for
> the input number always being `1`. I don't know much about some of the
> parts, so have no idea why the count doesn't increment.
> import IPython
> from pygments.token import Token
> Prompts = IPython.terminal.prompts.Prompts
> Shell = IPython.terminal.interactiveshell.TerminalInteractiveShell
> class CustomPrompts(Prompts):
>     def in_prompt_tokens(self, cli=None): return [
>         (Token.Prompt, "CustomIn["),
>         (Token.PromptNum, str(self.shell.execution_count)),
>         (Token.Prompt, "]: ")
>     ]
> get_ipython().prompts = CustomPrompts(Shell())
> This code creates an input prompt like `CustomIn[1]: `, but the number is
> always `1`. Interestingly, the output prompt, which is inherited
> unmodified, is also stuck at `1` now. Everything works correctly; you can
> input code and so on, but the execution count never updates.
> If anyone has any ideas...
> -- Carl Smith
> carl.input at gmail.com
> On 11 July 2016 at 00:20, Carl Smith <carl.input at gmail.com> wrote:
>> Thanks Fernando, but please don't put yourself out on my account.
>> Obviously, it's something that needs figuring out, but there's no urgency.
>> -- Carl Smith
>> carl.input at gmail.com
>> On 11 July 2016 at 00:15, Fernando Perez <fperez.net at gmail.com> wrote:
>>> On Sun, Jul 10, 2016 at 4:11 PM, Carl Smith <carl.input at gmail.com> wrote:
>>>> No problem at all, Fernando.
>>> Great, thanks!
>>>> I did have a go at it based on the code Thomas pointed to, but couldn't
>>>> figure out how to use the reference to `self.shell.execution_count`. When
>>>> it's used directly, as it is in the IPython source (passed through `str`),
>>>> you end up with something like this:
>>>> In [<traitlets.traitlets.Int at 0x104800b70>]:
>>>> Using the `default_value` and `default_value_repr` methods gives you
>>>> the number, but it was always `1`, no matter what the actual input number was.
>>> Mmh, I'm afraid I haven't really done any significant prompt
>>> customizations in years, way before we made the system traitlets-based... I
>>> don't have a quick solution handy. But let's see if someone else can pitch
>>> in and we get a solution, otherwise I'll try to dig in later...
>>> Fernando Perez (@fperez_org; http://fperez.org)
>>> fperez.net-at-gmail: mailing lists only (I ignore this when swamped!)
>>> fernando.perez-at-berkeley: contact me here for any direct mail
>>> IPython-dev mailing list
>>> IPython-dev at scipy.org
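A likely culprit in the code quoted above, reading the IPython 5 API: CustomPrompts(Shell()) constructs a brand-new TerminalInteractiveShell, so the prompt reads the execution_count of that idle second shell (always 1) rather than the running one. Passing the active shell instance should fix it; a minimal sketch, assuming IPython 5.x:

# Sketch of the likely fix: attach the custom prompts to the *running*
# shell instead of a freshly constructed TerminalInteractiveShell.
from pygments.token import Token
from IPython.terminal.prompts import Prompts

class CustomPrompts(Prompts):
    def in_prompt_tokens(self, cli=None):
        return [
            (Token.Prompt, "CustomIn["),
            (Token.PromptNum, str(self.shell.execution_count)),
            (Token.Prompt, "]: "),
        ]

ip = get_ipython()              # the active shell, with the live counter
ip.prompts = CustomPrompts(ip)  # not CustomPrompts(Shell())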
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573145.32/warc/CC-MAIN-20220818003501-20220818033501-00198.warc.gz
CC-MAIN-2022-33
3,350
64
https://meta.discourse.org/t/import-from-phplist-csv/197117
code
I am considering moving an announcement (newsletter) mailing list to a Discourse category. I’m currently using the best open source software for email newsletters, so I accept that the change might not be the best idea! Though I don’t use many of its advanced features. The idea would be to draw newsletter subscribers to the forum in the hope that some will be intrigued and participate. Also, they could decide on what other categories to watch and what emailed notifications to receive, all in one place. PhpList can export the subscriber list to a CSV file. What would be the best way to import this? There are no usernames but I could create these automatically before importing. I believe I have already locked down all Discourse’s privacy-related settings so that the user list is not visible etc. Or would you recommend sending invitations instead, as in Multiple Use Invite Links? That could be a way of refreshing the list to ensure it only includes people who are still interested.
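One possible route, sketched under assumptions (not a tested migration path): drive Discourse's invite endpoint from the phpList CSV export. This doubles as the refresh mentioned above, since only people who are still interested will redeem the invitation. The forum URL, API key, and the CSV column name email are placeholders to adapt; check your Discourse version's API docs before relying on /invites.json.

# Sketch: read a phpList subscriber export and send Discourse invites.
import csv
import requests

DISCOURSE_URL = "https://forum.example.com"   # placeholder
HEADERS = {
    "Api-Key": "YOUR_ADMIN_API_KEY",          # placeholder
    "Api-Username": "system",
}

with open("phplist_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        email = row["email"].strip()           # assumed column name
        if not email:
            continue
        r = requests.post(
            f"{DISCOURSE_URL}/invites.json",
            headers=HEADERS,
            data={"email": email},
            timeout=10,
        )
        print(email, r.status_code)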
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00087.warc.gz
CC-MAIN-2021-31
998
6
https://www.i2tutorials.com/mysql-tutorial/mysql-cross-join/
code
MySQL – CROSS JOIN
A MySQL CROSS JOIN is used to combine two or more tables and return a result containing every possible combination of rows from the contributing tables. The CROSS JOIN is also known as the CARTESIAN JOIN, because it produces the Cartesian product of all the tables involved in the join. A Cartesian product pairs every row of the first table with every row of the second, so the number of rows in the result is the number of rows in the first table multiplied by the number of rows in the second; for example, crossing a 4-row table with a 25-row table yields 100 rows. Unlike the INNER JOIN, this clause does not allow a joining condition to be specified.
SELECT column-lists FROM table1 CROSS JOIN table2;
In the above syntax, column-lists represents the columns or fields that you want to return, and table1 and table2 represent the names of the tables from which the records are fetched.
SELECT * FROM dept_emp CROSS JOIN employees;
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00658.warc.gz
CC-MAIN-2024-10
800
5
https://forums.mirc.com/ubbthreads.php/posts/270604
code
I'm encountering a GPF bug that I cannot replicate because it's intermittent, so I cannot create an example to make it happen on demand. But I'm hoping someone else has been encountering this bug and can offer more clues that could help identify the fix. I have 2 scripts accessing the same website via urlget. Both scripts use hashtables where each item added contains text up to 2kb in length. One script accesses 1 url every half hour, updating a hash table that can reach up to 20mb or so, and hsaving it each half-hour right after accessing the url. There never seems to be a problem with that script. The other script accesses a different url each half second or so, where each url is used to add another of those items to the hashtable, and it hsave's the hashtable at the end when finished, as well as every 400 items being added while in progress. It's the hsave at one of the intervals of 400 that seems to be when the GPF happens. When the crash happens, there's almost always a mirc*.tm_ file whose timestamp matches when the crash happened, and the content is what would be from the 2nd script hsaving to disk. In case it matters, the hsaves are both using the -i switch. For the latest crash, the disk write had what appears to be correctly written item=data lines, except for the very last disk write, which contains unexpected content, so maybe the crash is due to writing data from someplace random that it shouldn't be. The latest *.tm_ file contains 2385 valid item=string lines, but following the $crlf for the last completed line, the remaining bytes are the hex string: 0x43 0x05 0x20 0x0d 0x0a In this case, the correct disk write would've begun with an itemname= beginning with '4'. Since the item count wasn't a multiple of 400, this tells me that it didn't write the whole hashtable database before crashing. update: another crash had a different invalid string as the final line following the $crlf: 0x0a 0x2a 0x0d 0x0a 1502 valid item=string lines before the bad line, again not a multiple of 400 update: another crash. The final garbled line was the same 0x43 0x05 0x20 0x0d 0x0a as above. This was the first time I remember the crash being in the larger hashtable on the 30-minute timer, after writing 3.7 of 18mb
edit: and here is what the event reporter is saying
Faulting application name: mirc.exe, version: 22.214.171.124, time stamp: 0x62d559f7
Faulting module name: ntdll.dll, version: 6.1.7601.23391, time stamp: 0x56e9a630
Exception code: 0xc0000374
Fault offset: 0x000c3b03
Faulting process id: 0x26b0
Faulting application start time: 0x01d8a617020b3af1
Faulting application path: C:\mIRC\mirc.exe
Faulting module path: C:\Windows\SYSTEM32\ntdll.dll
Last edited by maroon; 03/08/22 04:11 AM.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337339.70/warc/CC-MAIN-20221002181356-20221002211356-00569.warc.gz
CC-MAIN-2022-40
2,687
24
https://game-updates.info/oxygennotincluded?146
code
Medicine pack and medicine vial recipes are automatically unlocked at the apothecary when the matching doctor stations are researched. Add some retries to mod db loading for cases where backup utilities or onedrive have locked a file. Skill Scrubber will be correctly resumed on file load. Updated artwork for Skill Scrubber. Added sounds for various animated Artefacts. Dupe hats work correctly on the Schedule screen dropdown. Dupe hats work correctly on the assignment screens. Various sick emotes (i.e. holding stomach and swaying) no longer cause chore interruption. Tweaked frequency of sick emotes. Dupe Germ Susceptibility attribute is now Germ Resistance. Traits, game setting, boosters, etc. affect a duplicant’s Germ Resistance, and each disease has a certain negative resistance. The combination of the two becomes the chance to get sick, which gets close to but never reaches 0% or 100% (except with the custom game setting of course). Germ Exposure tooltips include a (small) clue as to where the germ was encountered.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00241.warc.gz
CC-MAIN-2022-27
1,034
11
https://wordpress.org/plugins/lively-chat-support/changelog/
code
Fix for issue where SMS was not being received (switch from accepting POST vars to REQUEST). Fix sound issue. Avoid errors showing on the Name & Email boxes. Optionally track hits for Lively. Guaranteed support times added (24 hours for customers, 72 for non-customers). If no agents are online, the chat stays in Offline mode. Bug fix while polling for new messages. Fixed a bug in the scheduling algorithm. Repaired permissions for multi-agent addon. Fixed a security issue. Fixed an issue where triggers were occurring more than once. WPML fix (thanks Dmitry). Cookie fix (was showing errors in the footer). Cache support for button text whether in online or offline mode. Bug fix where fields weren't saving if they were blank strings. Scheduling bug squashed. SMS bug squashed. Bug when registering the plugin. Updating issue when not administrator. Cleared a few errors that were showing up if your theme had error reporting turned on. Fixed an issue where some themes were (inappropriately) hijacking links, causing Lively to not function. The Golden IE Fix: Removed 2 trailing commas in JS that were causing Lively to break in IE. To combat memory issues, Lively stores all its configuration in one options hash (1,000 fewer cache hits and more than 50% fewer get_options() calls)! To upgrade, you just need to install the latest version of the plugin - we update your old data to the new format automatically. We're preselling Screen Sharing! To be released sometime this summer... :D SMS is now supported in "Online" mode (instead of just "Office Hours"). Empty triggers no longer validate - body text is required. Emails are now sent using the wp_mail() function instead of mail(). New header for the plugin page on wordpress.org! Fixed an issue where previously sent messages would trigger a "ding" on page load. Removed whitespace appearing at the top of the chat on mobile devices. Instructions update for SMS. Removed extra slashes in Survey's "Thank You" field. Lithuanian translation added. Monthly plans added for addons ($6.99/month). Just bumping stable tag (sorry about all the updates!). Nearly useless update. Mobile styling bug fix where chatbox title had white space above it. Bug fixed where W3 Total Cache was not getting cleared. Chatbox visibility - an easy way to show Lively on certain pages - no shortcode necessary! Flushing caches when changes are made (supports W3 Total Cache and WP Super Cache). Deleting history in chatbox should keep the … Fixed a bug where quick, consecutive messages would fall through the cracks. Caching support for Online/Offline/Hours features. Added Danish translation! Menu item shows for all users, so agents don't have to be admins to chat. Slashes appearing in chatbox message (Don\'t). Twilio sign up link fixed. Help PDF added to the assets folder (Thanks Sharon!). Profile hooks removed in lively-chat-support.php. Updated SMS documentation. Visitor conversations are now print friendly! Remote HTTPS requests are allowed (some users were unable to connect to the Twilio API for SMS Addon). Twilio credentials fixed. FreeIP failures caught. Curl requests now using WP HTTP. Swapped left/right CSS for chatbox position. Cleaned up agent interface. MULTI AGENT ADDON IS AVAILABLE! Show LivelyChat only on certain pages. New "Schedule" tab allows you to say who's online and when. Push finish date if clicking on unread convo. Green dot appears on a conversation when you receive a new message (no refresh necessary). Message Template for convos (generate on the fly instead of clone)... 
causing issues with some other plugins. jQuery scope more obvious and effective so there are no conflicts with your theme. Header min-height reset to be compatible with your theme. Fix strange DB multiple primary key error. Fix logout button (somehow didn't update from 1.0.1). Brought back the "Online via SMS" option for those that have purchased the SMS Addon. We've Hit 1,000 Downloads! Pre-Order the all new Multi Agent feature through the link in the top right of the plugin page (will be released October 15). Schedule your online hours (with screenshot!) Sort convos by date! Fix chatbox text colour issue. Apostrophes for chatbox titles and messages. Delete specific convo and delete all conversation history. Trim Twilio credentials and activation code for SMS Addon (so that they work even with an extra space on the end or beginning). Option to remove "Powered By Lively Chat" (but we'd prefer if you didn't ;]). Database tables are charsetted and collated to support unicode (utf8). Caching support for triggers, surveys, and loading messages. jQuery noConflict used to prevent JS issues. Surveys addon released! Get specific information from your customers with the most unintrusive tool in town! All chat registrations are emailed to the admin (set on the Settings page). Prepping for Triggers addon. Preparing plugin for Addons plus a few other bug fixes. Fixed a problem with the offline form not submitting. Added an option on the settings page to change the email address the form response is sent to. 0.9.1, 0.9.2, 0.9.3, 0.9.4, 0.9.5 Working on the plugin screenshots. Lively Chat Support is released into the wild. Requires: 3.6 or higher Compatible up to: 4.0.15 Last Updated: 2 years ago Active Installs:
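The "stores all its configuration in one options hash" entry above presumably follows the standard WordPress pattern of bundling all settings into a single serialized option. A rough PHP sketch (the option name and helper names are invented, not taken from the plugin):

  <?php
  // Illustrative only: 'livelychat_settings' and these helpers are hypothetical.
  function livelychat_get_setting( $key, $default = null ) {
      $settings = get_option( 'livelychat_settings', array() ); // one cached DB read
      return isset( $settings[ $key ] ) ? $settings[ $key ] : $default;
  }

  function livelychat_update_setting( $key, $value ) {
      $settings = get_option( 'livelychat_settings', array() );
      $settings[ $key ] = $value;
      update_option( 'livelychat_settings', $settings ); // one write covers all keys
  }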
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171646.15/warc/CC-MAIN-20170219104611-00459-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
5,255
89
https://devnet.logianalytics.com/hc/en-us/articles/1500009584321-Filtering-the-Data
code
Filtering the Data
This topic introduces how to filter the data in a table.
There are 3 levels of filters in Logi JReport:
- Query filter (including filter on SQL), which applies to all data components that use the query.
- Dataset filter, which applies to the data components that use the dataset in the current report.
- Local filter on an individual data component, defined using the report wizard or by right-clicking a data component and using the Edit Filter dialog. It applies to this individual data component and does not affect other components that use the same dataset.
Dataset filters are passed along to the Query Engine the same as query filters. These two levels of filters are much more efficient since only the filtered data is returned to Logi JReport. Local filters on individual components are not passed to the Query Engine, so all data is returned to Logi JReport and the component filters out the unnecessary data. This may be very inefficient, so always use dataset filters or query filters whenever possible. However, if you are using stored procedures, web services, or other data sources, Logi JReport may not be able to pass the filter to the Query Engine. The advantage of using a dataset filter instead of a query filter is that it only affects data components that use the dataset in the current report. It still passes the filter to the database but does not change the catalog, and thus does not affect any other reports. Due to data sharing concerns, local filters most often cannot be pushed down to the database even when the Push Down Group Query feature is enabled; thus all data is returned and Logi JReport filters the data locally, which uses far more computing resources. To get better performance, it is better to define the filter at the other two levels.
To add filter conditions to a table, that is, to apply a local filter on a table, follow the steps below:
- Right-click the table and select Format Filter to display the Edit Filter dialog.
- If the table is created using a business view, the Filter drop-down list is available, listing all the predefined filters of the business view. You can select one from the drop-down list to apply, or select User Defined in the list to define a new filter as required.
- Select the Add Condition button to add a condition line.
- In the field drop-down list, specify the field on which the filter will be based. For a table created on a query, you can filter on any DBField in the query, or a parameter or valid formula that is in the same catalog data source as the query. For a table created on a business view, you can only filter on a group or detail object in the business view.
- From the operator drop-down list, set the operator with which to compose the filter expression.
- In the value text field, specify the value by which to filter the field. When you type in the value manually, if multiple values are required, they should be separated with ",", and if "," or "\" is contained in the values, write it as "\," or "\\". For a table created on a business view, you can use the special field User Name or a parameter to define the value dynamically. When the available parameters cannot meet your requirement, you can also create a local parameter to use in the filter. For the usage of parameters in filter conditions, see the example in Dynamically Filtering Queries.
- Select Add Condition to add more condition lines and define the logic between the condition lines.
To group condition lines, select them and select the Group button; the selected condition lines will be added to one group and work as one line of the filter expression. Conditions and groups together can be further grouped. To take a condition or group out of a group, select it and select Ungroup. To adjust the priority of a condition line, select it and select the Up or Down button. To delete a condition line, select it and select the Delete button.
- Select OK to create the filter.
Then when you preview the table, only data satisfying the specified filter conditions is shown.
Note: The following SQL types of data cannot be filtered: Db.SQL_BINARY, Db.SQL_BLOB, Db.SQL_CLOB, Db.SQL_LONGVARCHAR, Db.SQL_LONGVARBINARY, Db.SQL_VARBINARY and Db.SQL_OTHER.
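The escaping rule above ("," separates values; literal "," and "\" are written "\," and "\\") behaves like the following small parser sketch (illustrative Python, not Logi JReport code):

  def split_filter_values(raw):
      # Split a multi-value filter string on unescaped commas.
      values, current, i = [], [], 0
      while i < len(raw):
          if raw[i] == "\\" and i + 1 < len(raw) and raw[i + 1] in (",", "\\"):
              current.append(raw[i + 1])       # unescape \, and \\
              i += 2
          elif raw[i] == ",":
              values.append("".join(current))  # unescaped comma ends a value
              current = []
              i += 1
          else:
              current.append(raw[i])
              i += 1
      values.append("".join(current))
      return values

  # split_filter_values("red,blue\\,green") == ["red", "blue,green"]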
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506658.2/warc/CC-MAIN-20230924155422-20230924185422-00502.warc.gz
CC-MAIN-2023-40
4,292
26
http://www.dbasupport.com/forums/showthread.php?8877-CURSOR&p=35596
code
I have the following problem. I have a table EMP with columns. In this table I have, for example, 1000 rows. I need to write a cursor which will retrieve one row per deptno; if there are many rows with the same deptno, then I need the latest one (max(dhire)).
Would your query consistently return the same employee if there are multiple employees hired on the same date in the same department? What if the table is reorganized or in some way altered in sequence (CTAS? Export/Import?)? Clearly you will get AN employee from the set defined by OCPDBA2001. I would just like to more completely understand the result of this query. Thanks.
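For reference, one common way to express "latest hire per deptno" is sketched below (the column names empno/ename are assumed for illustration; deptno and dhire come from the post; ROW_NUMBER() requires Oracle 8i or later). The extra empno tie-breaker also addresses the determinism question raised in the reply:

  -- One row per deptno, latest hire date; empno breaks ties deterministically.
  CURSOR latest_hire_cur IS
    SELECT empno, ename, deptno, dhire
    FROM (
      SELECT e.*,
             ROW_NUMBER() OVER (PARTITION BY deptno
                                ORDER BY dhire DESC, empno DESC) AS rn
      FROM emp e
    )
    WHERE rn = 1;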
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706499548/warc/CC-MAIN-20130516121459-00055-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
616
5
https://pitstop.manageengine.com/portal/en/community/topic/notification-sent-to-new-user-when-created
code
Notification sent to new User when created
We are trying to send an email notification of required online training to new employees when they are onboarded. However, the current notification sends the email prior to the completion of the email synchronization process: the email is sent, but the end-user mailbox has not yet been fully replicated in O365 and hence never receives it. We do not receive any errors; the email is simply sent but never received by the mailbox. We CC'd our shared mailbox and do receive the copy of the notice, but through testing the user mailbox does not receive the notification email. We have determined that there needs to be a delay feature for notifications, so that when a new user is created the notification can be scheduled to be sent at a later date/time, similar to the scheduled reports feature or the automation feature.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710909.66/warc/CC-MAIN-20221202150823-20221202180823-00712.warc.gz
CC-MAIN-2022-49
921
4
https://support.mozilla.org/en-US/questions/997198
code
This thread was closed and archived. Please ask a new question if you need help.
Why are FF29 and Australis so extremely bad compared to older Firefox versions? What were the developers thinking when producing this catastrophe? Why are you trying so hard to make Firefox look like Chrome? WHY? Why is Firefox trying so hard to be a bad copy of Chrome? WHY? Why isn't Firefox the nice and functional browser I once knew? WHY? Whoever had the idea of making Australis, please kick him out. And if you're gonna copy Chrome, then WHY DON'T YOU MAKE MULTIPLE PROCESSES FOR TABS?! Instead, you're working on making Firefox look dumb. Great.
Modified by mumei
All Replies (8)
That is a matter of opinion. Many users like the new UI. You can submit feedback to the developers here: https://input.mozilla.org/en-US/feedback The majority of the people providing support here are users just like you.
You're dreaming. No user likes the new interface. The comments on the feedback page make it pretty clear. Who are you trying to fool?
If you don't like it, simply stop using version 29. Simple as that. There's no need to be rude to other contributors. By the way, Mozilla is actively monitoring social media and the reaction has been pretty good regarding Australis.
hello, this forum is intended for technical support and not general discussion - so if you don't have a particular issue you want to troubleshoot use the feedback link provided by ed instead! (Mozilla Support rules and guidelines) thank you for your understanding... btw. there is also work done under the hood of course - for progress on a multiprocess architecture in firefox see https://air.mozilla.org/inter-presentation-schuster/ for example.
Modified by philipp
Australis might be attractive to 13-year-old kids on their smartphones using facebook and twitter (your so-called "social media"), but that's it. It's just pitiful how obviously Firefox is trying to imitate the Chrome look, instead of improving and adding important functions. Sorry if I sounded rude to the-edmeister, it's just that I'm really really annoyed by this update. I always loved Firefox in the past, and this is the first time I feel ashamed for using Firefox. @philipp From what I've heard, the development on that one is basically 'on-hold indefinitely'.
Modified by mumei
I understand your frustration but this change is good! You should give it a try. The reception on Australis has been wonderful! The interface you see in Firefox 29 is a major re-design (the first one since Firefox 4) of Firefox's user interface called "Australis." We could argue about the UI similarity between 29 and Chrome but what good would that do? The main point is that Firefox is still way more customizable than Google Chrome and way more secure. If you don't like the new Australis look, you can revert back to the view before Australis using the Classic Theme Restorer addon. And if you prefer tabs on bottom:
@Moses Of course Firefox is more customizable than Chrome and that's also why it's always been a better browser than Chrome. But that customization also includes the 'tabs on bottom' option by default. This change is definitely not good, because obviously, Firefox is trying hard to look childish (and misinterpreting it as modern). This just absolutely ruins the image of Firefox. I had to try around for an hour or so (and add several addons) to convert FF29 into a usable browser - which the older Firefox versions were by default. Seriously, please delete Australis in FF30. Or at least make the older look the default look.
And also, yes, I tried Australis. It was horrifying. The tabs look weird, the new settings button is bad (just as bad as the one in Chrome), and almost every other design change was just as bad. And I'm really curious about where exactly there is this 'wonderful reception'. Most of the comments on FF29 across the net I've seen so far have been negative ones. My guess is your reception was based on the beta testers. Now that the official update has been released, I guess you'll see a lot of negative opinions.
Modified by mumei
Don't feel like I'm shutting you down because I'm not. philipp is right. This is intended for technical support only and not general discussion. You can submit feedback to the developers here: https://input.mozilla.org/en-US/feedback I will say a couple of things before.
- We are all volunteers, not Mozilla employees or Firefox developers.
- To be perfectly blunt here (quick disclaimer: this is my opinion as a volunteer and in no way the view of the Foundation or Corporation), I don't see Australis going away at all. The change is too new and I doubt Mozilla will pull back the UI just because some users don't like it. Even if they did go back to the old UI, Mozilla would be going backwards instead of forward into innovation, which is what they're best known for.
- If you want tabs on bottom, please download the Tabs on Bottom extension: https://addons.mozilla.org/en-US/firefox/addon/tabs-on-bottom/
- If you want to restore Firefox to the old layout, please download the Classic Theme Restorer extension: https://addons.mozilla.org/en-US/firefox/addon/classicthemerestorer/
This is all the information I have and I'll be closing this thread as it's not a support request. Please direct feedback to input.mozilla.org/feedback using the latest version of Firefox. Thanks for your understanding!
Fellow Moderators: Please feel free to unlock if you feel it's necessary.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00224.warc.gz
CC-MAIN-2021-49
5,457
37
https://www.englishtobangla.org/meaning/promoted-in-bengali
code
Promoted Meaning in Bengali
What is the meaning of the word Promoted in Bengali/Bangla?
Definition of the word Promoted
- further the progress of (something, especially a cause, venture, or aim); support or actively encourage.
- advance or raise (someone) to a higher position or rank.
some regulation is still required to promote competition
Other Meaning of Promoted
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499695.59/warc/CC-MAIN-20230128220716-20230129010716-00665.warc.gz
CC-MAIN-2023-06
361
7
https://mrewert.edublogs.org/category/quick-write/
code
Choose one of the options below to write about.
- You wake up to find yourself in your five-year-old body and back in time. How do you spend your first 24 hours in this situation?
- All of a sudden, the entire world can hear each other’s thoughts. How does the planet cope?
- Everybody in the world has disappeared except for you. Describe your next 24 hours.
Prompts to help you get started: You can write almost anything that comes to mind from this clip or…
- What does this make you think of?
- Why do you think so many people are so interested in space anyway?
- Would you ever want to go to space? Why or why not?
- Do you think space is actually dangerous?
- Tell about a sci-fi story or movie you remember.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662601401.72/warc/CC-MAIN-20220526035036-20220526065036-00177.warc.gz
CC-MAIN-2022-21
720
11
https://cboard.cprogramming.com/tech-board/97308-corrupt-drivers-please-help-printable-thread.html
code
During a forced shutdown of my Acer Aspire 5100 laptop, both the modem and audio got totally disabled. I'm assuming the drivers are corrupt. The device manager doesn't even acknowledge the existence of either one. Is it possible they are part of the motherboard itself? I tried googling, but with no luck. So I come here. How would I find the necessary drivers to get them back up and running?
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118713.1/warc/CC-MAIN-20170423031158-00161-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
393
1
https://discourse.jupyter.org/t/is-there-a-good-intro-to-jupyter-notebooks-tutorial/1565
code
I hear that @danielskatz is looking for a good hands-on introduction to Jupyter Notebooks (the document specification, the interface, etc). Something that is fit for a tutorial session or a bootcamp with a small-ish group (e.g., < 80 people). Does anybody have something like this? I believe that @KirstieJane and the Turing Way folks have something like this, but other suggestions are most welcome!
Thanks Chris - This will probably be about 30-40 people, and 45 min to an hour. Ideally, I’m looking for something that I can run in binder and project on a screen, and that attendees can follow along on their own binder instances and try out what I show along with different things that they think of and want to attempt.
Is https://github.com/ipython/ipython-in-depth/tree/pycon-2019 along the lines of what you are looking for? It is for a three hour tutorial but you could probably make it shorter.
Thanks Tim - that’s helpful. Something I forgot to mention is that many of my attendees will be librarians, with some programming experience, but most of them won’t consider themselves programmers, so I’m looking for something that’s really introductory.
Based on Chris pointing me at Data Carpentry, some things that are close are:
Have a look at my introductory learning module: "Get data off the ground with Python". There’s an accompanying online course in our Open edX site, with six video demos (that are also public on YouTube). At GW, the library is teaching a bootcamp using this module, open to all staff and students. Twice it has been “sold out” within days.
The Hamilton one is good, thanks! The other requires Datascience, which is not something I want to introduce.
How about https://towardsdatascience.com/how-to-effortlessly-optimize-jupyter-notebooks-e864162a06ee Regardless of the title, it starts off quite basic. I liked it, but am checking out the links above too.
@danielskatz did you already run your tutorial? Do you have a link to the material you are preparing/ended up using? It would be great to link it here so people from the future can find it.
Hi @betatim - it’s in early August, and what I’m currently planning on using is https://github.com/danielskatz/repro-fdtd1d/blob/bca6a0936dfda81466d110b62c596e7637d476a4/Notebook_Demonstration.ipynb I also like what @sqqqrly suggested and have added a link to it in my notebook as a further resource.
My colleague Michael Lamoureux helped create this introductory resource: https://intro.syzygy.ca/
Hi, I’m working on an intro for people who have never done any computing or coding before. It’s hands on but very slow. Aiming to get it into the Library Carpentries. Would absolutely love some feedback if you try it out. Beware however that I too am completely new to computing and GitHub (just got in and did it, no training). https://github.com/sarasrking/Introduction-to-Jupyter-Notebooks. In the workshop I use an Etherpad with the whole workshop on it so participants can copy and paste the code if they’re a bit overwhelmed and don’t want to type it out. It’s all about having a good experience with the new thing, and I’m working with people that aren’t necessarily too keen to be there.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033925.2/warc/CC-MAIN-20220625004242-20220625034242-00450.warc.gz
CC-MAIN-2022-27
3,206
18
http://www.6pm.com/product/7136042/color/3
code
- FITTING INFORMATION: Women begin with street shoe size. - The choice of the United States Amateur National DanceSport Champion, Maria Manusova. - Leather or satin available. - Fully lined and cushioned with a completely flexible forepart to enhance the dancer's foot. - 3" contoured heel with a nonslip suede sole. - Heel Height: 3 in - Product measurements were taken using size 10, width B - Medium. Please note that measurements may vary by size.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164036943/warc/CC-MAIN-20131204133356-00028-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
451
7
https://m.timesjobs.com/mobile/job-detail/bi-developer-koch-global-business-services-bengaluru-bangalore-3-to-6-yrs-jobid-QQJBWBfuXblzpSvf__PLUS__uAgZw==&bc=+&sequence=21
code
The BI Developer will be a part of an international team that designs, develops and delivers new applications for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. Koch Technology Center (KTC) is being developed in India to extend its IT operations, as well as act as a hub for innovation in the IT function. As KTC rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out the Koch Technology Center (KTC) over the next several years. Working closely with global colleagues would provide significant international exposure to the employees.
What You Will Do In Your Role
Develop business intelligence and data warehousing solutions to meet business needs. Work with users and the BI team to develop business requirements and documentation. Maintain, monitor and troubleshoot existing ETL jobs, models and reports. Troubleshoot, fix and explain job failures. Answer users' questions regarding cube and report data. Develop standard reports and dashboards based on business requirements. Be able to present to and partner with a non-technical user base. Adopt best practices in the implementation and execution of support processes. Collaborate with various IT teams such as infrastructure support, networking, database administrators, web development and business application developers. Participate in scheduled Agile meetings. Keep your VSTS items up to date.
The Experience You Will Bring
Requirements: Bachelor's degree in Computer Science or equivalent professional experience. Exceptional problem-solving skills. Minimum of 3 years of hands-on BI work. 3 years of experience in SSIS, SQL, SSRS, SSAS cubes, and programming complex DAX and MDX queries and stored procedures. Experience developing Power BI and Tableau models and dashboards. Experience in Amazon Web Services (AWS) Data Lake implementation. Experience in an Agile environment. Experience with Git and VSTS. Experience with Python. Experience with Redshift or PostgreSQL. Excellent written and verbal communication skills.
What Will Put You Ahead
Worked in the manufacturing industry. Knowledge of supply chain concepts, Finance and Accounting.
Posted on: 24 Aug, 2021
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057347.80/warc/CC-MAIN-20210922102402-20210922132402-00487.warc.gz
CC-MAIN-2021-39
2,462
25
http://www.breaghadesigns.com/product/harris-tweed-brown-and-cream-roddy-backpack
code
Our Brown and Cream Roddy Backpack is handcrafted from luxurious Harris Tweed. The backpack has two large pockets on the front as well as two on the inside, perfect for storing your essentials. The rear straps and top handle are made from a beautiful veg tan leather, and are adjustable. This is the perfect bag for everything from dashing about the town to walking in the country. Lightweight, water-resistant and highly durable, this is a very cool and smart bag. Open - 48cm x 38cm Closed - 42cm x 38cm
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657097.39/warc/CC-MAIN-20190116073323-20190116095323-00076.warc.gz
CC-MAIN-2019-04
489
7
http://www.linuxquestions.org/questions/syndicated-linux-news-67/lxer-ready-set-port-an-app-from-windows-to-linux-542216/
code
Published at LXer: Thanks to the Novell-sponsored open source Mono effort and its contributors, the idea of cross-platform ASP.NET development isn't a dream. It's a reality. But how quickly can an app go from conceptually possible to actually deployed? For the winner of the first round of the Race to Linux 2 contest, it was five hours and 26 minutes.
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173405.40/warc/CC-MAIN-20170219104613-00000-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
378
2
https://forum.dynamobim.com/t/free-form-rebar-on-segment-lining/50347
code
I’m a new Dynamo user. I would like to apply rebars in a curved solid (segment lining of tunnels, see attachment) using free form rebar. I was able to manage it in Revit and advices tells me that “Structural Design” package could be the best one to use in Dynamo. I supposed to select one face of the segment lining, define a starting and an end point on it and then apply linear rebars considering a certain offset from the surface itself. Could it be a good way?
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504838.98/warc/CC-MAIN-20210621212241-20210622002241-00120.warc.gz
CC-MAIN-2021-25
470
1
https://en.softonic.com/articles/whats-hiding-inside-this-hard-drive-will-terrify-you
code
Viruses and malware offer us ever-evolving threats in an increasingly digital world. Fortunately, Softonic is here to offer sage advice about how to protect yourself from the worst happening and what to do should a malicious piece of code ever find its way onto your hard drive. Sometimes, however, we’re stumped. There is no software solution to some problems that befall us in this very modern world. U.S. TSA (Transportation Security Administration) agent Neville Flynn came up against such a problem when he was inspecting a hard drive recently at Miami International Airport. A traveler was trying to board a plane to Barbados when Flynn decided he should inspect the hard drive she was carrying. Inside, he found a nylon stocking that had been used to hide a baby python. Yep, a woman was trying to use an external hard drive to smuggle a snake onto a plane. A motherfu@*in snake onto a motherfu!@in plane. Honestly, no matter what type of problem you have with your PC, Mac, smartphone, or tablet, come to us and we will be able to help you solve it. We draw the line at snakes, however. Snakes are where Softonic taps out. Softonic can help you if you have a virus, nasty malware, or ransomware to deal with. We can help you if your PC is too slow, your smartphone is too old, your Wi-Fi won’t work, or if you want to learn all the keyboard shortcuts for your Mac. If you need any type of software solution, Softonic can help you. We can’t help you if you have a snake in your hard drive. Definitely not, and we’re not even sorry that we can’t help. Not one bit. Oh yeah, in case you were wondering. Both the lady who was carrying the hard drive and the snake that was hiding inside it missed their flight to Barbados!
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481992.39/warc/CC-MAIN-20190217111746-20190217133746-00295.warc.gz
CC-MAIN-2019-09
1,713
5
https://turkish.ava360.com/hang-massive-at-tedxkalamata-video_a80da9267.html
code
About the speaker: Hang Massive is a recent collaboration between Danny Cudd and Markus Johansson. The duo first started playing in the summer of 2010 and since then has performed all over the world. They are gaining recognition globally, captivating their audiences with their powerful yet soothing performances. They have had over 4 million views on YouTube and their debut album "Beats for your feet" (2011) has received great reviews. In the spirit of ideas worth spreading, TEDx is a program of local, self-organized events that bring people together to share a TED-like experience. At a TEDx event, TEDTalks video and live speakers combine to spark deep discussion and connection in a small group. These local, self-organized events are branded TEDx, where x = independently organized TED event. The TED Conference provides general guidance for the TEDx program, but individual TEDx events are self-organized.* (*Subject to certain rules and regulations)
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991650.73/warc/CC-MAIN-20210518002309-20210518032309-00370.warc.gz
CC-MAIN-2021-21
962
4
http://video.stackexchange.com/questions/tagged/mkv?sort=unanswered&pageSize=50
code
How do I author a playable DVD from an MKV file containing MPEG2 video, audio, subtitle and chapter streams? I'm using MakeMKV to back up my DVD library to MKV files without re-encoding the video or audio streams (since space is cheap, and quality is top priority.) I'd like to know if there's a tool ... Basically, I have a folder containing some video files, the entire folder, according to Windows, is 1.98GB. I am using ConvertXtoDVD5 with the plans of burning these files to a disc, the target size ... If I use a program like AVStoDVD or ConvertXToDVD to convert a .mkv to a .dvd, and the .mkv has multiple audio tracks/subtitles, will the produced .dvd only have one of those, or will it retain more or ... I have a series of videos in mkv format that I need to stitch together into one longer video. The videos are also a little slow; halving the number of frames to save disk space and speed it up would ...
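For the stitching question at the end, a commonly used approach is ffmpeg's concat demuxer (a sketch only; file names are placeholders, and it assumes all parts share the same codecs and parameters):

  # files.txt lists the inputs in order, one per line:
  #   file 'part1.mkv'
  #   file 'part2.mkv'
  ffmpeg -f concat -safe 0 -i files.txt -c copy joined.mkv

  # "speed it up": double the playback speed (this step re-encodes)
  ffmpeg -i joined.mkv -vf "setpts=PTS/2" -af "atempo=2.0" fast.mkv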
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927458.37/warc/CC-MAIN-20150521113207-00201-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
915
5
http://www.wakeworld.com/forum/showthread.php?p=1845324
code
Figured I'd post an update to see if anyone might have some leads or need someone with my skill set. Been a couple years since the last one. I've been an avid wakeboarder and wakeskater for over 10 years. I have about 8 years of retail experience, just over 3 years in the surf industry at a small, local shop; and almost 5 in .com and TV merchandise buying, marketing, and product development for a major national retailer. I want to make the move back into the action sports industry, specifically in wake as that's where my passion is. In addition to my professional experience I'm a strong writer, very creative, up on current social media/tech trends, and have great team skills (but also work very well independently). I'm a "jack-of-all-trades master of none" kind of person with a drive to go after any task with enthusiasm. Any leads or ideas would be much appreciated! Will consider relocating for the right opportunity!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121000.17/warc/CC-MAIN-20170423031201-00205-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
925
5
https://www.frontiersin.org/articles/10.3389/fpls.2022.906410/full
code
ORIGINAL RESEARCH article Sec. Technical Advances in Plant Science Volume 13 - 2022 | https://doi.org/10.3389/fpls.2022.906410
Deep Learning Based Greenhouse Image Segmentation and Shoot Phenotyping (DeepShoot)
- 1Molecular Genetics, Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), Seeland, Germany
- 2Automation and Computer Sciences Department, Harz University of Applied Sciences, Wernigerode, Germany
Background: Automated analysis of large image data is highly demanded in high-throughput plant phenotyping. Due to large variability in optical plant appearance and experimental setups, advanced machine and deep learning techniques are required for automated detection and segmentation of plant structures in complex optical scenes.
Methods: Here, we present a GUI-based software tool (DeepShoot) for efficient, fully automated segmentation and quantitative analysis of greenhouse-grown shoots which is based on pre-trained U-net deep learning models of arabidopsis, maize, and wheat plant appearance in different rotational side- and top-views.
Results: Our experimental results show that the developed algorithmic framework performs automated segmentation of side- and top-view images of different shoots acquired at different developmental stages using different phenotyping facilities with an average accuracy of more than 90% and outperforms shallow as well as conventional and encoder backbone networks in cross-validation tests with respect to both precision and performance time.
Conclusion: The DeepShoot tool presented in this study provides an efficient solution for automated segmentation and phenotypic characterization of greenhouse-grown plant shoots suitable also for end-users without advanced IT skills. Primarily trained on images of three selected plants, this tool can be applied to images of other plant species exhibiting similar optical properties.
Image-based high-throughput plant phenotyping has become a method of choice in quantitative plant sciences aiming to reveal causal links between phenotypic and genomic plant traits under varying environmental conditions (Li et al., 2014). The ultimate goal is to make the assessment of plant phenotypic trait data as efficient and scalable as genomic screening (Miller et al., 2007; Fahlgren et al., 2015). However, efficient and accurate processing and analysis of large image data from different optical set-ups represents a challenging task constituting one of the major bottlenecks in the pipeline of phenome-genome correlation (Minervini et al., 2015). The first critical step in quantitative analysis of plant image data is image segmentation, which aims to classify all image pixels into two or more distinctive classes, e.g., foreground (plant) and background (non-plant) regions. Due to several natural and technical factors, segmentation of plant structures from background regions is a challenging task. Inhomogeneous illumination, shadows, occlusions, reflections, and the dynamic optical appearance of growing plants complicate the definition of invariant criteria for detection of different parts (e.g., leaves, flowers, fruits, and spikes) of different plant types (e.g., arabidopsis, maize, and wheat) at different developmental stages (e.g., juvenile, adult) in different views (e.g., top or multiple side views) (Henke et al., 2019). Consequently, conventional methods that are typically based on some suitable image features and tailored to particular data cannot be extended to new data in a straightforward manner.
For example, one such popular approach to unsupervised image segmentation is based on analysis of differences between plant-containing and 'empty' reference images (Choudhury et al., 2018). Thereby, it is assumed that background intensity/colors remain unchanged after plants were moved into the photo chamber. However, due to shadows and reflections, both background and plant regions change their optical appearance. Moreover, these changes progress dynamically in the course of plant development. Consequently, an 'empty' background image does not provide an ideal reference for the straightforward segmentation of plant structures. Mapping of RGB images onto alternative color spaces such as HSV and/or L*a*b is known to be useful for the separability of fore- and background colors (Philipp and Rath, 2002; Pape and Klukas, 2015; Henke et al., 2021). However, it cannot completely solve the problem of overlapping plant and background colors. To overcome the above limitations of uni-modal image analysis, a registration-classification approach to plant image segmentation was suggested in our previous study (Henke et al., 2020), which relies on pre-segmentation of plant regions in image modalities with higher fore-/background contrast, such as fluorescence images, followed by their co-registration with low-contrast image modalities (e.g., visible light or near-infrared images). Since segmentation masks derived from one image modality do not perfectly match another image modality, classification of plant and marginal background structures in masked image regions has to be subsequently performed using pre-trained intensity/color models. In some rare cases of severe plant movements due to the relocation of carriers from one photo chamber to another, substantial differences between plant contours in two different image modalities can occur. Although the registration-classification approach showed relatively high accuracy of final segmentation results, the principal requirement of high-contrast multimodal data and occasional movement artifacts limit its application to experiments where only one single image modality (typically visible light images) is acquired. Numerous further supervised approaches to intensity-/color-based plant image segmentation were proposed in the past. In Lee et al. (2018), automated segmentation of arabidopsis top-view images using a super pixel- and random forest classification-based algorithm was presented. In this approach, pre-labeled masks were used to segment each plant from the multi-tray experiment. However, like many other color-based models, it is limited to a particular experimental setup and plant type. More recently, Adams et al. (2020) proposed a neural network based shallow learning method for the segmentation of side view visible light images. This approach classifies each pixel based on neighborhood pixel information of the trained ground truth data and outperforms conventional thresholding methods. All the above state-of-the-art techniques require reference images, the presence of particular image features, and expertise in manual parameter tuning for each image to be segmented. Consequently, conventional supervised techniques are typically trained on and applied to particular types of plants, experimental set-ups, and illumination scenes.
However, high-throughput phenotyping of thousands and millions of plant images demands fully automated, efficient, and accurate segmentation algorithms with higher order cognitive abilities that can tolerate variation in scene illumination and plant/background colors. In recent times, convolutional neural networks (CNNs) gained high attention especially in computer vision applications, because of the ability to directly extract and train relevant multi-level features from data without prior knowledge and human effort in feature design. CNNs have been shown to outperform conventional approaches when applied to many traditionally difficult tasks of image analysis including pattern detection and object segmentation in biomedical images (Ronneberger et al., 2015; Bai et al., 2018), traffic scenes (Badrinarayanan et al., 2017) and remote sensing (Marmanis et al., 2016). In recent years, they were also used for high-throughput plant phenotyping such as the detection of wheat roots grown in germination paper (Pound et al., 2017), segmentation of roots from the soil in X-ray tomography (Douarre et al., 2018), and segmentation of spikes in wheat plants (Misra et al., 2020). However, most of these studies present exemplary applications and/or computational frameworks that can hardly be handled by end-users without advanced programming skills. The aim of this study is to develop an efficient and handy tool for automated shoot image segmentation and quantification for different plant types using a pre-trained deep CNN framework which can be used in a straightforward manner even by unskilled users. The GUI software tool (DeepShoot) developed for this purpose relies on the U-net segmentation model from Ronneberger et al. (2015), which was trained on ground truth images of three different plants (arabidopsis, maize, and barley) acquired from two different views (side, top) at different stages of their development. The article is structured as follows. First, we present our methodological framework including the proposed U-net based framework for shoot image segmentation, ground truth data generation, and training and evaluation procedures. Then, the results of experimental investigations are presented, including model performance on segmentation of test shoot images vs. alternative state-of-the-art solutions.
2. Materials and Methods
2.1. Image Data
The deep learning-based shoot image analysis tool (DeepShoot) is designed for automated segmentation and quantification of visible light (VIS) images of arabidopsis, maize, and barley shoots acquired from greenhouse phenotyping experiments using LemnaTec-Scanalyzer3D high throughput plant phenotyping platforms (LemnaTec GmbH, Aachen, Germany). Figure 1 shows examples of arabidopsis, maize, and barley images from three different LemnaTec phenotyping platforms tailored to the screening of large, mid-size, and small plants. All three phenotyping platforms have different designs of photo chambers, illumination, colors of background walls, and camera resolutions ranging between 1 and 6 Mpx.
Figure 1. Examples of side- and top-view images of arabidopsis (A,B), barley (C,D), and maize (E,F) plants acquired with three different plant phenotyping platforms.
2.2. Ground Truth Generation
For the training of CNN segmentation models, a representative set of ground truth images with an accurate annotation of fore- and background image regions is required.
In this study, the generation of ground truth images of different greenhouse cultured plants was performed using the GUI software tool kmSeg (Henke et al., 2021), which allows for the efficient annotation of image regions by manual selection of pre-calculated k-means color clusters corresponding to targeted plant structures. Background structures that exhibit similar color fingerprints as plant regions and, thus, could not be separated by color clustering are excluded or subsequently removed using manual region masking and cleaning, likewise provided with the kmSeg tool. Semi-automated segmentation of a typical greenhouse image using kmSeg takes between 1 and 5 min depending on the color composition and structural complexity of a given plant shoot image.
2.3. Image Segmentation Using CNN
The proposed CNN model is derived from the original encoder-decoder architecture of U-net (Ronneberger et al., 2015), which provides a versatile framework for semantic segmentation. In our model, batch normalization (Ioffe and Szegedy, 2015) is applied after each convolution layer, in contrast to the original U-net, because batch normalization improves network performance and stability by normalizing the feature maps at the respective levels (Ioffe and Szegedy, 2015; Santurkar et al., 2019). Furthermore, the original U-net used dropout layers to remove outliers in the feature maps. We avoided this layer, however, because the combination of batch normalization and dropout layers can cause worse results (Li et al., 2018). Also, to improve the segmentation quality on largely connected patterns, a larger kernel size is considered in our approach compared to the original U-net (Peng et al., 2017). Finally, our CNN model has a smaller depth (3) compared to the original U-net depth of 4 due to the smaller input image size. The detailed comparison of convolutional parameters with respect to the original U-net is summarized in Table 1. Under consideration of the above suggested modifications, the U-net framework was adapted to the task of multimodal shoot image segmentation, refer to Figure 2. This network is designed in such a way that training and testing are performed on patches of input images in original resolution. The advantage of this patch-based approach is that it enables model training using a large amount of ground truth data on consumer GPUs without losing high frequency information due to image downscaling. Furthermore, training of CNNs on image patches is more advantageous for learning local features than full-size images (Jha et al., 2020). Therefore, the input and output layers of the network are designed to operate on images of the size 256 x 256. Further details of the network encoder and decoder layers are described below.
Encoder network: The encoder network consists of 3 encoder blocks. The first encoder block takes image patches of size 256 x 256 as input and produces corresponding feature maps of size (256 x 256 x 16) as output. Then, the feature maps are forwarded to the second and third encoder blocks to generate further feature maps for plant pixel detection. Each encoder block consists of two convolutional layers to learn feature maps at the respective levels, where each convolutional layer consists of a 7 x 7 convolution filter followed by batch normalization (Ioffe and Szegedy, 2015) and a non-linear activation function called Rectified Linear Unit (ReLU) (Agostinelli et al., 2014).
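A minimal Keras sketch of the convolutional unit just described (illustrative only, not the authors' released code):

  from tensorflow.keras import layers

  def conv_block(x, depth):
      # two 7x7 convolutions, each followed by batch normalization and ReLU
      for _ in range(2):
          x = layers.Conv2D(depth, kernel_size=7, padding="same")(x)
          x = layers.BatchNormalization()(x)
          x = layers.Activation("relu")(x)
      return x

  # e.g. conv_block(layers.Input(shape=(256, 256, 3)), 16) -> 256 x 256 x 16 features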
Followed by each encoder block, a max-pooling operation using a general window size of 2 x 2 (Wang et al., 2015; Jha et al., 2020) is applied for down-sampling the feature maps to half of their original size. The above steps enable a more efficient aggregation of image features. All three encoders are repeated with varying depths of 16, 32, and 64 to detect diverse plant features, respectively. Following the encoder network, a bridge encoder block without a max-pooling layer is applied. This results in 128 feature maps of the size 32 x 32.
Decoder network: The output from the bridge encoder (32 x 32 x 128) is upsampled using 3 x 3 transpose convolution with the same padding and stride 2. This means the size of the feature maps (32 x 32 x 128) was doubled to (64 x 64 x 128) by applying the filter of size 3 x 3 to all input elements, with border elements computed using zero padding. Then the resulting feature map is concatenated with the corresponding encoder feature maps. This results in feature maps of size (64 x 64 x 256) that are subsequently passed through a convolutional layer like an encoder block, but with a decreasing channel depth of 64. This process is repeated for the remaining decoder blocks with decreasing channel depths of 32 and 16. Finally, the output of the final decoder block is fed into a convolutional layer of size 1 x 1 x 1 with a "Softmax" activation function (Dunne and Campbell, 1997) to classify each pixel as plant or non-plant at the patch level. The output of the proposed architecture is a predicted mask of size 256 x 256 like the input image patch, as shown in Figure 2.
2.4. Performance Metrics
To evaluate the performance of the proposed U-net model during the training and testing stage, the Dice coefficient (DC) (Zou et al., 2004) is used. It measures the area of intersection between the model and ground truth segmentation and its value ranges from 0 to 1, where 1 corresponds to 100% perfect and 0 to false segmentation. The Dice coefficient is defined as:
$DC = \frac{2\sum_i P_i G_i}{\sum_i P_i + \sum_i G_i}$
where P and G are the predicted and ground truth binary images, respectively, and Pi and Gi are the output values 0 and 1 of pixel i in the predicted and ground truth binary images, respectively.
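For reference, the Dice coefficient defined above translates directly into NumPy (a sketch assuming binary 0/1 masks):

  import numpy as np

  def dice_coefficient(pred, truth):
      # DC = 2*sum(P_i*G_i) / (sum(P_i) + sum(G_i)) for binary masks
      p = np.asarray(pred, dtype=bool)
      g = np.asarray(truth, dtype=bool)
      denom = p.sum() + g.sum()
      return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0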
2.5. Computational Implementation
The proposed U-net architecture was developed under Python 3.8 using TensorFlow (Abadi et al., 2016) with the Keras API. In addition, image processing operations such as reading, cropping, and training data preparation were done using the PIL, Numpy (Walt et al., 2011), and Scikit-Image (Van der Walt et al., 2014) packages. The proposed model was trained on a GPU machine with a Linux operating system (Intel(R) Core (TM) i7-10700K CPU @ 3.80GHz) and an NVIDIA RTX 3090-24GB graphics card. As discussed above, the model is designed in such a way that training is performed on patches of the original image. Thus, to generate non-overlapping patches of size 256 x 256, original images were padded with zeros at the image edges so that their width and height are divisible by 256. Out of these non-overlapping patches, both plant and background masks are considered in equal proportion to avoid a potential imbalance between plant and non-plant training masks. Then each cropped mask is normalized to the range [0, 1] for feature consistency in the CNN network. The overview of the prepared training data of arabidopsis, barley, and maize and their growth stages is given in Tables 2, 3, respectively. Regarding information on growth stages, an approximately equal number of images from different developmental stages (early, mid, and late developmental phases) were analyzed in this study. Subsequently, based on our experience and previous studies (Crimi et al., 2018; Joseph, 2022), the above prepared data sets were partitioned into training and validation sets in the ratio of 85:15, respectively. The initial weights of the proposed model were defined randomly with zero mean and SD of 0.05 as proposed by Krizhevsky et al. (2012). Then the model was optimized with an Adam optimizer (Kingma and Ba, 2014) to improve the segmentation performance on the training data sets. The binary cross-entropy loss function (Jha et al., 2020) is used to measure the model's error during training; it quantifies the difference between the predicted output and the ground truth generated by the kmSeg tool as described above. This function compares each pixel prediction (0: non-plant, 1: plant) with the ground truth pixel and averages the loss over all pixels to compute the total loss of the image. Therefore, each pixel contributes to the overall objective loss function. The model was trained for 100 epochs with 16 convolutional channel features and a batch size of 128 as per system constraints. The learning rate alters the magnitude of the updates to the model weights during each iteration and is initialized with 0.001. A learning rate scheduler was used to dynamically reduce the learning rate by a factor of 0.2 if the validation loss did not improve within the next 5 iterations. This was introduced in order to avoid a too quick convergence of the model to a suboptimal solution and overfitting in the case of a large learning rate, whereas a too small learning rate may never converge and can get stuck in a suboptimal solution (Bengio, 2012). Note that all data sets (arabidopsis, barley, and maize) were trained in a similar way with the same parameter configuration. As stated above, original shoot images have varying resolutions, whereas the proposed model requires input images of the size 256 x 256. Thus, during the prediction stage, original images are padded with zeros, then non-overlapping 256 x 256 patches are generated, similar to what was done in the training stage. The model makes predictions on these 256 x 256 patches, which are then combined into a single output image as shown in Figure 3. This process is dynamic, meaning any image with a resolution greater than 256 x 256 can be segmented in an automated manner.
Figure 3. Workflow of the pipeline for image processing and segmentation in the DeepShoot tool. Green and orange color boxes represent the operations of image segmentation and trait calculation: (A) original image, (B) original image patches of size 256 x 256, (C) segmented image patches of size 256 x 256, (D) binary segmentation of the original image, (E) RGB color space of (D).
Since the output layer of the model is a Sigmoid activation function or logistic function, the predicted segmentation is a probability map with values ranging between 0 and 1. Hence, this probability map is converted to a binary image using a threshold 'T'. Plant pixels will generally have high probabilities compared to the background pixels. Therefore, T ≥ 0.6 is chosen to consider all high probability pixels as plant pixels in the final segmentation. After fully automated segmentation, phenotypic traits of segmented plant structures were calculated in the final step.
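The pad-split-predict-stitch-threshold procedure described in this section can be sketched roughly as follows (illustrative NumPy code, not the released implementation; model is assumed to be a Keras model mapping 256 x 256 patches to per-pixel probabilities):

  import numpy as np

  PATCH = 256

  def segment_image(model, rgb):
      # rgb: (H, W, 3) float array scaled to [0, 1]
      h, w = rgb.shape[:2]
      pad_h, pad_w = -h % PATCH, -w % PATCH      # pad so H, W are divisible by 256
      padded = np.pad(rgb, ((0, pad_h), (0, pad_w), (0, 0)))
      H, W = padded.shape[:2]
      prob = np.zeros((H, W), dtype=np.float32)
      for y in range(0, H, PATCH):               # predict each non-overlapping patch
          for x in range(0, W, PATCH):
              patch = padded[y:y + PATCH, x:x + PATCH][None, ...]
              prob[y:y + PATCH, x:x + PATCH] = model.predict(patch)[0, ..., 0]
      return (prob[:h, :w] >= 0.6).astype(np.uint8)  # threshold T = 0.6, drop padding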
2.5.3. Graphical User Interface
In practice, end-users prefer to have an easy-to-use software solution with a Graphical User Interface (GUI). Therefore, a user-friendly GUI front-end was developed under the MATLAB 2021a environment (MATLAB Optimization Toolbox, 2021) to comfortably operate the complex algorithmic framework of the shoot segmentation software. Figure 3 shows the complete workflow involved in the DeepShoot tool for automated plant segmentation and trait extraction. For the import of deep learning models trained under Python, the MATLAB interoperability routine importKerasNetwork (MATLAB Optimization Toolbox, 2021) was used. According to the specification of this function, the U-net models trained in Python were exported in the h5 file format, which is supported by recent versions of MATLAB including 2021a.
2.6. Method Comparison
The performance of our proposed model is compared with the recently published shallow learning based neural network (NN) by Adams et al. (2020), which was developed and evaluated for the same application as ours, namely, segmentation of greenhouse shoot images. This algorithm classifies each pixel based on 3 x 3 neighborhood information from the red, green, and blue channels using fully-connected neural networks. In this study, the same NN model architecture was retrained on our image data set with a larger number of neighborhood features (5,939,562 vs. 51,353 in the original study) and a higher batch size (4,096 vs. 1,024). In addition, the proposed encoder backbone of the U-net architecture is compared with different encoder backbones including vgg19 (Simonyan and Zisserman, 2014), resnet50 (He et al., 2016), and xception (Chollet, 2017). These models were trained on the same image data set with a similar training configuration except for the increased number of filters (64, 128, 256, 512) as discussed in Section 2.5.1.
3.1. Training and Validation
As described above, the proposed network was trained and validated on six different data sets including arabidopsis, barley, and maize images acquired from three different plant phenotyping facilities. Thereby, each of these three data sets was subdivided into training and validation sets in the ratio of 85:15, respectively. The model performance is analyzed using the binary cross-entropy loss (CE loss) and Dice coefficient at each epoch during the learning stage of the network. Because of the dynamic optical appearance of growing plants, segmentation of shoot regions in side view images represents a more difficult task. This can result in discontinuous shoot structures in segmented images. Therefore, it is important to give equal weights to errors related to both background and plant pixels, which in this study is done using the Dice coefficient. Figure 4 shows the training and validation performance of the proposed model on six different data sets over 100 epochs. It shows that the training loss of all six models was minimized and plateaued after epoch 60. Simultaneously, the training DC was maximized and reached more than 90% accuracy for all models by the end of the training epochs. In turn, the generalization performance of the model is measured using the validation measurements. Similarly, the validation DC also reached more than 90% accuracy with a low loss value for all models at the end of the epochs. A brief overview of the training and validation measurements is shown in Table 4.
Figure 4. Training and validation performance of the shoot model over 100 epochs.
Figure 4. Training and validation performance of the shoot model over 100 epochs. X- and Y-axes represent the epoch number and the performance measure, respectively. For visualization purposes, logarithmic cross-entropy values are plotted for all models. (A) Arabidopsis side view, (B) Arabidopsis top view, (C) Barley side view, (D) Barley top view, (E) Maize side view, and (F) Maize top view.

In addition to the training performance, exemplary segmentations of all models on test images are shown in Figure 5. It turns out that all models performed with a relatively high DC of about 0.95, except for the arabidopsis side-view model, which has a DC of 0.9117 compared to the ground truth generated by the kmSeg tool. Furthermore, the trained models were tested on variational data sets from arabidopsis top view, such as stress and multi-tray experiments, as shown in Figure 6. Here, the model resulted in a DC of 0.9664 and 0.9873 for the stress and multi-tray experiment images compared to the ground truth, respectively.

Figure 5. Segmentation performance: the first, second, and third rows represent the original RGB image, the ground truth segmentation by the kmSeg tool, and the predicted segmentation by the DeepShoot tool, respectively. The DC of each image is as follows: (A) Arabidopsis side view: 0.9117, (B) Arabidopsis top view: 0.9876, (C) Barley side view: 0.9384, (D) Barley top view: 0.9617, (E) Maize side view: 0.9709, (F) Maize top view: 0.9843.

Figure 6. Segmentation performance on variational data sets: the DC of the stress and multi-tray experiments is 0.9664 and 0.9873, respectively. (A) Stress experiment image, (B) Ground truth, (C) Predicted segmentation, (D) Multi-tray image, (E) Ground truth, and (F) Predicted segmentation.

3.2. Evaluation of the Reference Data Set

To measure the performance of the model on unseen data, our CNN model trained on arabidopsis top-view images from the LemnaTec-Scanalyzer3D was applied to the set of arabidopsis top-view images from Scharr et al. (2016). This data set was frequently used for CNN model training and evaluation in several previous studies within the scope of the CVPPP competitions (https://www.plant-phenotyping.org/CVPPP2018, https://www.plant-phenotyping.org/CVPPP2019, https://www.plant-phenotyping.org/CVPPP2020). Here, however, it is only used for cross-validation of our model trained on images from our phenotyping facility. Figure 7 shows the mean DC of single- and multi-tray experiments from the Scharr et al. data set. The model resulted in a mean DC of 0.93 over 100 images and 0.95 over 27 images for the single- and multi-tray experiments, respectively. Examples of segmentation of single-tray images from the reference data set are shown in Figure 8.

Figure 7. Evaluation of image segmentation on the reference data set from Scharr et al.: Dice coefficient of the arabidopsis top-view model over 100 and 27 images for single- and multi-tray experiments, respectively (A,B). The dotted orange line represents the mean DC value.

Figure 8. Examples of segmentation of arabidopsis top-view images from Scharr et al. All images were segmented with a DC over 0.9.
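A cross-validation run of this kind can be sketched as a simple evaluation loop; the folder layout and the helper functions (`segment_image` and `dice_coefficient` from the sketches above) are illustrative assumptions, not part of the published tool:

```python
import glob
import numpy as np
from skimage.io import imread  # scikit-image is assumed here for image I/O

scores = []
for path in sorted(glob.glob("reference_set/images/*.png")):
    rgb = imread(path)
    # Ground truth masks are assumed to live in a parallel folder.
    truth = imread(path.replace("images", "ground_truth")) > 0
    scores.append(dice_coefficient(segment_image(model, rgb), truth))
print(f"mean DC over {len(scores)} images: {np.mean(scores):.3f}")
```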
3.3. Evaluation of DeepShoot vs. Alternative Solutions

The proposed U-net was compared with the recently published shallow learning based neural network (NN) by Adams et al. (2020), which was originally developed and evaluated for shoot side view image segmentation. Figure 9 shows the comparative analysis on 17, 25, and 20 side view images of arabidopsis, barley, and maize plants, respectively. It shows that the proposed U-net achieves a DC > 0.9 for all images, whereas the NN predictions have a DC between 0.5 and 0.8. An exemplary segmentation of three plants using the neural network and the proposed U-net with respect to the ground truth is shown in Figure 10. Also, the computational time both segmentation models require for prediction on an Intel(R) Xeon(R) Gold CPU @2.10 GHz with 20 CPU cores is listed in Table 5.

Figure 9. Performance of the neural network (blue) and the proposed U-net (orange) segmentation models on (A) 17 arabidopsis, (B) 25 barley, and (C) 20 maize side view images.

Figure 10. Evaluation of the neural network segmentation with respect to the proposed U-net on arabidopsis, barley, and maize side view images, respectively. (A) NN DC: 0.7824, DeepShoot DC: 0.8342, (B) NN DC: 0.6973, DeepShoot DC: 0.8924, (C) NN DC: 0.8746, DeepShoot DC: 0.9360.

Table 5. The computational time of the shoot segmentation algorithms in seconds per image on a system with an Intel(R) Xeon(R) Gold 6130 CPU @2.10GHz with 20 CPU cores.

Furthermore, a comparison of different encoder backbones (vgg19, resnet50, and xception) of the U-net architecture was performed. Figure 11 shows the performance of alternative U-net backbones trained on arabidopsis top view images. It shows that both the resnet50 and xception networks have a higher validation loss (>0.004), which increases over several iterations. On the other hand, vgg19 and the proposed U-net show comparable performance with a lower validation loss of 0.0033. In addition, the complexity of alternative U-net models with different encoder backbones on arabidopsis top view images is shown in Table 6.

Figure 11. Loss performance of alternative U-net models with different encoder backbones on arabidopsis top view images.

Table 6. The complexity of alternative U-net models with different encoder backbones on arabidopsis top view images.

3.4. DeepShoot GUI Tool

Figure 12 shows the GUI of the DeepShoot software, which is freely available as a precompiled executable program from https://ag-ba.ipk-gatersleben.de/ds.html. In addition to automated image segmentation, DeepShoot calculates 35 shoot traits that are categorized into 4 feature groups (i.e., area, bounding box traits, convex-hull area, and statistical color features), as illustrated by the sketch below. Further information on the definition of the traits can be found in Supplementary Table 1 accompanying this article.

Figure 12. Graphical User Interface of the DeepShoot tool: left, middle, and right images represent the original image, the predicted probability map, and the predicted color image, respectively.

In order to restrict the analysis to a region of interest (ROI), users can define a custom ROI as a rectangular or polygonal shape using the Crop or Clear outside buttons of the DeepShoot GUI. The DeepShoot tool can be applied for the analysis of single images in a step-by-step manner or for automated processing of all images in a selected folder. Regarding the time performance of DeepShoot, image segmentation and trait calculation all together take an average of 18.5 s to process and analyze a 5-megapixel image on a system with an Intel(R) Xeon(R) Gold 6130 CPU @2.10GHz with 20 CPU cores.
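The four trait groups can be illustrated with a short scikit-image sketch; the trait names below parallel the groups mentioned above (area, bounding box, convex hull, color statistics) but are hypothetical examples, not the exact 35 traits computed by DeepShoot:

```python
import numpy as np
from skimage.measure import label, regionprops

def shoot_traits(mask, rgb):
    """Example traits from a binary mask and the corresponding RGB image."""
    props = regionprops(label(mask))
    if not props:
        return None
    shoot = max(props, key=lambda p: p.area)  # largest connected component
    plant_pixels = rgb[mask.astype(bool)]
    return {
        "area": shoot.area,
        "bbox_height": shoot.bbox[2] - shoot.bbox[0],
        "bbox_width": shoot.bbox[3] - shoot.bbox[1],
        "convex_area": shoot.convex_area,
        "mean_rgb": plant_pixels.mean(axis=0),
        "std_rgb": plant_pixels.std(axis=0),
    }
```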
4. Discussion

Automated processing and quantitative analysis of large amounts of phenotypic image data represent a critical point determining the efficiency and accuracy of trait computation. The deep learning-based tool for automated shoot image segmentation and phenotypic analysis developed in this study aims to address this challenging task.

Our experimental tests on three different plant types (arabidopsis, barley, and maize) and two different views (side and top) showed that the performance of the model during training improved over the number of iterations. On the other hand, the models (for all plants) trained for fewer than 40 iterations underperformed and showed worse validation performance. However, due to the dynamic reduction of the learning rate by a factor of 0.2, a stable performance with more than a 90% Dice coefficient was achieved for all shoot models. Additional information on the impact of the learning rate can be found in Supplementary Figure 1 accompanying this article. Moreover, the arabidopsis and maize models achieved low CE loss values, whereas the barley models have slightly higher CE loss values due to larger leaf color variation, such as yellow and brown leaves. This is reflected in the lower DC of the barley side- and top-view test images (0.9384 and 0.9617) compared to the arabidopsis top-view and maize models (>0.97). Also, the trained model exhibited a low DC value (0.9117) for the arabidopsis side view test image compared to the other models due to the low contrast of secondary stems, which have an intensity similar to the background pixels.

In addition, the trained arabidopsis top-view model was validated on reference data sets including examples of stressed and multi-tray experiments. Our experimental results showed that the model achieved a remarkably high DC of 0.9664 (stressed plants) and 0.9873 (multi-tray images) on these unseen data. However, small noisy background objects with an intensity and pattern similar to the leaves require the additional application of morphological operations (e.g., a minimum cluster size filter), which are also available in the DeepShoot GUI tool. Furthermore, the model achieved a very high DC (>0.9) especially on untrained images with a different background from the Scharr et al. data set. Overall, our results indicate that a CNN model trained on a particular set of images can also be applied to unseen data exhibiting similar plant shoot patterns but different background regions.

The performance of the proposed U-net was compared with the shallow learning neural network. Thereby, it was shown that most of the arabidopsis and maize images have a relatively low discrepancy between the predicted DC of both algorithms, because these images contain mostly high contrast green color pixels for the target structures. In contrast, the shallow neural network exhibited a significantly lower DC on barley images. We attribute this observation to the fact that barley plants have more variable color fingerprints, including brown and yellow leaves. This shows that the shallow neural network is only capable of segmenting high contrast shoot structures, whereas the U-net model is capable of segmenting both high contrast and color-altered shoot structures. This is because CNN frameworks are capable of generating multi-level features, including neighborhood information, color, spatial patterns, and textural features, whereas in the shallow learning method only neighborhood information is calculated. This richer information is what makes DeepShoot outperform shallow networks. Furthermore, tests of the computational performance of the shallow neural network vs. the proposed U-net model demonstrated the superior performance of the latter. In summary, the DeepShoot tool enables users to perform segmentation and analysis of plant shoot images faster and more accurately in comparison to the shallow neural network.
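As mentioned above, small background objects resembling plant pixels can be suppressed with a minimum-cluster-size filter; a one-line scikit-image sketch, with the threshold of 100 pixels being an arbitrary illustrative choice:

```python
from skimage.morphology import remove_small_objects

# Drop all connected components smaller than 100 pixels (illustrative value).
clean_mask = remove_small_objects(mask.astype(bool), min_size=100)
```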
Furthermore, the performance of the proposed U-net model was compared with vgg19, resnet50, and xception encoder backbones. Thereby, it was observed that the lower-depth architecture vgg19 achieved better results in comparison to deeper architectures such as resnet50 and xception, which tend to overfit. This can be attributed to the higher complexity of these multi-layer networks, which generate too many redundant features. However, the vgg19 model still contains a large number of convolution layers with trainable weights, which makes it 10 times larger in size than our proposed U-net. Therefore, our proposed model achieves optimal results at a lower level of complexity, which enables high-throughput plant phenotyping in real time on systems with both lower and higher hardware configurations.

It is known that U-net captures not only color but also spatial pattern information. From this perspective, one can expect larger segmentation errors from applying DeepShoot to optical scenes strongly deviating from the plant and background structures used in our model training. Nevertheless, our tests with unseen shoot images indicated that the present CNN framework can also be applied to the analysis of quite different optical scenes or field-like images as long as the target plant structures are optically somewhat similar to the images used in our training sets. Users are free to try and evaluate the performance of the provided segmentation models on their particular images. From that perspective, there are no restrictions other than the requirement of an RGB image with a size ≥256 x 256. Moreover, the segmentation of thin or twisted leaves, flowers, as well as shadowed or light-reflecting regions (such as metallic surfaces) is more prone to misclassification, which in turn may lead to fracturing of targeted structures or falsely segmented background regions. Nevertheless, improvements in model accuracy and generalizability can certainly be expected by extending the training set of ground truth images with more and more variable data, in particular, more examples of stressed/aged phenotypes exhibiting non-green colors, e.g., brown, yellow, or red leaves. Furthermore, the tool can be extended by automated detection of the plant type and the camera view (side or top), which have to be manually selected from the list of pre-trained CNN models in the present implementation. Finally, further investigations are required to quantitatively assess and compare different model architectures as well as the performance of binary vs. multi-class segmentation models.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Author Contributions

NN, MH, and EG conceived, designed, and performed the computational experiments, analyzed the data, wrote the manuscript, prepared figures and tables, and reviewed drafts of the manuscript. KN executed the laboratory experiments, acquired image data, and reviewed drafts of the manuscript. FS read and corrected the drafts of the manuscript. TA co-conceptualized the study and reviewed the manuscript. All authors contributed to the article and approved the manuscript in its present form.

Funding

This study was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - HE 9114/1-1. MH was supported by the European Regional Development Fund-Project SINGING PLANT (No. CZ.02.1.01/0.0/0.0/16 026/0008446).
Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpls.2022.906410/full#supplementary-material

References

Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., et al. (2016). Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv [Preprint] arXiv:1603.04467. doi: 10.48550/arXiv.1603.04467

Adams, J., Qiu, Y., Xu, Y., and Schnable, J. C. (2020). Plant segmentation by supervised machine learning methods. Plant Phenome J. 3, e20001. doi: 10.1002/ppj2.20001

Agostinelli, F., Hoffman, M., Sadowski, P., and Baldi, P. (2014). Learning activation functions to improve deep neural networks. arXiv [Preprint] arXiv:1412.6830. doi: 10.48550/arXiv.1412.6830

Badrinarayanan, V., Kendall, A., and Cipolla, R. (2017). Segnet: a deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39, 2481–2495. doi: 10.1109/TPAMI.2016.2644615

Bai, W., Sinclair, M., Tarroni, G., Oktay, O., Rajchl, M., Vaillant, G., et al. (2018). Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J. Cardiovasc. Magn. Reson. 20, 65. doi: 10.1186/s12968-018-0471-x

Bengio, Y. (2012). “Practical recommendations for gradient-based training of deep architectures,” in Neural Networks: Tricks of the Trade (Montreal, QC: Springer), 437–478.

Chollet, F. (2017). “Xception: deep learning with depthwise separable convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, HI: IEEE), 1251–1258.

Choudhury, S. D., Bashyam, S., Qiu, Y., Samal, A., and Awada, T. (2018). Holistic and component plant phenotyping using temporal image sequence. Plant Methods 14, 1–21. doi: 10.1186/s13007-018-0303-x

Crimi, A., Bakas, S., Kuijf, H., Menze, B., and Reyes, M. (2018). “Brainlesion: glioma, multiple sclerosis, stroke and traumatic brain injuries,” in Third International Workshop, BrainLes 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, Revised Selected Papers, Vol. 10670 (Quebec City, QC: Springer).

Douarre, C., Schielein, R., Frindel, C., Gerth, S., and Rousseau, D. (2018). Transfer learning from synthetic data applied to soil-root segmentation in x-ray tomography images. J. Imaging 4, 65. doi: 10.3390/jimaging4050065

Dunne, R. A., and Campbell, N. A. (1997). “On the pairing of the softmax activation and cross-entropy penalty functions and the derivation of the softmax activation function,” in Proceedings of the 8th Australasian Conference on Neural Networks, Vol. 181 (Melbourne, VIC: Citeseer), 185.

Fahlgren, N., Gehan, M. A., and Baxter, I. (2015). Lights, camera, action: high-throughput plant phenotyping is ready for a close-up. Curr. Opin. Plant Biol. 24, 93–99. doi: 10.1016/j.pbi.2015.02.006

He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Las Vegas, NV: IEEE), 770–778.
Henke, M., Junker, A., Neumann, K., Altmann, T., and Gladilin, E. (2019). Comparison and extension of three methods for automated registration of multimodal plant images. Plant Methods 15, 1–15. doi: 10.1186/s13007-019-0426-8

Henke, M., Junker, A., Neumann, K., Altmann, T., and Gladilin, E. (2020). A two-step registration-classification approach to automated segmentation of multimodal images for high-throughput greenhouse plant phenotyping. Plant Methods 16, 1–10. doi: 10.1186/s13007-020-00637-x

Henke, M., Neumann, K., Altmann, T., and Gladilin, E. (2021). Semi-automated ground truth segmentation and phenotyping of plant structures using k-means clustering of eigen-colors (kmseg). Agriculture 11, 1098. doi: 10.3390/agriculture11111098

Ioffe, S., and Szegedy, C. (2015). Batch normalization: accelerating deep network training by reducing internal covariate shift. arXiv [Preprint] arXiv:1502.03167. doi: 10.48550/arXiv.1502.03167

Jha, R. R., Jaswal, G., Gupta, D., Saini, S., and Nigam, A. (2020). Pixisegnet: pixel-level iris segmentation network using convolutional encoder-decoder with stacked hourglass bottleneck. IET Biometr. 9, 11–24. doi: 10.1049/iet-bmt.2019.0025

Joseph, V. R. (2022). Optimal ratio for data splitting. Stat. Anal. Data Min.: ASA Data Sci. J. doi: 10.1002/sam.11583. [Epub ahead of print].

Kingma, D. P., and Ba, J. (2014). Adam: a method for stochastic optimization. arXiv [Preprint] arXiv:1412.6980. doi: 10.48550/arXiv.1412.6980

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). “Imagenet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems (New York, NY), 1097–1105.

Lee, U., Chang, S., Putra, G. A., Kim, H., and Kim, D. H. (2018). An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis. PLoS ONE 13, e0196615. doi: 10.1371/journal.pone.0196615

Li, L., Zhang, Q., and Huang, D. (2014). A review of imaging techniques for plant phenotyping. Sensors 14, 20078–20111. doi: 10.3390/s141120078

Li, X., Chen, S., Hu, X., and Yang, J. (2018). “Understanding the disharmony between dropout and batch normalization by variance shift,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (Long Beach, CA: IEEE).

Marmanis, D., Wegner, J. D., Galliani, S., Schindler, K., Datcu, M., and Stilla, U. (2016). Semantic segmentation of aerial images with an ensemble of cnss. ISPRS Ann. Photogramm. Remote Sens. Spatial Inform. Sci. 3, 473–480. doi: 10.5194/isprs-annals-III-3-473-2016

MATLAB Optimization Toolbox (2021). Mathworks, Matlab and Statistics Toolbox Release 2021a. Natick, MA: The MathWorks.

Miller, N. D., Parks, B. M., and Spalding, E. P. (2007). Computer-vision analysis of seedling responses to light and gravity. Plant J. 52, 374–381. doi: 10.1111/j.1365-313X.2007.03237.x

Minervini, M., Scharr, H., and Tsaftaris, S. A. (2015). Image analysis: the new bottleneck in plant phenotyping [applications corner]. IEEE Signal Process Mag. 32, 126–131. doi: 10.1109/MSP.2015.2405111

Misra, T., Arora, A., Marwaha, S., Chinnusamy, V., Rao, A. R., Jain, R., et al. (2020). Spikesegnet-a deep learning approach utilizing encoder-decoder network with hourglass for spike segmentation and counting in wheat plant from visual imaging. Plant Methods 16, 1–20. doi: 10.1186/s13007-020-00582-9
Pape, J.-M., and Klukas, C. (2015). “Utilizing machine learning approaches to improve the prediction of leaf counts and individual leaf segmentation of rosette plant images,” in Proceedings of the Computer Vision Problems in Plant Phenotyping (CVPPP) (Gatersleben), 1–12.

Peng, C., Zhang, X., Yu, G., Luo, G., and Sun, J. (2017). “Large kernel matters-improve semantic segmentation by global convolutional network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (Beijing).

Philipp, I., and Rath, T. (2002). Improving plant discrimination in image processing by use of different colour space transformations. Comput. Electron. Agric. 35, 1–15. doi: 10.1016/S0168-1699(02)00050-9

Pound, M. P., Atkinson, J. A., Townsend, A. J., Wilson, M. H., Griffiths, M., Jackson, A. S., et al. (2017). Deep machine learning provides state-of-the-art performance in image-based plant phenotyping. Gigascience 6, gix083. doi: 10.1093/gigascience/gix083

Ronneberger, O., Fischer, P., and Brox, T. (2015). “U-net: convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention (Freiburg: Springer), 234–241.

Santurkar, S., Tsipras, D., Ilyas, A., and Madry, A. (2019). How does batch normalization help optimization? arXiv [Preprint] arXiv:1805.11604. doi: 10.48550/arXiv.1805.11604

Scharr, H., Minervini, M., French, A. P., Klukas, C., Kramer, D. M., Liu, X., et al. (2016). Leaf segmentation in plant phenotyping: a collation study. Mach. Vis. Appl. 27, 585–606. doi: 10.1007/s00138-015-0737-3

Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv [Preprint] arXiv:1409.1556. doi: 10.48550/arXiv.1409.1556

Van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., Yager, N., et al. (2014). scikit-image: image processing in Python. PeerJ 2, e453. doi: 10.7717/peerj.453

Van der Walt, S., Colbert, S. C., and Varoquaux, G. (2011). The numpy array: a structure for efficient numerical computation. Comput. Sci. Eng. 13, 22–30. doi: 10.1109/MCSE.2011.37

Wang, L., Guo, S., Huang, W., and Qiao, Y. (2015). Places205-vggnet models for scene recognition. arXiv [Preprint] arXiv:1508.01667. doi: 10.48550/arXiv.1508.01667

Zou, K. H., Warfield, S. K., Bharatha, A., Tempany, C. M., Kaus, M. R., Haker, S. J., et al. (2004). Statistical validation of image segmentation quality based on a spatial overlap index: scientific reports. Acad. Radiol. 11, 178–189. doi: 10.1016/S1076-6332(03)00671-8

Keywords: greenhouse image analysis, image segmentation, deep learning, U-net, quantitative plant phenotyping

Citation: Narisetti N, Henke M, Neumann K, Stolzenburg F, Altmann T and Gladilin E (2022) Deep Learning Based Greenhouse Image Segmentation and Shoot Phenotyping (DeepShoot). Front. Plant Sci. 13:906410. doi: 10.3389/fpls.2022.906410

Received: 28 March 2022; Accepted: 14 June 2022; Published: 13 July 2022.

Edited by: Dirk Walther, Max Planck Institute of Molecular Plant Physiology, Germany

Reviewed by: Joshua Koh, Department of Jobs, Precincts and Regions, Agriculture Victoria, Australia; Jason Adams, Sandia National Laboratories (DOE), United States

Copyright © 2022 Narisetti, Henke, Neumann, Stolzenburg, Altmann and Gladilin. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Narendra Narisetti, [email protected]; Evgeny Gladilin, [email protected] †Present address: Michael Henke, Plant Sciences Core Facility, CEITEC-Central European Institute of Technology, Masaryk University, Brno, Czechia
http://blog.thehun.net/tag/oral/
Ever heard of Monty Python? They're a British comedy group known for creating numerous sketches and songs for their comedy series Monty Python's Flying Circus. They created a gem of a song called 'Sit on My Face'. I'm sure you'll enjoy it: 'Sit on My Face' was released in 1982 and was banned in the US for its lyrics. A radio station that played the song on air in 1992 was charged with FCC indecency fines and ended up having to pay $9,200 for airing it. I think the lyrics won't be too offensive for our Hun audience though :)… Enjoy… And happy New Year!

A couple waiting for pizza was charged with indecent behaviour last year. Both partners pleaded guilty. It was kind of senseless to do otherwise: everything was caught on a security camera. The couple ordered a stuffed crust pizza and, thinking of that nice, juicy, stuffed crust, got their juices flowing. Unable to wait till they got home, they decided to go for it, right then and there, against the Domino's pizza counter. The security cam registered them having oral sex and intercourse for 18 minutes, quite an accomplishment on an empty stomach! The couple was spared jail time. They were given 12-month community orders and a six-month curfew which prohibits them from leaving their houses between 7pm and 7am. We would suggest firing the Domino's worker that took 18 minutes to prepare a stuffed crust pizza in the first place!

Word has it the Michigan Senate passed a bill that effectively bans all sorts of sodomy, anal and oral, gay and non-gay (sec. 158)! Punishment is set at up to 15 years in prison! Sodomy is generally anal or oral sex between people, or a sexual activity between a person and a non-human. The word sodomy originates from the story of Sodom and Gomorrah, from the Book of Genesis in the Bible. A thorough report on the matter reveals that these 'punishable crimes' are a favorite pastime in prison though!
https://community.intel.com/t5/Intel-Moderncode-for-Parallel/Beginning-With-Parallel-Programming/td-p/764434
I am new to parallel programming. So far I have learned about CUDA (a parallel programming language for NVIDIA GPUs) and OpenCL (for AMD Stream, NVIDIA, and Intel). I have installed the AMD APP SDK on Windows and Fedora 16. But since I do not have Visual Studio on Windows and have some graphics incompatibility with Fedora, I am trying OpenCL with an Intel CPU. I have an Intel Core i3 processor, so I have installed the Intel SDK for OpenCL on Windows 7 (64-bit). I am unable to run the OpenCL samples because Visual Studio is not installed on my system. But I want to start with OpenCL programming using the Intel Offline Compiler. I have read the user guide and know how to build, show assembly code, etc. using it. Now I need to know how to write a simple OpenCL program and execute it using the Offline Compiler, e.g., a simple vector addition. Can anyone help me find resources for guidance? I have read and understood this: http://opencl.codeplex.com/wikipage?title=OpenCL%20Tutorials%20-%201 Still, I didn't get how to run a complete single program in the Intel Offline Compiler. I want to write an executable program. Actually, I am asking whether there is some alternative to MS Visual Studio for execution?

I'm a little bit confused by the term '...for execution'. Do you mean a compilation of sources? Visual Studio is an Integrated Development Environment (IDE) and allows you to do lots of things, like editing, compilation and linking, debugging, profiling, etc. You could also try to use the Eclipse IDE.
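A minimal vector-addition example is sketched below; PyOpenCL is used here as the host layer purely for illustration (any OpenCL host API would do), and the kernel string is plain OpenCL C that can equally be saved as a .cl file and compiled with the Intel Offline Compiler:

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c) {
    int i = get_global_id(0);
    c[i] = a[i] + b[i];
}
"""

ctx = cl.create_some_context()          # pick an available OpenCL device
queue = cl.CommandQueue(ctx)
a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
c_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)
prog = cl.Program(ctx, KERNEL).build()
prog.vec_add(queue, a.shape, None, a_buf, b_buf, c_buf)
c = np.empty_like(a)
cl.enqueue_copy(queue, c, c_buf)        # read the result back to the host
assert np.allclose(c, a + b)
```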
https://seamenschurch-archives.org/collections/show/59
Capt. Jim Troup has spent 60+ years working on the river, eventually working his way up to Captain. Jim has spent the latter part of his career working for Marquette Transportation and is still on the job now in his late seventies. Johnathan Thayer (interviewer); James Troup (interviewee) 2014 April 17 - Oral Histories - Troup, James