url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://code.cubewise.com/canvas-docs/load-balancing-with-canvas
code
The following article explains the concept of load balancing and how it can be implemented with Canvas. What is Load Balancing? Load balancing is very common in the computing world; companies such as Netflix, whose many users all log in to the same application, use it to optimize the use of their servers. To avoid all users connecting to the same server, a load balancer can be configured to distribute users across different servers. A good definition of load balancing can be found in the following link: Load balancing with Canvas Canvas uses an Apache Tomcat server, and there is plenty of documentation online explaining how load balancing can be set up with Apache Tomcat: Keep the CWAS folder in sync One important thing to consider before setting up load balancing with Canvas is that you will have to install Canvas on each server, which means you will have one CWAS folder per server. The CWAS folder contains the Canvas application (HTML, JS files, etc.), so you will need to keep this folder in sync to make sure that all your users have access to the same Canvas application.
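As a sketch of the setup described above (the host names, ports, and route names are placeholders, not from the article), a front-end Apache HTTP Server could balance two Tomcat-hosted Canvas instances with mod_proxy_balancer, using sticky sessions so a logged-in user stays on one node:

```apache
# Hypothetical load balancer in front of two Canvas/Tomcat nodes.
<Proxy "balancer://canvascluster">
    BalancerMember "http://canvas1.example.com:8080" route=node1
    BalancerMember "http://canvas2.example.com:8080" route=node2
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        "/canvas" "balancer://canvascluster/canvas"
ProxyPassReverse "/canvas" "balancer://canvascluster/canvas"
```

The `route` values would need to match the `jvmRoute` attribute in each Tomcat's server.xml so that session stickiness works.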
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00549.warc.gz
CC-MAIN-2019-35
1,093
7
https://museum.syssrc.com/artifact/software/685/
code
Apple Writer was written by Paul Lutus and published in 1979 by Apple Computer as a word processor for the Apple II family of computers. The original version supported a 40-column text display; all text was displayed in uppercase, but lowercase could be toggled with the ESC key. Apple Writer did not display formatting on screen, but it would appear when the document was printed; formatting was specified with dot-commands, each requiring its own line. Compute!'s 1980 review claimed that the review was written with Apple Writer and that "I have looked at other text editors for the Apple, some of which were overloaded with features. Given the hardware limitations of the Apple II, I feel that Apple Writer is a very useful document creation tool". In 1985, II Computing ranked Apple Writer third among top Apple II software, based on sales and market-share data.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250611127.53/warc/CC-MAIN-20200123160903-20200123185903-00048.warc.gz
CC-MAIN-2020-05
887
3
http://www.solitaryroad.com/c607b.htm
code
Prove. Let a function f(z) be analytic with a continuous derivative f '(z) within and on the boundary of a region R, either simply or multiply-connected, and let C be the entire boundary of R. Then Proof. We first prove the theorem for a simply-connected region. Let the region R be simply-connected. Let f(z) = u(x, y) + i v(x, y). Then the integral around the boundary C is given by Since f(z) is analytic with a continuous derivative in R and on the boundary C it follows that the conditions for Green’s theorem are met. Applying Green’s theorem to the two integrals in the right member we get Because f(z) is analytic the Cauchy-Riemann equations are satisfied and we see that both integrands in the right member of 2) are zero. Thus Note. We have given the proof with the added condition that f '(z) be continuous in R but a proof can be given without this condition. See Spiegel. Complex Variables. (Schaum) Chap. 4. Proof for multiply-connected regions: Consider the multiply-connected region shown in Fig. 1 where cross-cuts have been constructed connecting the interior and exterior boundaries thus transforming the multiply-connected region into a simply-connected one where Cauchy’s theorem for simply-connected regions applies. We let the curve C of the theorem be the boundary of the simply-connected region shown in Fig. 1. The theorem states that The total amount of curve traversed is equal to the boundaries C, C1, C2, C3, C4 and C5 plus all the cross-cuts traversed. However, each cross-cut is traversed in opposite directions and the line integrals on the cross-cuts cancel each other out so the net amount of curve traversed that counts in the total is simply the boundaries C, C1, C2, C3, C4 and C5 . And this is what Cauchy’s theorem of multiply-connected regions states. Spiegel. Complex Variables. (Schaum) Chap. 4.
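The displayed equations lost in extraction can be reconstructed as follows (a sketch consistent with the stated proof, writing f(z) = u + iv and dz = dx + i dy):

```latex
% The integral around the boundary C, split into real line integrals:
\oint_C f(z)\,dz = \oint_C (u\,dx - v\,dy) + i\oint_C (v\,dx + u\,dy)
% Applying Green's theorem to each line integral over the region R:
\oint_C (u\,dx - v\,dy) = \iint_R \Bigl(-\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\Bigr)\,dx\,dy,
\qquad
\oint_C (v\,dx + u\,dy) = \iint_R \Bigl(\frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}\Bigr)\,dx\,dy
% The Cauchy-Riemann equations u_x = v_y and u_y = -v_x make both
% integrands vanish, hence:
\oint_C f(z)\,dz = 0
```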
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00212.warc.gz
CC-MAIN-2017-47
1,847
9
http://stackoverflow.com/questions/5924785/how-to-implement-set-a-data-breakpoint
code
I need to generate an interrupt when a memory location changes or is written to. From an ISR, I can trigger a blue screen which gives me a nice stack trace with method names. - Testing the value in the timer ISR. Obviously this doesn't give satisfying results. - I discovered the bochs virtual machine. It has a basic builtin debugger that can set data breakpoints and stop the program. But I can't seem to generate an interrupt at that point. - bochs allows one to connect a gdb to it. I haven't been able to build it with gdb support though. - A kind of "preview instruction" interrupt that triggers for every instruction before executing it. The set of used memory-writing instructions should be pretty manageable, but it would still be a PITA to extract the address I think. And I think there is no such interrupt. - A kind of "preview memory access" interrupt. Again, I don't think it's there. - Abuse paging. Mark the page of interest as not present and test the address in the page fault handler. One would still have to distinguish read and write operations and I think the page fault handler doesn't get to know the exact address, just the page number.
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396459.32/warc/CC-MAIN-20160624154956-00024-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
1,161
7
https://community.plm.automation.siemens.com/t5/Solid-Edge-Student-Edition-Forum/Clicked-on-Smart-Dimension-to-the-Line-Turns-Yellow-then-Blue-no/td-p/245836
code
The issue I am having: driving dimensions control (or drive) the elements on which they are placed. Driving dimensions are red; driven dimensions are blue. While following the drawing in the Introduction to Modeling Parts with Ordered Features, at the step to select the Smart Dimension, I select the line and the dimension goes to yellow; when I click, the line goes blue with no dimension edit box. So I tried another Part Modeling test drive: I went to the step Start the Rectangle command and the step with the 3-point rectangle, finished the rectangle, and as soon as I clicked, the driving dimensions came on. Just curious, I tried the Smart Dimension again: the line went to yellow, then blue. All of this is done after I deleted Student Edition; the previous edition did the same thing. I don't know what I did and I cannot find it. I sent the issue to the forum, no luck. Any help will be appreciated. Might be useful if you could make a screen capture movie using Jing ...seems to be a theme going on here, but it'd be nice to see it, and have an AH HA! moment. Also....can you list your computer specs. Sean Cresswell Design Manager Streetscape Ltd Solid Edge ST10 [MP3] Classic [x2 seats] Windows 10 - Quadro P2000 The reason your dimensions are blue in the ordered file is because the driving lock was off when you placed them. After starting the dimension command, click the lock on the command bar so the lock is "locked" before placing the dimensions. Alternatively you can click on existing dimensions and do the same... Sent from my Motorola Electrify using Tapatalk 2
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814036.49/warc/CC-MAIN-20180222061730-20180222081730-00001.warc.gz
CC-MAIN-2018-09
1,574
9
http://dba-oracle.com/t_oracle_add_years.htm
code
I need a SQL statement to identify all rows that are more than two years old and set my status flag based on my DATE column. Is there a minus_years function in Oracle SQL? If not, how can I make a SQL function to subtract two years in my query? Can I use an add_years built-in function (BIF)? Answer: Because Oracle is so flexible, there are several ways to satisfy your request. Also note here how to do Oracle arithmetic functions, and you can also use date functions. This primitive example demonstrates the concept, but it will not always work because leap years have 366 days: myrow_date < trunc(sysdate) - (365*2) There is no add_years BIF in Oracle; a correct solution uses the add_months BIF with a 24-month offset, something like: myrow_date < add_months(trunc(sysdate), -24)
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00760.warc.gz
CC-MAIN-2023-14
738
14
https://usenetfoal.web.app/unreal-engine-download-earlier-versions-loxa.html
code
Vehicles are now fully supported in Unreal Engine! This sample game features an off-road vehicle and a looping track within a desert setting. An event-driven plugin for the Thalmic Myo (https://www.thalmic.com/en/myo/). Has a modern rewritten architecture since v0.9.0; see the github repo for documentation. An Unreal Engine 4 implementation of the Polygonal Map Generator for generating islands found at http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/ - Jay2645/Unreal-Polygonal-Map-Gen Also, Oldunreal is not associated with Epic Entertainment or any possible subsidiaries, that's what makes it unofficial. This Direct3D 10 renderer for Unreal, Unreal Tournament, Deus Ex and Rune aims to provide a good, consistent looking and future proof renderer for these games. Find out what users are saying about Unreal Engine. Read user Unreal Engine reviews, pricing information and what features it offers. Unreal Download Free Full Game | Speed-New https://speed-new.com/unreal-full-pc-game Unreal Download Free Full Game is a first-person shooter video game developed by Epic MegaGames and Digital Extremes and published by GT Interactive in May 1998. It was powered by an original gameplay and In depth and exclusive coverage of Epic's open world and 'Boy and His Kite' realtime projects. Exporting our object from Blender | manualzz.com Unreal Engine 4 West Coast DevCon 2014 Launch iOS from Xcode • PC with remote Mac: – Build from PC with Visual Studio, using remote Mac with Xcode • Modify Engine\Saved\UnrealBuildTool\BuildConfiguration. Patch versions for the game itself carry 3-digit version numbers and often a letter (examples: 224v, 225f, 226b, 226f,..
A few special versions may be numbered slightly differently, for example one of the patches for the Macintosh is… Examples of operating systems that do not impose this limit include Unix-like systems, and Microsoft Windows NT, 95, 98, and ME which have no three character limit on extensions for 32-bit or 64-bit applications on file systems other than… Epic Games develops the Unreal Engine, a commercially available game engine which also powers their internally developed video games, such as Fortnite and the Unreal, Gears of War and Infinity Blade series. It was founded in December 1999 by Jeremy Gordon, Otavio Good, and Josh Adams. Download Unreal Engine for free to create stunning experiences for PC, console, mobile, VR and the Web. 5 Jun 2019 “The same Unreal Engine that powers the world's most popular video their licenses anew or upgraded from earlier versions of ARCHICAD – are Follow @UnrealEngine and download Unreal for free at unrealengine.com. Wwise Integrations for Unity and Unreal Engine can be downloaded and installed directly from the Wwise Launcher. Release Notes Older Versions. 3D Game Design with Unreal Engine 4 and BlenderJune 2016 Engine 4 now has support for Blender, which was not available in earlier versions. This has If you have your own Unreal project created in an older version of Unreal Engine then you need to upgrade your project to Unreal 4.18. To do this,. Clone or download Welcome to the FaceFX Unreal Engine 4 plugin source code! If you are upgrading from a previous version of the FaceFX UE4 Plugin, 7 Dec 2018 Download Unreal Engine 4.2.1 free. A complete set of game development tools ✓ Updated ✓ Free download. Vehicles are now fully supported in Unreal Engine! This sample game features an off-road vehicle and a looping track within a desert setting. Unreal Engine 4 is a suite of integrated tools for game developers to design and build games, simulations, and visualizations. 
Thanks for considering Unreal Engine 4 for your development needs. Here you will find a list of common questions answered to make informed decisions with little guesswork. Interfaces were only supported in Unreal Engine generation 3 and a few Unreal Engine 2 games. UnrealScript supported operator overloading, but not method overloading, except for optional parameters. Download PDF Unreal Engine 4 for Design Visualization: Developing Stunning Interactive Visualizations, Animations, and Renderings (Game Design) FREE Table of Contents 1) Series Introduction 2) Download & Install 3) Project Creation 4… Learn to design and build Virtual Reality experiences, applications, and games in Unreal Engine 4 through a series of practical, hands-on projects that teach you to create controllable avatars, user interfaces, and more. r/unrealengine: The official subreddit for the Unreal Engine by Epic Games, inc. right click on the old project file and switch engine version to the new one, or.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363515.28/warc/CC-MAIN-20211208144647-20211208174647-00028.warc.gz
CC-MAIN-2021-49
4,638
6
http://userbase.kde.org/index.php?title=Thread:KDE_UserBase_talk:About/translation&lqt_oldid=1283
code
We hadn't considered that. However, we are now assessing whether there are any practical difficulties (those pages linked from the footer aren't in the Main namespace). If there are no insurmountable problems we'll do it. Happy to hear that. It's friendly to users worldwide whose primary language is not English. It seems this page can be translated right now. So, do you guys plan to put Special:MyLanguage in the URL in the footer? More generally, I think it's good to test all links to make sure Special:MyLanguage is added to the URL.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382360/warc/CC-MAIN-20130516092622-00078-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
535
2
https://www.python.org/jobs/4126/
code
Python (Django) Developer Magno IT Recruitment Amsterdam, Holland, Netherlands Job Title: Python (Django) Developer We are looking for a Python (Django) Developer for Liberty Global in Amsterdam. The first contract will run until the end of December with the option of rolling extensions. A Senior Application Developer to be part of the Network Operations Team which provides network support to a big project and affiliate customers. The main projects of this team are Observer API development and network automation. The role of the contractor will mainly focus on backend application development and will replace the current developer, who is leaving. General responsibilities Provide support to the Network Operations team to introduce more automation and improve their day-to-day tasks by reducing repetitive work Ensure relevant processes are followed (Problem resolution, Change control and others) Tooling, Automation and software development Write reusable, testable and efficient code Design and implementation of data processing systems and software Implementation of low-latency and highly available APIs Integration of existing databases in the processing pipeline Support with implementation of user interfaces Networking and IP Skills Basic networking knowledge including: - Routing protocols - OSI model - IP and MAC addressing Must-have skills & knowledge: • High level of proficiency in Python and the Django framework • SQL knowledge, including query evaluation and optimisation techniques. • Experience in Object Oriented Programming/Design. • Knowledge of Linux internals and common UNIX daemons. • Knowledge of internet enabling technologies, protocols & networking. • Process oriented skills for troubleshooting, problem solving and problem resolution. Preferred skills & knowledge: • Degree in a related field. • Bash and Shell Scripting. • Prior experience of industry-standard software design and development principles such as Unit Testing, MVC Patterns. 
• Exposure to version control and collaboration systems such as GIT and Phabricator. • Elasticsearch Does this role spark your interest? Then please provide me with your most recent resume and contact details at [email protected], so that we can discuss this vacancy in more detail by phone! - No telecommuting - No Agencies Please Bash, Shell Scripting, Python, Django, Basic networking skills. Specialism Onsite Requirement *NO REMOTE WORK Relocation to Netherlands Required Work Permit can be provided, Own sponsor relocation About the Company - Contact: Hnin Ei Ei Tun - E-mail contact: [email protected] - Web: https://www.magno-it.nl/Vacancies.aspx?keywords=python&ctl01%24lstlocation=0&vacancy=116736735077
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00139.warc.gz
CC-MAIN-2019-47
2,733
20
http://www.idnforums.com/forums/66851-post1.html
code
I'm currently operating a $15 backorder service. I don't offer IDNs at this point, but would like to gauge the interest level. Please let me know if you would be interested in this. Also let me know if you would like to backorder any non-IDNs for $15, as this service is currently available. IDN backorders are now available!!! Place orders in Punycode and include the language for the lang tag. PM me or for more details check this thread:
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.60/warc/CC-MAIN-20170423031202-00115-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
441
4
https://www.theladders.com/job/analytics-architect-ecolab-chicago-il_43467762
code
The Analytics Architect will work as a member of the Commercial Digital Solutions Analytics team. This position will work with the latest cloud-based and on-premise Microsoft BI/DW solutions. This position is hands-on (does development) and provides technical leadership that will enhance the capabilities of the data warehouse environments used by our customers, field organization and back office associates. This position works closely with the business and the rest of the commercial digital solutions team to deliver value through Business Intelligence and analytics. What’s in It for You: - Opportunity to create a long-term career path based on your interests, skills and passion. - Access to best in class resources, tools, and technology. - Thrive in a company that values a culture of safety, sustainability, inclusiveness, and performance. What You Will Do: - Architect, design, build, implement, and maintain solutions to meet global business needs for the data warehouse and data mart environments using the Microsoft BI/DW stack - Works with the Principal Architect, BI director, and the business to define the architecture and strategy for BI/DW across the commercial digital solutions environment - Provides technical direction to ETL developers, BI developers, and application developers to architect, design, and implement solutions - Stays current on the technology, tools, and approaches in the BI/DW/Analytics/cloud space - Solves complex problems and develops proofs of concept to validate ideas or technologies - Works on multiple initiatives simultaneously - Creates and owns best practice standards and templates - Helps support the system and troubleshoots system-related issues - Bachelor’s and/or Master’s Degree in Information Technology - 8+ years of experience in Information Technology - 5+ years of experience architecting, designing, and developing BI/Data Warehouse solutions using the Microsoft BI/DW stack (e.g. 
SQL DW, ADLS, HDInsight, SQL Server, ADF, SSAS tabular, Power BI, Azure) - 5+ years of experience working with SQL, SQL stored procedures, Change data capture, and relational/multidimensional databases - 5+ years of experience with data architecture/design in data warehouse and transactional environments - 5+ years of experience working with BI tool architectures, functions, and features - 3+ years of experience with Azure cloud development - Experience with Big Data solutions such as HDInsight, data lakes, IOT, etc. and advanced analytics/data science - Visa sponsorship is not provided for this opportunity - Proven ability to quickly learn new technologies - Knowledge of service data and service-based businesses - Experience with agile methodologies - Strong problem-solving orientation - Can manage multiple competing tasks/projects at the same time - Strong team and customer service focus
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400208095.31/warc/CC-MAIN-20200922224013-20200923014013-00043.warc.gz
CC-MAIN-2020-40
2,855
29
https://www.gamefront.com/forums/aom-problems-errors-and-help/aom-eso-login
code
I had a problem auto-updating the game so I went to the AoM site and d/l'd the patch there. Now when I try to login it locks up for a sec then says "ESO is not accepting Login requests". And when I try to make a new account it says "ESO is not accepting account creation". The last time I was on was about 2 weeks ago, but since then I wiped my hard drive. I know it's not my password. What should I do? Or is the AoM online server down? These msgs appeared on July 31, Aug 1, Aug 2. thx for any help you can give :uhm: :uhoh: have you tried to reinstall AoM without the patch? or at least reinstall AoM and the patch? not sure exactly what the problem is..but those are always good trouble shooting ideas. Hope that helps. :) I tried that once when I was having troubles auto-updating the patch but it never worked. I guess I'll try again for this.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107919459.92/warc/CC-MAIN-20201031151830-20201031181830-00591.warc.gz
CC-MAIN-2020-45
839
6
http://www.slideserve.com/viveka/the-earth-as-a-system
code
The Earth as a System. Earth's Spheres: Atmosphere, Hydrosphere, Lithosphere/Geosphere, Biosphere, Cryosphere. Atmosphere: the gaseous sphere that covers the Earth; it consists of a mixture of gases composed primarily of nitrogen, oxygen, carbon dioxide, and water vapor.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121305.61/warc/CC-MAIN-20170423031201-00609-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
308
2
http://simbad.u-strasbg.fr/simbad/sim-ref?bibcode=2011A%26A...528A.144K
code
Astronomy and Astrophysics, volume 528A, 144-144 (2011/4-1) Evolution of the binary population in young dense star clusters. KACZMAREK T., OLCZAK C. and PFALZNER S. Abstract (from CDS): Field stars are not always single stars, but can often be found in bound double systems. Since binary frequencies in the birth places of stars, young embedded clusters, are sometimes even higher than on average, the question arises of how binary stars form in young dense star clusters and how their properties evolve to those observed in the field population. We assess the influence of stellar dynamical interactions on the primordial binary population in young dense cluster environments. We perform numerical N-body simulations of Orion Nebula Cluster-like star cluster models including primordial binary populations, using the simulation code NBODY6++. We find two remarkable results that have not yet been reported: The first is that the evolution of the binary frequency in young dense star clusters proceeds predictably, independent of its initial value. The time evolution of the normalized number of binary systems has a fundamental shape. The second main result is that the mass of the primary star is of vital importance to the evolution of the binary. The more massive a primary star, the lower the probability that the binary is destroyed by gravitational interactions. This results in a higher binary frequency for stars more massive than 2M☉ compared to the binary frequency of lower mass stars. The observed increase in the binary frequency with primary mass is therefore most likely not due to differences in the formation process but can be entirely explained as a dynamical effect. Our results allow us to draw conclusions about the past and the future number of binary systems in young dense star clusters and demonstrate that the present field stellar population has been influenced significantly by its natal environments.
binaries: general - galaxies: clusters: general - galaxies: clusters: individual: ONC - methods: numerical
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00205.warc.gz
CC-MAIN-2019-51
2,037
6
https://www.emacswiki.org/emacs/2020-12-24
code
I’ve been making some changes to the wiki engine, adding header, nav, and footer elements instead of using divs with classes. This means that some CSS files might break in unexpected ways. If you notice anything weird, please let me know by leaving a comment here. I also changed the default CSS to automatically pick the dark theme if your browser says it prefers the dark theme. – Alex Schroeder
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487649731.59/warc/CC-MAIN-20210619203250-20210619233250-00458.warc.gz
CC-MAIN-2021-25
402
2
https://www.pcmag.com/article2/0%2C2817%2C1166845%2C00.asp
code
- Assign Custom Icons to Windows Explorer Folders - Folders: Download It Here As mentioned above, Folders creates or modifies a hidden file called Desktop.ini in each folder to give that folder a custom icon. However, there's a bit more to the story: Each folder must also be marked with the system attribute. The latest version of Shell32.dll provides functions for this purpose, but they don't work! The fancy icon-selection dialog is activated by a simple call to the undocumented Windows API function PickIconDlg. Folders provides its own uninstall routine, because a standard uninstaller couldn't manage the option of finding and removing all the custom icons added by Folders. One of the biggest challenges in writing Folders was getting Windows Explorer to notice changes in a folder's icon; with advice from Microsoft, I developed a multipronged approach that seems to do the job.
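As a sketch of the mechanism described (the icon path below is a made-up example), the hidden Desktop.ini holds the custom icon reference, and the folder itself must carry the system attribute for Explorer to honor it:

```ini
; Desktop.ini inside the customized folder (hidden file)
[.ShellClassInfo]
IconFile=C:\Icons\MyFolder.ico
IconIndex=0
```

The folder is then marked with the system attribute, e.g. `attrib +s C:\MyFolder`, so that Explorer reads the file.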
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688671.43/warc/CC-MAIN-20170922055805-20170922075805-00337.warc.gz
CC-MAIN-2017-39
888
3
https://dergvisuppthemo.cf/peoplesoft-for-the-oracle-dba.php
code
Peoplesoft for the Oracle DBA. Table of contents: A table of contents is not available for this title. About David Kurtz: David Kurtz began working with version 5. In , he joined PeopleSoft U. Since there was virtually no internal documentation about how PeopleSoft related to the database, he started by working out the relationship between the application and database for himself. This led to fixing performance problems in PeopleSoft systems. Soon enough, David was spending all of his time on performance-related consultancy. There, he provides performance and technical consultancy, mostly to PeopleSoft users, mostly on Oracle. Since then, Kurtz has learned to apply principles of response-based performance, not just to the database, but holistically to the entire application stack.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396300.22/warc/CC-MAIN-20200527235451-20200528025451-00420.warc.gz
CC-MAIN-2020-24
1,902
22
http://www.1-script.com/forums/laptop/toshiba-tecra-510cdt-lock-up-37619-.htm
code
Toshiba Tecra 510CDT Lock Up - Posted July 9, 2007, 4:21 am question is a D-Link DWL-G650 Revision B5. Anyway, I installed the drivers just fine. I turned off the pc, put the card in, and powered back up. Once I got to Windows 98, it did the found new hardware thing, and I pointed it to the driver. It told me to restart. Here's where the problem starts. When turning on my pc, I see the bios screen, then I see Windows 98 Startup, then I see something I've never seen before. It says "Panning Support Driver V2.1" and below that is a C:> and below that is the underscore blinking. I could leave it here for days with no results. Without the card in, Windows boots fine, but if I insert it in Windows the system locks and I need a reboot. I know the system in question is older, but any help would be greatly appreciated. On a side note, even though it is an older system, it does support 32-bit CardBus. Thanks! Re: Toshiba Tecra 510CDT Lock Up might have changed or reinstalled the video driver) or you might simply have not noticed it before. Other than that I don't know what's going on, but frankly in my view this machine is too old to screw with.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719453.9/warc/CC-MAIN-20161020183839-00419-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
1,258
23
https://www.superprof.co.in/profession-data-scientist-tech-mba-would-like-teach-math-science-cbse-icse-bangalore-kolkata.html
code
My teaching approach is very smooth and easy. I always make sure that the concept of a topic is 100% clear; for that, I use a very appropriate and convenient method. I prepare question papers for weekly evaluation of performance, and I also take care of the students' daily school homework and tests. By profession I am a data scientist and business analyst. I have done a B.Tech (Electrical Engineering) and an MBA (Marketing, Data Analysis), and I have more than 6 years of teaching experience. As a student, I had an excellent academic background, with olympiad certificates in maths and science (A, A+). I have an excellent approach to making a student understand any complex topic, like geometry, calculus, logarithms, physics, algebra, etc. Fees depend on the distance of travel, the mode of teaching, and the class. Lessons offered at her home, at your home, or by webcam.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00225.warc.gz
CC-MAIN-2021-04
1,227
11
http://stackoverflow.com/questions/11136128/why-is-this-reverse-string-function-giving-a-seg-fault?answertab=votes
code
It's almost certainly because i and j pass each other (whether they're indexes or pointers is irrelevant here). For example, any string with an even number of characters will have this problem. Consider the following sequence for the string "drum" (indexes 0123): s = "drum", i = 0, j = 3, swap d and m. s = "mrud", i = 1, j = 2, swap r and u. s = "murd", i = 2, j = 1, swap u and r, oops, we've passed each other. s = "mrud", i = 3, j = 0, swap m and d. s = "drum", i = 4, j = -1, swap who knows what, undefined behaviour. Note that for a string with an odd length, you won't have this problem since i eventually equals j (at the middle character). An i < j check also fixes this problem since it detects both equality of the pointers and the pointers passing each other.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398453805.6/warc/CC-MAIN-20151124205413-00309-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
752
13
https://tales-of-trails.com/trails-in-kansai/
code
The website which hosted my trails was sold. This is a small tragedy for this project, since the new owner doesn't support free access. It means you have to sign up and pay on a monthly basis to be able to download a trail provided by the community. I'm not even able to download my own trails anymore. This is against my philosophy and I strongly refuse to support this kind of business! Well, at this point it means some inconvenience, since my technical knowledge is limited and this collection of trails is a small hobby project. I provide all the trails of Hiking around Ōsaka. Click the following link to download the zip folder containing both a .kml and a .gpx file of the chosen trail. I recommend loading the .kml file into maps.me, an OpenStreetMap-based app which also works offline.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313589.19/warc/CC-MAIN-20190818022816-20190818044816-00475.warc.gz
CC-MAIN-2019-35
802
2
https://forge.puppetlabs.com/tags/pam-access?page_size=100
code
Found 3 modules tagged with 'pam-access': a Puppet module for wrapping defined types from several Puppet modules that don't appear in Puppet's ENC; Foreman; Manage access.conf entries. Total downloads for all releases of this module. Composite score (between 0 and 5) for the current release of this module, based on user feedback and automatic module quality scoring.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224643388.45/warc/CC-MAIN-20230527223515-20230528013515-00102.warc.gz
CC-MAIN-2023-23
364
5
https://www.profillic.com/search?query=Liliang%20Zhang
code
Models, code, and papers for "Liliang Zhang": Detecting pedestrians has arguably been addressed as a special topic beyond general object detection. Although recent deep learning object detectors such as Fast/Faster R-CNN [1, 2] have shown excellent performance for general object detection, they have limited success for detecting pedestrians, and previous leading pedestrian detectors were in general hybrid methods combining hand-crafted and deep convolutional features. In this paper, we investigate issues involving Faster R-CNN for pedestrian detection. We discover that the Region Proposal Network (RPN) in Faster R-CNN indeed performs well as a stand-alone pedestrian detector, but surprisingly, the downstream classifier degrades the results. We argue that two reasons account for the unsatisfactory accuracy: (i) insufficient resolution of feature maps for handling small instances, and (ii) lack of any bootstrapping strategy for mining hard negative examples. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection, using an RPN followed by boosted forests on shared, high-resolution convolutional feature maps. We comprehensively evaluate this method on several benchmarks (Caltech, INRIA, ETH, and KITTI), presenting competitive accuracy and good speed. Code will be made publicly available. Sketch-based face recognition is an interesting task in vision and multimedia research, yet it is quite challenging due to the great difference between face photos and sketches. In this paper, we propose a novel approach for photo-sketch generation, aiming to automatically transform face photos into detail-preserving personal sketches. Unlike the traditional models synthesizing sketches based on a dictionary of exemplars, we develop a fully convolutional network to learn the end-to-end photo-sketch mapping. 
Our approach takes whole face photos as inputs and directly generates the corresponding sketch images with efficient inference and learning, in which the architecture is stacked with only convolutional kernels of very small sizes. To well capture the person identity during the photo-sketch transformation, we define our optimization objective in the form of joint generative-discriminative minimization. In particular, a discriminative regularization term is incorporated into the photo-sketch generation, enhancing the discriminability of the generated person sketches against other individuals. Extensive experiments on several standard benchmarks suggest that our approach outperforms other state-of-the-art methods in both photo-sketch generation and face sketch verification. This paper investigates how to extract objects-of-interest without relying on hand-crafted features and sliding-window approaches, aiming to jointly solve two sub-tasks: (i) rapidly localizing salient objects from images, and (ii) accurately segmenting the objects based on the localizations. We present a general joint task learning framework, in which each task (either object localization or object segmentation) is tackled via a multi-layer convolutional neural network, and the two networks work collaboratively to boost performance. In particular, we propose to incorporate latent variables bridging the two networks in a joint optimization manner. The first network directly predicts the positions and scales of salient objects from raw images, and the latent variables adjust the object localizations to feed the second network that produces pixelwise object masks. An EM-type method is presented for the optimization, iterating with two steps: (i) by using the two networks, it estimates the latent variables by employing an MCMC-based sampling method; (ii) it optimizes the parameters of the two networks unitedly via back propagation, with the fixed latent variables. 
Extensive experiments suggest that our framework significantly outperforms other state-of-the-art approaches in both accuracy and efficiency (e.g. 1000 times faster than competing approaches).
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986682998.59/warc/CC-MAIN-20191018131050-20191018154550-00022.warc.gz
CC-MAIN-2019-43
4,019
4
http://www.glocktalk.com/forums/showpost.php?p=19863489&postcount=2
code
Hi, it looks like I missed this class, but I have been looking for an edged weapons training in the Las Vegas area because I found some great [url=http://www.earlyvegas.com/promo_codes.html]Las Vegas Deals[/url] on hotels. I am wondering if there are any other classes that Chuck Burnett is offering in 2013, and if not, are there any other ones you would recommend? Any info would be greatly appreciated. Thanks! Last edited by T James; 01-23-2013 at 13:17..
s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246654687.23/warc/CC-MAIN-20150417045734-00039-ip-10-235-10-82.ec2.internal.warc.gz
CC-MAIN-2015-18
459
2
https://hitmarker.net/jobs/ubisoft-senior-technical-animator-1050983
code
Senior Technical Animator About Ubisoft & Shanghai Studio: Ubisoft’s 19,000 team members, working across more than 40 locations around the world, are bound by a common mission to enrich players’ lives with original and memorable gaming experiences. Their dedication and talent have brought to life many acclaimed franchises such as Assassin’s Creed, Far Cry, Watch Dogs, Just Dance, Rainbow Six, and many more to come. Ubisoft is an equal opportunity employer that believes diverse backgrounds and perspectives are key to creating worlds where both players and teams can thrive and express themselves. If you are excited about solving game-changing challenges, working with cutting-edge technologies, and pushing the boundaries of entertainment, we invite you to join our journey and help us create the unknown. Created in 1996, Ubisoft Shanghai studio is a vibrant and exciting place where our 600+ talents get opportunities to either co-develop great AAA blockbuster games, create cutting-edge online games, or produce fun mobile games. To learn more, please visit our website. As a Technical 3D Animator, you will be responsible for creating the game character rigs and developing tools for the Animation team. 
What you will do:
- Quickly learn an existing AAA game animation system and pipeline
- Train animators in workflow, tools and anything else surrounding implementation of animation assets into the game engine
- Supervise all integration of the animation assets into the game engine
- Have good knowledge of existing engines and animation packages within the industry
- Communicate with engine and animation programmers
- Provide constant technical support of the tools and engine to the animation team and regularly communicate the technical restrictions and their rationale
- Assist the design, integration and validation of animation assets
What we are looking for:
- Professional experience in a similar position, preferably with experience in AAA games
- Excellent knowledge of Maya and Maya MEL/Python, 3D Studio Max and MaxScript; good knowledge of MotionBuilder
- Excellent knowledge of human anatomy
- Good engine application and project development thinking; good ability to teach
- Able to write technical documentation
- Good capacity to communicate verbally as well as in writing
What you will also get:
- An international working environment
- FUN culture
- Lots of love from your colleagues!
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00340.warc.gz
CC-MAIN-2021-43
2,387
25
https://lists.samba.org/archive/samba-technical/2003-February/027160.html
code
Machine Account Passwords are changed on the "WRONG" server!! Brian M Hoy Brian.Hoy at opus.co.nz Fri Feb 14 03:21:22 GMT 2003 When machine account passwords are changed, they are usually updated on one of the "BDC" servers rather than the PDC server. We are solely using Samba 2.2.7 file servers (Linux and Solaris) and OpenLDAP for authentication. Our corporate network consists of 51 branch offices and 1 head office. The samba daemon at head office is the PDC. The master LDAP replication daemon also resides on the same machine. Although Samba does not have SAM replication functionality, by using LDAP replication we are achieving the same thing. In other words, all user password changes are made on the PDC, which updates the local LDAP server, which replicates to all the LDAP daemons in the branch offices, which in turn are queried by the local samba daemons. All in all it works extremely well... except for machine account passwords. These are changed on the PC's current "secure channel partner" (one of the BDCs, usually). This URL explains it in more detail: The following excerpt is from the Microsoft document above: "... Netlogon is also responsible for changing the machine account password. By default, this password is reset every seven days. The workstation sends the request to the secure channel partner. The secure channel partner passes the request to the PDC...." As any PC generally uses the same "BDC", this only causes occasional problems. On our network with 1200 PCs we get the following problems: 1. laptops on the move need to be rejoined to the domain because the machine account passwords are out of sync. 2. occasionally desktop PCs cannot authenticate against the domain and need to be rejoined too. The second point happens because the PC will _occasionally_ use a different DC to authenticate against (its secure channel partner in MS parlance). 
If it just so happens to change its machine account password with this SCP, then the machine's domain membership is broken next time it uses its "normal" SCP. I have written a Perl script which fetches the machine account details from every LDAP server on our network, figures out which one has the most recent machine account password, and then submits the change to the LDAP master so that it is replicated everywhere, thereby getting around these problems. It works, but is not ideal. A quick look at the Samba source suggests that it would not handle LDAP referrals. Am I right here? If it did, then LDAP could be configured to give a referral to the LDAP master for changes, solving the problem (at least for LDAP users). If you believe the MS document, then the Samba "BDC" should pass the machine account password change request to the PDC. That would be nice!
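The reconciliation step that Perl script performs (compare every replica's copy of a machine account and keep the freshest password) can be sketched as a pure merge function. This is a sketch in Python rather than Perl, and the attribute name is an assumption: the exact Samba LDAP attribute (e.g. pwdLastSet vs. sambaPwdLastSet) depends on the schema version in use:

```python
def freshest_machine_account(replica_entries):
    """Pick the replica holding the most recent machine account password.

    replica_entries maps an LDAP server name to that server's copy of one
    machine account entry, e.g. {"branch-01": {"pwdLastSet": 1045000000, ...}}.
    The winning copy is the one to push back to the LDAP master so that
    it replicates everywhere.
    """
    return max(replica_entries.items(),
               key=lambda item: item[1]["pwdLastSet"])
```

A real script would populate replica_entries with an LDAP search against every branch server, then write the winning entry to the master; only this selection logic is shown here.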
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513144.48/warc/CC-MAIN-20210117174558-20210117204558-00208.warc.gz
CC-MAIN-2021-04
2,806
48
https://huntly.ai/employers/developers/full-stack/python-javascript/
code
Python with its Django/Flask frameworks for back-end development; React/Vue.js/Angular for front-end development; Express.js for building and integrating APIs; storing and retrieving data with SQL and/or NoSQL databases. The list of requirements can vary depending on a particular position; the Huntly team is ready to assist you with checking candidates’ skills during our pre-screening process. Django (Python) with React or Vue.js for robust backend and interactive UIs; Flask (Python) for lightweight, flexible backend with React, Vue.js, or other JS frameworks for frontend; Tornado (Python) for scalable web servers paired with React or Angular for dynamic UIs.
How to Hire Full-Stack Developers Through Huntly:
- Post a job for free and set up the referral bonus
- Interview candidates who passed our pre-screening
- Pay the one-time bonus upon a successful hire
What Needs Is Huntly Best For?
- Unique Skillset: Huntly specializes in closing difficult tech positions that require knowledge of niche technologies such as .NET, Node.js, Angular, React, and more.
- Insufficient Internal Recruitment Resources: If your company doesn’t have a dedicated HR team or the expertise required for managing technical recruitment, Huntly will help!
- High Volume Recruitment: As an innovative external recruitment tool, Huntly will play a crucial role in meeting increased hiring requirements.
Recruitment Has Never Been so Safe!
- 3 Months Guarantee, so you are able to replace the candidate within three months.
- Free Candidate Replacement, so you can get another candidate at our cost if the hired one doesn’t handle their work well.
- Money Back Promise, so you get a risk-free experience due to our refund policies.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00360.warc.gz
CC-MAIN-2024-10
1,688
20
https://aws.amazon.com/blogs/apn/tag/consolidated-coronavirus-research-hub/
code
AWS Partner Network (APN) Blog Tag: Consolidated Coronavirus Research Hub Building a Knowledge Graph for Scientific Research with MarkLogic and AWS Organizations that prioritize data search and discovery are more productive and innovative. Deploying an intelligent search and discovery system requires organizations to change the way they integrate and curate data using semantic graphs (or knowledge graphs) to build rich search and discovery experiences. MarkLogic Data Hub Service has built-in semantic search capabilities, allowing you to quickly build knowledge graph-based applications.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645089.3/warc/CC-MAIN-20230530032334-20230530062334-00616.warc.gz
CC-MAIN-2023-23
592
4
https://lanailens365.wordpress.com/2011/08/27/day-149-ministry-of-silly-walks/
code
Monty got her name by being the chicken that quite literally does a walk almost exactly identical to the one featured in the Monty Python “Ministry of Silly Walks” skit. I have tried in vain to capture a picture of her in full silly stride, but to date have failed to do so. I am not sure if you can tell or not, but her left leg is noticeably smaller than her right one. This was a birth defect, and we have been assured by the previous owner that she was born this way, and that her silly walk is through no fault of ours. She has what looks like a massive wound on her back just above that leg, though she is sprouting new feathers in the area and seems to be just fine. She is, I do believe, the smallest of our chickens, and when given the opportunity to free range, she rarely leaves the chicken yard. You guessed it right if you jumped to the conclusion that she has already stolen my heart with her little bitty self and oh so silly walk. I must admit, however, that we are trying to keep her silly walk from the Ministry, as I am almost one hundred percent positive she has not legally registered it.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647519.62/warc/CC-MAIN-20180320170119-20180320190119-00581.warc.gz
CC-MAIN-2018-13
1,107
1
https://www.workrockers.com/profile.php?user_id=1d6b6cd8deae78fb8fd31878bd2820a0
code
Hi, I'm a mobile and web developer with 6+ years of experience. I am an expert mobile & web developer who knows the value of time, is very hard-working, and always delivers the work on time. My motive is to make my employer happy without adding additional charges. I am definitely trying to earn a good living through my developer skills, and working for great people and making them happy through my work is my priority. Hire me with the confidence and peace of mind that comes with hiring a professional. #Expert At (7 years of experience) >>>Android/iOS App Development (Native and Hybrid) >>>Game Development with Unity 3D >>>Website Development with Ruby on Rails, Django, Node.js, CodeIgniter, WordPress and so on.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578762045.99/warc/CC-MAIN-20190426073513-20190426095513-00057.warc.gz
CC-MAIN-2019-18
746
12
http://indieiosfocus.com/issues/71
code
Great summary of the different networking options in iOS. If you are looking for a job in iOS development, understanding networking is crucial to your career. Aleksander hits on HTTP, REST, NSURLSession, and even Alamofire, breaking down how to implement each. A Crash Course on Networking in iOS by Aleksander Koko I agree totally. Though there are times you do have to eat, sleep, code, and repeat, making this a normal part of your life is a great way to become unhappy and unfulfilled. You have to have a life outside of code and that life will lead to you being a better, more rounded coder. “Eat, sleep, code, repeat” is such bullshit by Dan Kim
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250597458.22/warc/CC-MAIN-20200120052454-20200120080454-00232.warc.gz
CC-MAIN-2020-05
655
4
http://www.nlindia.com/hybrid-app-development.html
code
Hybrid app development mostly needs only one code base for multiple platforms. A hybrid mobile app developer can build the app once and then, using a bridging technology such as PhoneGap, submit it to all platforms (iPhone, Android, and Windows Phone). Reach a wider audience by combining the best of both worlds. Avoid the extra time spent developing the same app for each platform. Cross-platform apps can be distributed immediately between app stores, as they are easier and faster to develop and deploy. Also skip the costs associated with developing two different versions for Android and iOS.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424564.72/warc/CC-MAIN-20170723142634-20170723162634-00608.warc.gz
CC-MAIN-2017-30
590
2
https://serverfault.com/questions/874995/using-own-rhel-license-for-compute-engine/875573
code
I wish to create a compute engine instance with RHEL OS and then subscribe the instance with my license. Can I use my RHEL license for Compute Engine? If I can use my RHEL license, do I still need to pay for Premium image charges to Google? You can bring your own RHEL subscription to Google Compute Engine with the help of Red Hat Cloud Access feature. For more information, refer to the documentation, select the 'RHEL' tab mentioning "As an added benefit for subscribers of RH Enterprise products, Red Hat Cloud Access enables enterprise customers to migrate their current subscriptions for use on Google Compute Engine." and Red Hat Cloud access.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657163613.94/warc/CC-MAIN-20200715070409-20200715100409-00546.warc.gz
CC-MAIN-2020-29
650
3
https://www.masshist.org/database/viewer.php?item_id=4179&ft=%20Massachusetts%20Debates%20a%20Womans%20Right%20to%20Vote
code
Votes for Women: “Let the People Rule” In 1912, as the candidate of the Progressive ("Bull Moose") Party, Theodore Roosevelt proclaimed, "Let the people rule!" While Roosevelt supported voting rights for women, there remained the question of whether women would be accepted as full participants in American political life—if they truly were "people."
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00203.warc.gz
CC-MAIN-2020-50
593
7
http://www.pisces.ucsd.edu/pisces/
code
PISCES Research Program The UCSD-PISCES Research Program is located at the University of California in San Diego (UCSD) in the main La Jolla Campus. It resides within the UCSD Center for Energy Research. The members of the PISCES team contribute widely to the research field of boundary plasma science for fusion applications.
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292181.27/warc/CC-MAIN-20160823195812-00293-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
326
4
https://push-zb.helmholtz-muenchen.de/frontdoor.php?source_opus=53031&la=de
code
Median filtering is among the most utilized tools for smoothing real-valued data, as it is robust, edge-preserving, value-preserving, and yet can be computed efficiently. For data living on the unit circle, such as phase data or orientation data, a filter with similar properties is desirable. For these data, there is no unique means to define a median; so we discuss various possibilities. The arc distance median turns out to be the only variant which leads to robust, edge-preserving and value-preserving smoothing. However, there are no efficient algorithms for filtering based on the arc distance median. Here, we propose fast algorithms for filtering of signals and images with values on the unit circle based on the arc distance median. For non-quantized data, we develop an algorithm that scales linearly with the filter size. The runtime of our reference implementation is only moderately higher than the Matlab implementation of the classical median filter for real-valued data. For quantized data, we obtain an algorithm of constant complexity w.r.t. the filter size. We demonstrate the performance of our algorithms for real life data sets: phase images from interferometric synthetic aperture radar, planar flow fields from optical flow, and time series of wind directions.
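As a toy illustration of the arc distance median discussed above, here is a naive O(n²) search restricted to the sample values themselves; the paper's contribution is precisely avoiding this brute force with linear- and constant-complexity algorithms:

```python
import math

TWO_PI = 2 * math.pi

def arc_dist(a, b):
    """Geodesic (arc length) distance between two angles in radians."""
    d = abs(a - b) % TWO_PI
    return min(d, TWO_PI - d)

def arc_distance_median(angles):
    """Among the samples, return the one minimizing the total arc
    distance to all samples (naive O(n^2) variant restricted to the
    sample values, not the fast filters described in the abstract)."""
    return min(angles, key=lambda m: sum(arc_dist(m, t) for t in angles))
```

Unlike a real-valued median, this respects wraparound: in a sample such as [0.1, 0.2, 6.2], the value 6.2 is treated as a close neighbour of 0.1 (about 0.18 rad away around the circle) rather than as a far outlier on the line, which is the robust, value-preserving behaviour the abstract describes.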
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711417.46/warc/CC-MAIN-20221209144722-20221209174722-00313.warc.gz
CC-MAIN-2022-49
1,287
1
https://developer.android.google.cn/preview/behavior-changes-all?hl=ar
code
The Android 11 platform includes behavior changes that may affect your app. The following behavior changes apply to all apps when they run on Android 11, regardless of targetSdkVersion. You should test your app and then modify it as needed to support these properly, where applicable. Make sure to also review the list of behavior changes that only affect apps targeting Android 11. JobScheduler API call limits debugging Android 11 offers debugging support for apps to identify JobScheduler API invocations that have exceeded certain rate limits. Developers can use this facility to identify potential performance issues. For apps with the debuggable manifest attribute set to true, invocations beyond the rate limits will fail. Limits are set such that legitimate use cases should not be affected. In Android 11, whenever your app requests a permission related to location, microphone, or camera, your app is granted a temporary one-time permission. To learn more about this change, see the one-time permissions section on the page that discusses privacy changes related to permissions in Android 11. User choice can restrict when a permission dialog appears Android 11 discourages repeated requests for a specific permission. If the user taps Deny twice for a specific permission during your app's lifetime of installation on a device, this action implies "don't ask again". To learn more about this change, see the permission dialog visibility section on the page that discusses privacy changes related to permissions in Android 11. Background location access If your app targets Android 11, you cannot directly request all-the-time access to background location. Even if your app targets Android 10 (API level 29) or lower, users see a system dialog that includes buttons to control foreground location access. To learn more about this change, see the background location access section on the page that discusses privacy changes related to location in Android 11. 
Android 11 introduces several user-facing changes related to storage permissions, including the name of the Storage runtime permission and the contents of the dialog that explains your app's request for a storage permission. To learn more about these changes, see the permissions section on the page that discusses privacy changes related to storage in Android 11. Change to ACTION_MANAGE_OVERLAY_PERMISSION intent behavior Beginning with Android 11, these intents always bring the user to the top-level Settings screen where they can grant or revoke the permissions for apps. Any package: data in the intent is ignored. In earlier versions of Android, the intent could specify a package, which would bring the user to an app-specific screen for managing the permission. This functionality is no longer supported in Android 11. Instead, the user must first select the app they wish to grant or revoke the permission for. This change is intended to protect users by making the permission grant more intentional. All Files Access Some apps have a core use case that requires broad file access, such as file management or backup & restore operations. They can get All Files Access by declaring a special permission. To learn more about this permission, see the All Files Access section on the page that discusses privacy changes related to storage in Android 11. Non-SDK interface restrictions Android 11 includes updated lists of restricted non-SDK interfaces based on collaboration with Android developers and the latest internal testing. Whenever possible, we make sure that public alternatives are available before we restrict non-SDK interfaces. If your app does not target Android 11, some of these changes might not immediately affect you. However, while you can currently use non-SDK interfaces that are part of the greylist (depending on your app's target API level), using any non-SDK method or field always carries a high risk of breaking your app. 
If you are unsure if your app uses non-SDK interfaces, you can test your app to find out. If your app relies on non-SDK interfaces, you should begin planning a migration to SDK alternatives. Nevertheless, we understand that some apps have valid use cases for using non-SDK interfaces. If you cannot find an alternative to using a non-SDK interface for a feature in your app, you should request a new public API. To learn more about the changes in this release of Android, see Updates to non-SDK interface restrictions in Android 11. To learn more about non-SDK interfaces generally, see Restrictions on non-SDK interfaces. File descriptor sanitizer (fdsan) Android 10 introduced fdsan (file descriptor sanitizer). fdsan detects mishandling of file descriptor ownership, such as use-after-close and double-close. The default mode for fdsan is changing in Android 11: fdsan now aborts upon detecting an error; the previous behavior was to log a warning and continue. If you're seeing crashes caused by fdsan in your application, refer to the fdsan documentation.
s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875148671.99/warc/CC-MAIN-20200229053151-20200229083151-00512.warc.gz
CC-MAIN-2020-10
4,909
52
http://ubuntuforums.org/archive/index.php/t-1090869.html
code
View Full Version : [kubuntu] Is there a BASH command to minimize a running program? March 9th, 2009, 12:57 AM As the title says. Is there a CLI command to minimize a program? Thanks in advance. March 9th, 2009, 01:00 AM Not that I know of. March 9th, 2009, 01:02 AM Maybe if your Window Manager has a built-in one, but I have yet to hear of one. March 9th, 2009, 01:03 AM March 9th, 2009, 01:09 AM After the first person's reply I figured it must be a part of the environments and not a CLI tool. (Rather some programming language.)
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049277286.54/warc/CC-MAIN-20160524002117-00074-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
633
12
http://bmgf.bulbagarden.net/blogs/42306/category0/
code
Entries with no category So I believe I fricked up trying to fix photoshop and now i made it even worse and I think it won't work anymore ever again. and i forgot the password to reinstall it. [dying moose noises intensify] *flops onto the ground* So my parents ordered me a 3ds for my birthday and I should be getting it in a few days. Also I'm getting 60 dollars so I can get myself X version and either KH: Dream Drop Distance or Prof. Layton and the Miracle Mask and a friend is getting me Bioshock 1 over Steam i'm happy Just, a lot of people around me, online and in person are playing Gen 6 games and here I am stuck with gen 5 and fangames. Hopefully birthday money (and Christmas money, if it comes down to it) will be enough to get a 3DS and X version. And, hopefully procrastination and art blocks will cease to be a thing to exist so I can continue my comics. Too many fandoms I'm stumbling blindly through. Toooo many Then Dangan Rompa or whatever the heck you call it aaand now Welcome to Night Vale Whyyy must all this stuff be interesting it's distracting me from doing productive stuff. Getting myself Black 2 tomorrow finally. Finally got enough (birthday) money this weekend. Also, kind of a question. Say that I have a homestuck styled comic that will (hopefully) receive suggestions and stuff, where would I post this? Art, Roleplay area, what?
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164452243/warc/CC-MAIN-20131204134052-00093-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,365
12
http://superuser.com/questions/tagged/sorting+filter
code
I know there's a formula for this in Excel but I can't get it right. I've got a list of 7000+ names in column A, and a list of 4000+ names in column B. Some are duplicates, appearing on both column A ... I have two rules set up to sort incoming bug reports. The first is specific to a single device: Apply this rule after the message arrives sent to SMS Distribution and with ... Lately people at work have been sending with mailing lists in blind copies instead of just sending to the mailing lists. I think the intent is to prevent people from accidentally replying to all, but ... I know this is super easy, I just couldn't find it. I'd like the first row in an excel document to be the "title" field. I want that: Whenever I scroll down, this row will remain fixed I can sort ... The general question entails sorting a large Excel 2007 list to find entries that match smaller subset list. I have a couple of ideas on how to approach the problem, but I lack the technical ...
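The first question above (which of the 7000+ names in column A also appear among the 4000+ in column B) is essentially a membership test; the usual Excel answer is a COUNTIF or MATCH formula, and the same check can be sketched outside Excel in a few lines of Python (the column data below is hypothetical):

```python
# Hypothetical stand-ins for the two Excel columns.
column_a = ["alice", "bob", "carol", "dave"]
column_b = ["bob", "erin", "carol"]

# Equivalent of =COUNTIF(B:B, A1) > 0 applied down column A.
# Building a set first makes each lookup O(1), so 7000+ rows stay fast.
b_set = set(column_b)
duplicates = [name for name in column_a if name in b_set]
```

In a spreadsheet the analogous formula in a helper column would be something like `=IF(COUNTIF(B:B, A1) > 0, "dup", "")`, filled down and then sorted or filtered on.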
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00457-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
978
5
https://forum.audacityteam.org/t/question-about-creating-a-non-commercial-t-shirt-with-the-audacity-logo/99046
code
I’m not a native English speaker, so please excuse any lack of clarity in my message. I deeply admire the Audacity logo, especially its version for this release, and I wish to create a T-shirt printed with the Audacity logo for personal, non-commercial use. While I might wear this apparel in public settings such as YouTube videos, I want to ensure that I have the proper permission to do so, for peace of mind. If this question is inappropriate for this forum, I apologize. I look forward to your response.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817239.30/warc/CC-MAIN-20240418191007-20240418221007-00276.warc.gz
CC-MAIN-2024-18
510
3
https://www.tr.freelancer.com/projects/graphic-design-banner-design/blog-header-design/
code
I am designing a capture page and I want the capture page to have a personalized header with my photo on the left, my title on the right, and the graphic should contain a picture of a home in the background. The website will be called Chris Eby's, Homeowners Insurance Quotes Today, Real Quotes for Michigan Home Owners by Real People. The size should be 960 x 200 and I can provide the images I would like to be used in the project. 30 freelancers are bidding an average of $37 for this job. This is simple work that I have done recently as my hobby.. let me work on yours sir.. the fastest and cheapest price I offer.. please check your PM, I give my portfolio. I am expert in Photoshop and web things, so I do not think you will get less than anything you have expected.... please check your PM and have a look at my links...
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807084.8/warc/CC-MAIN-20171124031941-20171124051941-00167.warc.gz
CC-MAIN-2017-47
824
5
https://superuser.com/questions/987842/why-cant-i-authenticate-to-my-gmail-account-neither-from-my-printer-nor-from-m
code
I'm trying to set up a printer that will send its scans by email. I created a Gmail account specifically for the purpose. However, no matter which way I configure the SMTP settings, I cannot get it to authenticate with the Gmail SMTP server. The printer is an HP LJ M476dw with firmware datecode 20150608. According to this Google Apps support document, the proper SMTP settings are hostname smtp.gmail.com, a choice between port 465 with SSL or port 587 with TLS, and use the full email address (not just the username) as the username. I've put a screencap of the printer settings page below. There is a single checkbox for SSL/TLS, and I've tried that with both ports 465 and 587, and even port 25 with no SSL, to no avail. I get a generic authentication error: username or password invalid. Just to rule some things out (maybe the printer's SMTP client is broken or doesn't support the latest SSL protocols?), I tried to connect to the same Gmail account with my Apple Mail client (v9.0 (3094), running on Mac OS X El Capitan 10.11 (15A284), so completely modern; it should definitely support Google's most modern and secure protocols). Now when configuring an email account with Apple Mail, you have an option to configure it as an account with a default well-known provider (Gmail, AOL, Yahoo, Hotmail, etc.), and all you have to do is enter the account name, and all configuration is done for you. Doing this, I was able to access the account, both via SMTP and IMAP. But since I'm trying to verify the exact configuration settings necessary, I configured it as a generic IMAP account. Here again, Apple Mail is unable to authenticate, and I get a generic username/password invalid error. In Apple Mail the methods for authentication are numerous: External (TLS certificate), Kerberos v5 (GSSAPI), NTLM, MD5 Challenge-Response, and Password. See screenshot below. Since the Google help document did not specify an authentication protocol, I tried them all, to no avail.
Also I was not able to connect to IMAP configured manually. Now the google support document I followed was for Google Apps accounts, not standard Gmail accounts. Perhaps they use different settings? I looked and looked, but I couldn't actually find a Google support page listing the SMTP, POP, and IMAP settings necessary for connecting a generic email user agent to GMail for the standard GMail service. Only dumb walkthroughs that tell you to use your iPhone's default GMail settings. I wonder if the Google Accounts page is out of date, because using a Google Accounts account failed in exactly the same way as a standard gmail.com Gmail account. This is very strange. What is the difference between my manual IMAP and SMTP configured email account and the default provider configuration? Why can my printer not connect to GMail SMTP via a known good account + password? Why does Google not publish the requisite SMTP email client configuration settings? I found this question asked previously on Super User, but with seemingly no answer.
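The two documented port/encryption combinations can be exercised outside any mail client with Python's standard smtplib; a hedged sketch (the account name is a placeholder, and note that Google may reject plain password logins from "less secure apps" unless an app password is used, which is one likely cause of the errors described above):

```python
import smtplib

# Connection details as given in the Google support document cited above.
HOST, SSL_PORT, TLS_PORT = "smtp.gmail.com", 465, 587

def open_smtp(use_ssl: bool) -> smtplib.SMTP:
    """Open a session the two documented ways (not called here: needs network)."""
    if use_ssl:
        return smtplib.SMTP_SSL(HOST, SSL_PORT)   # port 465, implicit SSL
    client = smtplib.SMTP(HOST, TLS_PORT)         # port 587, plain connect...
    client.starttls()                             # ...upgraded via STARTTLS
    return client

# The username must be the full address, not just the bare account name.
username = "[email protected]"  # placeholder account
print(HOST, TLS_PORT, username)
```

After open_smtp(...), a real run would call client.login(username, password); if that raises SMTPAuthenticationError with a known-good password, the problem is on the account side (2-step verification / app passwords), not in the host or port settings.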
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00488.warc.gz
CC-MAIN-2023-06
2,986
7
http://heartofbalance.blogspot.com/2012/02/nuclear-vs-nuclear-vs-nuclear.html
code
This needs serious attention. The other two options of burying it in a big hole and 'Moxing' it have few advantages and many negatives. At last there's something of a technical fix in actuality rather than on the horizon. It should be supported and encouraged by all of us...vigorously. Nuclear vs Nuclear vs Nuclear
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583835626.56/warc/CC-MAIN-20190122095409-20190122121409-00202.warc.gz
CC-MAIN-2019-04
316
2
https://www.baqend.com/about.html
code
Building a faster web.

What we do

We provide a comprehensive platform for developers to build fast and scalable apps in less time. The Baqend GmbH (founded in July 2014) develops the Backend-as-a-Service platform Baqend, which leverages new research results to achieve unique benefits. The platform empowers customers to develop data-driven websites and mobile apps with higher productivity while at the same time guaranteeing unprecedented performance in terms of response times and scalability. The platform, available as both software and a cloud service, targets indie developers, startups, agencies, e-commerce providers and software companies. Page load times have an immense impact on user behavior and business metrics. Amazon, for instance, found that a mere 100 milliseconds lost during page load reduces revenue by 1%. Baqend’s vision therefore is to enable developers to build applications with loading times below human perception. Through 5 years of intensive research and development at the database research group at the University of Hamburg (project "Orestes"), we were able to derive database and backend techniques to solve this problem. The technical core of this research is the Cache Sketch method, which we employ to globally guarantee page load times more than a factor of 2.5 below those of comparable platforms. In our approach, expiration-based caches (e.g. browser caches and proxies) are kept consistent through a Bloom-filter-based data structure, whereas invalidation-based caches (e.g. content-delivery networks) are proactively updated. Baqend was initially supported by the Federal Ministry of Economic Affairs and is now funded by a program for innovation-driven startups run by the city of Hamburg. For press inquiries please email: [email protected]

Who we are

Baqend is a startup that has its roots in the database group of the University of Hamburg. We have a strong background in databases, distributed systems and web technologies.
CEO, Research, Founder
CTO, Development, Dev-Ops, Founder
Development & Security, Founder
CFO, Sales & Marketing, Founder
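The Cache Sketch idea described above rests on Bloom filters: a compact structure that can say "definitely not stale" or "possibly stale" about a URL. A toy Python sketch (bit-array size and hashing scheme are illustrative only, not Baqend's actual implementation):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: membership test with false positives, never false negatives."""
    def __init__(self, bits: int = 1024, hashes: int = 4):
        self.bits, self.hashes = bits, hashes
        self.array = 0  # an int used as a bit array

    def _positions(self, key: str):
        # Derive `hashes` independent positions from one cryptographic hash.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.bits

    def add(self, key: str):
        for p in self._positions(key):
            self.array |= 1 << p

    def might_contain(self, key: str) -> bool:
        return all((self.array >> p) & 1 for p in self._positions(key))

# A server could ship a filter of recently-changed URLs to the browser,
# which then revalidates only the URLs the filter flags as possibly stale.
stale = BloomFilter()
stale.add("/home.html")
print(stale.might_contain("/home.html"))   # True: added keys are always found
print(stale.might_contain("/about.html"))  # expected False, with small false-positive odds
```

The asymmetry is what makes this fit expiration-based caches: a "no" answer lets the cached copy be used safely, while a "yes" only costs an extra revalidation round trip.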
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121865.67/warc/CC-MAIN-20170423031201-00183-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,124
18
https://build.opensuse.org/package/show/SUSE:ALP:Rings:0-Bootstrap/tar?expand=1
code
GNU implementation of tar ((t)ape (ar)chiver). This package would normally also include the program "rmt", which provides remote tape drive control. Since there are compatible versions of 'rmt' in both the 'star' package and the 'dump' package, we didn't put 'rmt' into this package. If you are planning to use the remote tape features provided by tar, you also have to install the 'dump' or the 'star' package.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00474.warc.gz
CC-MAIN-2023-06
396
6
https://community.jmp.com/t5/Discussions/How-to-Remove-a-Distribution-Comparison/td-p/231823
code
Hi. I'd like to know if there's a way to remove a "Compare Distributions" from an existing Distribution platform output using JSL. For example:

Names Default To Here(1);
dt = Open("$SAMPLE_DATA/Big Class.jmp");
obj = Distribution(

//These are things I've tried to remove the CmpDist for the Height Distribution, but did not work:
obj << Remove Fit;
obj << (Fit Handle << Remove Fit);
obj << Fit Distribution(0);

I want to completely remove the "Compare Distributions" outline box and its contents, along with the fit curve in the plot. IOW, as if I'd never issued the Fit Distribution("All") command at all for that distribution. IOOW, as if I went to the red down arrow for the Height distribution > Continuous Fit and unchecked "All". I have not yet attempted to delete the display boxes from the report window via brute force. Hoping there's a simpler option?
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400227524.63/warc/CC-MAIN-20200925150904-20200925180904-00314.warc.gz
CC-MAIN-2020-40
862
10
https://www.blograby.com/playing-nicely-with-other-apis/
code
Many interesting APIs that started out as part of the HTML5 specification were eventually spun off into their own projects. Others have become so associated with HTML5 that sometimes it’s hard for developers (and even authors) to really tell the difference. In this chapter, we’ll talk about those APIs. We’ll spend a little time working with the HTML5 history API, and then we’ll make pages on different servers talk with Cross-document Messaging. Then we’ll look at Web Sockets and Geolocation, two very powerful APIs that can help you make even more interactive applications. We’ll use the following APIs to build those applications:

History: Manages the browser history. [C5, S4, IE8, F3, O10.1, IOS3.2, A2]
Cross-document Messaging: Sends messages between windows with content loaded on different domains. [C5, S5, F4, IOS4.1, A2]
Web Sockets: Creates a stateful connection between a browser and a server. [C5, S5, F4, IOS4.2]
Geolocation: Gets latitude and longitude from the client’s browser. [C5, S5, F3.5, O10.6, IOS3.2, A2]
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00137.warc.gz
CC-MAIN-2021-49
983
7
https://forum.duplicati.com/t/removing-files-from-backup/13014
code
Yes, I agree with the @PeteM description and the quoted help text. The 2017 discussion is a bit dated. Test smart retention policy #3008 We had some good discussions about how the feature works, we introduced that backups outside the specified time frame get deleted, and we have had some more weeks with no complaints that anything was seriously broken. This should be sufficient to let this optional feature pass to “experimental”. Feature/issue 3008 retention optimizations #3026 As discussed in #3008: --retention-policy now deletes backups that are outside of any time frame. No need to specify --keep-time as well. Help text was changed accordingly. --retention-policy is now mutually exclusive with the other retention options --keep-version. Help text was changed accordingly. --retention-policy now complies with --allow-full-removal, deleting even the most recent backup if it’s outside of any time frame. It’s automated if you can express it with one of the retention policies, e.g. you are willing to say how long the backup should retain old versions of files to guard against unwanted deletions or changes. If it’s noticed soon that some files got backed up that you don’t want, an easier and safer-than-purge way is to delete the entire unwanted backup version right then with The DELETE command. The latest version is always 0. This method isn’t selective, so make sure that you leave enough recent backups to feel comfortable for other files, but from a storage point of view it might be more attractive than tying up space for a long time. Purge is fast too, but if you mis-aim a purge-these-files-in-all-versions you can seriously harm a backup.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057033.33/warc/CC-MAIN-20210920070754-20210920100754-00266.warc.gz
CC-MAIN-2021-39
1,657
14
https://eplus.com/services/training-services/artificial-intelligence-machine-learning-training
code
Learn about how AI/ML/DL helps you upskill your own teams, can deliver additional support via the addition of a virtual workforce, and helps drive toward more automated and intelligent decision making. Our courses explore how modern AI technologies can be leveraged to unleash powerful capabilities that enable applications to be more engaging, secure, and contextually aware—and make highly accurate predictions with the complex data that surrounds us.

In this session we will study SKIL and its importance in data science and workflows. We’ll cover how you can train your Keras model, as well as register it in SKIL, deploy it in a production-grade service, and get predictions. Learn more

In this session we will dive into Deep Learning and focus on its applications in the life sciences today. We’ll examine key general use cases as well as focus on tools and platforms, and wrap up with a live demo. Learn more

The abundance of data and affordable cloud scale has led to an explosion of interest in Deep Learning. This course introduces Deep Learning concepts and the TensorFlow library to students. Learn more

In this two-day course we will examine basic Kubeflow concepts, Machine Learning workflow lifecycles, and more. Our focus will be on how to use Kubeflow to take Machine Learning models to production in a scalable and portable way. Learn more

This course teaches doing Machine Learning at scale with the popular Apache Spark framework. We assume no previous knowledge of Machine Learning—we teach popular Machine Learning algorithms from scratch. Learn more
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476432.11/warc/CC-MAIN-20240304065639-20240304095639-00726.warc.gz
CC-MAIN-2024-10
1,569
6
https://developpaper.com/overview-of-browser-architecture/
code
Overview of browser architecture

A browser is essentially an application running under the computer's operating system. In short, the operating system can run multiple tasks at the same time. True parallel execution of multiple tasks is only possible on a multi-core CPU; in practice, because the number of tasks far exceeds the number of CPU cores, the operating system automatically schedules the tasks onto the cores in turn, so that the tasks execute alternately. Task 1 executes for 0.01 seconds, then the CPU switches to task 2 for 0.01 seconds, then to task 3 for 0.01 seconds, and so on. On the surface the tasks execute alternately, but because the CPU is so fast, it feels as if they all run at the same time.

When code is not running, we call it a program; once it runs, it becomes a process. A process is the unit of system resource allocation: simply put, a running program. When a process is started, the system allocates it an address space (each process has its own independent address space, so processes do not affect each other) and establishes a data table to maintain code segments, stack information and data segments, which occupies considerable resources. Processes communicate with each other through inter-process communication (IPC).

A thread is the unit of system resource scheduling and the smallest unit of program execution. It can be understood as a path of code execution within a process: it exists inside a process and can execute any part of the program, and the system's scheduling algorithm determines when it runs. Threads share the data of their process and use the same address space, so the CPU switches between threads quickly, faster than between processes. Communication between threads is fast because they share global variables and static data within the same process.

1. User interface
The menu bar, address bar, forward/backward buttons, bookmark directory, etc.
In the browser, all places except the page display window belong to the user interface.

2. Browser engine
The browser engine is the core of communication between components. It transfers instructions between the user interface and the rendering engine: it passes the requested web address and the user's browser operations (such as refresh, forward, backward) from the user interface to the rendering engine, and at the same time provides the user interface with various information, such as error prompts and resource download progress. In addition, it can read and write data in the client's local cache.

3. Rendering engine
The rendering engine is responsible for displaying the requested content. For example, if the requested content is an HTML file, it parses the HTML, CSS and other information in the file and renders the web content. Internally, the rendering engine includes an HTML parser, a CSS parser and so on.

4. Networking
Handles network requests, such as HTTP requests for web pages, image resources, etc.

6. UI backend
The user interface described above is what is shown to the user. Behind it is the browser's graphics library, used to draw the basic controls in the browser window, such as input boxes, buttons, radio buttons, combo boxes and windows. The visual effects differ between browsers, but the functionality is basically the same.

7. Data persistence
Manages user data, storing various data associated with the browsing session on the hard disk, such as bookmarks, cookies, cache and preferences, which can be accessed through the API provided by the browser engine.
A browser is essentially software (an application program) that uses a multi-process architecture: when it runs, it creates multiple processes that work together to keep the program running normally.

Process classification:
- Browser process (main process): manages user interaction (such as the address bar and bookmarks), subprocess management, file storage, etc.; it is responsible for the work outside the tabs.
- Rendering process (sandbox mode): responsible for page rendering, JS script execution, page event handling, etc.; multiple rendering processes (tab pages) can be open at the same time. Both the rendering engine and the JS engine run inside this process:
  - Rendering thread (engine)
  - JS thread (engine)
- Network process: provides the network download function for the rendering processes and the browser process.
- GPU (graphics processor) process: handles GPU tasks and is responsible for 3D rendering.
- Plug-in process: manages browser extensions, etc.
- Other processes
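The shared-address-space point above is easy to demonstrate in code: every thread in a process sees writes to the same module-level variable, which is exactly why sharing is fast but needs synchronization. A minimal Python sketch (thread and iteration counts are arbitrary):

```python
import threading

counter = 0                # global state, shared by every thread in this process
lock = threading.Lock()    # sharing is cheap, but concurrent writes need a lock

def work(n: int):
    global counter
    for _ in range(n):
        with lock:         # without the lock, some increments could be lost
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: all four threads updated the very same variable
```

Had each worker been a separate *process* instead, each copy of `counter` would live in its own address space, and the results would have to be sent back explicitly via IPC.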
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304810.95/warc/CC-MAIN-20220125100035-20220125130035-00104.warc.gz
CC-MAIN-2022-05
4,581
28
https://forums.autodesk.com/t5/inventor-forum/ipart-edit-via-spreadheet-not-updating-back-to-inventor/m-p/3445259?nobounce=
code
Using Inventor 2010 and Office 2010. Have already done the registry trick so Excel launches. I have created an Ipart and added a couple rows in Inventor. I can switch between them fine. I would like to add a lot more to this list by using Excel. I can right click the table and choose Edit Via Spreadsheet. Excel launches just fine with a REALLY long garbled filename.xls [compatibility mode] in the title bar of the window. The full text of the parameters is visible. The problem is, regardless of what change I make when I hit save and close the workbook - it doesn't go back to inventor. In inventor it's hanging. If i hit ESC a couple times it returns to usability, but any changes made via excel aren't added. Is there something special that has to be done after the fact to get it to take? This is driving me nuts!
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170741.49/warc/CC-MAIN-20170219104610-00544-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
1,270
16
http://www.verycomputer.com/419_1b3e7f3cb10236c0_1.htm
code
I have 2 servers: one server acting as a DC, and a second server running Exchange. Every night I have the backup software stop the Exchange services to back up the stores, but when the backup goes to restart Exchange I get global catalog errors saying that it cannot find a global catalog server. But some nights this operation works perfectly. So I was thinking that maybe I should make the Exchange server a global catalog server so it didn't have to go find the other server. Any thoughts on this?
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400244231.61/warc/CC-MAIN-20200926134026-20200926164026-00110.warc.gz
CC-MAIN-2020-40
539
8
https://www.be-happy-together.com/blog/archive/2022/05
code
I am the 13th generation of the Arima family. In the family tree, it is connected to Fujiwara no Kamatari. So, most people in Japan are connected to Fujiwara no Kamatari. I have parents, and they also have parents. The number of human ancestors doubles with each generation back: somewhere in the teens of generations, about 2,000 people; in the twenties of generations, about 2 million people; in the thirties of generations, about 2 billion people. Even starting from one person, I have about 2 billion ancestors roughly 1,000 years ago. It is said that the population of
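The doubling claim can be checked directly, since each generation back doubles the count of ancestor slots (2 parents, 4 grandparents, and so on):

```python
# Ancestor slots n generations back = 2**n.
# ~30 generations at roughly 33 years each spans about 1,000 years.
for gen in (11, 21, 31):
    print(gen, 2 ** gen)
# 11 generations -> 2,048          (about 2 thousand)
# 21 generations -> 2,097,152      (about 2 million)
# 31 generations -> 2,147,483,648  (about 2 billion)
```

Of course the real number of distinct ancestors is far smaller, because the same people fill many slots (a phenomenon known as pedigree collapse), which is exactly why so many people end up connected to one historical figure.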
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337516.13/warc/CC-MAIN-20221004152839-20221004182839-00622.warc.gz
CC-MAIN-2022-40
470
10
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms166475(v=sql.90)
code
Subscription Processing Architecture After events are collected, Notification Services can process subscriptions, generating notifications. Evaluating subscriptions against events is the job of the generator. To generate notifications, the application developer creates one or more rules for the application. These rules are written as Transact-SQL queries that specify how events and subscriptions relate, as well as any other conditions that must be met to generate a notification. In a simple application, when the generator fires a rule, the application evaluates all available subscriptions against the current batch of events visible in an event view. When a single event matches a single subscription, the notification generator creates a notification. This notification contains data about the event; it also references data about the subscriber, the subscriber device, sometimes the subscriber locale, and other information required for distribution. The notification is not sent as soon as it is created. Instead, the generator writes the notification to an internal notification table. When a batch of notifications is ready, the notifications are formatted and distributed by the distributor. If an application supports scheduled subscriptions, when the generator processes the scheduled subscriptions, it sees only subscriptions that are due for evaluation. For example, if the generator runs every 15 minutes, at 8:00 A.M. the generator evaluates all subscriptions that were scheduled between 7:45 A.M. and 8:00 A.M. If an application uses historical data, the application can store this data along with event and subscription information in a supplemental table, called a chronicle, and can then use this data to produce notifications. The generator runs according to a schedule defined by the quantum duration in the application definition. The quantum determines how often the generator wakes up and fires rules. 
A short quantum period causes the generator to run more frequently and consume more system resources. A long quantum interval causes a longer delay between the arrival of events and the generation of notifications. The work of the generator is determined by the rules defined for the application. You can create the following types of rules: - Event chronicle rules store or update the history of events in chronicle tables defined by the application developer. Each time the generator runs, it fires this type of rule first. - Event rules generate notifications for event-driven subscriptions. This type of rule runs after the event chronicle rule if an associated batch of events is available. This type of rule can also manage chronicle tables. - Scheduled rules generate notifications for scheduled subscriptions. This type of rule runs after the event chronicle rule for any related subscriptions that are due to be processed. This type of rule can also manage chronicle tables. Rule Action Types Event rules and scheduled rules specify the action to perform when the rule is fired. Each action is a Transact-SQL query that defines a unit of work for the generator to perform. These queries can generate notifications, but can also do other work such as maintaining chronicle data. Event and scheduled rules can use simple, parameter-based actions or more flexible condition actions: - Simple actions are Transact-SQL queries that fully define the notification generation query, including all WHERE clauses. Simple actions obtain the WHERE clause expressions from subscription and event data. For example, a weather application might allow subscribers to specify a city for weather notifications. The simple action query would then use a WHERE clause such as WHERE subscription.city = event.city, joining event and subscription data on city names. When a rule uses a simple action, subscribers provide parameters for the query, such as the city name. 
- Condition actions allow subscribers to fully define their query search conditions. For example, you could expose the event data schema in your subscription management interface, and allow subscribers to create their own search conditions over this data, such as WHERE event.State = 'Washington' AND event.LowTemperature < 0. Your subscription management interface can make writing these search conditions as simple as selecting columns and operators from list boxes and then entering values in text boxes. Simple actions produce a limited set of search conditions for the generator to evaluate, and therefore typically perform better than condition actions. Condition actions are more powerful, but have the overhead of evaluating more search conditions for the event or scheduled rule.
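The simple-action join described above (WHERE subscription.city = event.city) boils down to matching an event batch against a subscription table on one parameter. A Python sketch with invented event and subscription records, standing in for the Transact-SQL query:

```python
# Hypothetical event batch and subscription table for a weather application.
events = [{"city": "Seattle", "low_temp": -3},
          {"city": "Boise",   "low_temp": 5}]
subscriptions = [{"subscriber": "ann", "city": "Seattle"},
                 {"subscriber": "bob", "city": "Boise"},
                 {"subscriber": "cho", "city": "Seattle"}]

# Simple action: join subscriptions to events on the city parameter,
# producing one notification row per (event, subscription) match.
notifications = [{"to": s["subscriber"], **e}
                 for e in events
                 for s in subscriptions
                 if s["city"] == e["city"]]
print(len(notifications))  # 3 matches: two for Seattle, one for Boise
```

A condition action would instead let each subscriber supply the whole predicate (e.g. a per-subscription filter over every event column), which is why it is more flexible but more expensive to evaluate.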
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00264.warc.gz
CC-MAIN-2018-17
4,672
21
https://bustatech.com/change-window-7-logon-screen/
code
Today, I am going to show you how to change the Windows 7 Logon Screen. The default Windows 7 Logon Screen (I can’t get a print screen of my own Logon Screen, so I got this image from Google Images): To use a custom Logon Screen background, you will need to edit the Registry. Click the Start button and type “regedit”, then double-click “regedit.exe” to open Registry Editor. In the Registry Editor, open HKEY_LOCAL_MACHINE > SOFTWARE > Microsoft > Windows > CurrentVersion > Authentication > LogonUI > Background, and open OEMBackground. Change the Value data to 1. If you can’t find OEMBackground under that key, you will need to create a new REG_DWORD called “OEMBackground” under it, and set the Value data to 1. Now close Registry Editor and go to C Drive > Windows > System32 > oobe (C:\Windows\System32\oobe). Create a folder called “info” under the “oobe” folder. Skip this step if the “info” folder already exists. Inside the “info” folder, create a folder called “backgrounds”. Skip this step if the “backgrounds” folder already exists. Now you can put the desired logon screen background JPG file in this folder, and rename it to “backgroundDefault.jpg”. Also, you have to make sure that this file does not exceed 256 KB in size. For my PC, I use a blurred print screen of my desktop as the logon screen background, which looks kind of like frosted glass at the logon screen before it reaches my desktop. This is my desktop print screen: And I apply the blur effect to the image above: And I use the blurred image as the Logon Screen background (I captured this image using my N73; I can't find a way to print-screen the Logon Screen): Now change yours!
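The registry side of the walkthrough can be captured in a .reg file, which is handy for applying the change to several machines; a sketch using the same key and value described above (double-check the path on your own machine before merging):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\Background]
"OEMBackground"=dword:00000001
```

Merging this file (double-click, or regedit /s file.reg) creates the DWORD if it is missing and sets it to 1, matching the manual steps above; the image still has to be placed under oobe\info\backgrounds by hand.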
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887054.15/warc/CC-MAIN-20180118012249-20180118032249-00762.warc.gz
CC-MAIN-2018-05
1,690
12
http://stackoverflow.com/questions/24475302/notsupportedexception-for-form-controls-after-receiving-data-with-serial-port
code
I am developing a program for Windows CE with Visual Studio 2008. I am using the SerialPort DataReceived event to get data, but after receiving the data (a string), when I try to set it to a TextBox's or Label's Text property, a NotSupportedException is thrown from TextBox.Text. Instead, if I set a local variable with the received data, it works without any problem. An unhandled exception of type 'System.NotSupportedException' occurred in System.Drawing.dll. I vaguely remember that years ago, when I worked with sockets, I had such a problem, and it was related to threads! Can anyone help me with this?
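The memory about threads is on the right track: WinForms controls may only be touched from the thread that created them, and the DataReceived event fires on a worker thread, so the handler has to marshal the update back to the UI thread (in .NET this is typically done with Control.Invoke). The general pattern, sketched in Python with a queue-draining thread standing in for the UI message loop (all names here are illustrative):

```python
import queue
import threading

ui_queue: "queue.Queue" = queue.Queue()  # stands in for the UI message loop
label_text = []                          # stands in for Label.Text updates

def data_received(payload: str):
    # Worker thread: do NOT touch the widget here; post the update instead.
    ui_queue.put(payload)

def ui_pump():
    # "UI thread": the only place widget state is ever mutated.
    while True:
        item = ui_queue.get()
        if item is None:  # shutdown sentinel
            break
        label_text.append(item)

ui = threading.Thread(target=ui_pump)
ui.start()
worker = threading.Thread(target=data_received, args=("OK\r\n",))
worker.start()
worker.join()
ui_queue.put(None)  # shut the pump down after the worker finished
ui.join()
print(label_text)   # the text arrived, but was applied only on the "UI" thread
```

In the WinForms version the queue and pump are provided for you: the handler calls something like textBox.Invoke(...) with a delegate that performs the `Text` assignment on the UI thread.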
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463453.54/warc/CC-MAIN-20150226074103-00182-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
586
4
https://vciba.springeropen.com/articles/10.1186/s42492-019-0020-y
code
- Original Article - Open Access Scalable point cloud meshing for image-based large-scale 3D modeling Visual Computing for Industry, Biomedicine, and Art volume 2, Article number: 10 (2019) Image-based 3D modeling is an effective method for reconstructing large-scale scenes, especially city-level scenarios. In the image-based modeling pipeline, obtaining a watertight mesh model from a noisy multi-view stereo point cloud is a key step toward ensuring model quality. However, some state-of-the-art methods rely on the global Delaunay-based optimization formed by all the points and cameras; thus, they encounter scaling problems when dealing with large scenes. To circumvent these limitations, this study proposes a scalable point-cloud meshing approach to aid the reconstruction of city-scale scenes with minimal time consumption and memory usage. Firstly, the entire scene is divided along the x and y axes into several overlapping chunks so that each chunk can satisfy the memory limit. Then, the Delaunay-based optimization is performed to extract meshes for each chunk in parallel. Finally, the local meshes are merged together by resolving local inconsistencies in the overlapping areas between the chunks. We test the proposed method on three city-scale scenes with hundreds of millions of points and thousands of images, and demonstrate its scalability, accuracy, and completeness, compared with the state-of-the-art methods. 3D modeling of large-scale scenes has attracted extensive attention in recent years. It can be applied in many ways such as virtual reality, urban reconstruction, and cultural heritage protection. Nowadays, there are many techniques for obtaining the point cloud of large scenes; the laser-scanner-based and image-based methods appear to be the most widely used. Terrestrial laser scanners can efficiently obtain billions of points [1,2,3]. 
The image-based method takes multi-view images as the input, and produces per-pixel dense point clouds using the structure-from-motion (SfM) and multi-view stereo (MVS) algorithms [4,5,6,7]. For city-scale scene reconstructions, the image-based modeling approach is more convenient and cost-effective, because of the rapid development of drones and oblique photography. However, noise and outliers are unavoidably included in the MVS point cloud. Thus, extracting a watertight mesh model from noisy MVS point clouds is a key step toward ensuring the 3D model’s quality. Surface reconstruction from point clouds has been extensively researched in the field of computer graphics, and there are various reconstruction methods in terms of the input point clouds. The Poisson surface reconstruction (PSR) (such as refs. [8,9,10]) is a popular point meshing algorithm. It frames the surface reconstruction as a spatial Poisson problem, defines an indicator function to represent the surface model, and uses the points and estimated normal vectors to obtain the solution of the function by solving the Poisson equation. Finally, the approximate surface model with the entity information is obtained by extracting the isosurface directly. The PSR is a global optimization method, and the reconstructed mesh model based on it is watertight, with detailed characteristics. Another traditional surface reconstruction method is marching cubes, which uses the divide-and-conquer strategy. It fits the surface into a cube, and processes all the cubes sequentially. For each cube, the surface intersections are identified via linear interpolation, and the inner isosurface is approximated by triangles. Finally, a polygonal mesh can be extracted. There are many variations of this method (such as refs. [12,13,14,15]). In addition, there are several image-based methods for surface reconstruction.
One of the most important methods is based on the Delaunay triangulation, a global optimization algorithm and the basis for several other methods (such as refs. [5, 17,18,19,20,21,22,23,24,25]). This approach considers the inevitable noise and outliers in the MVS point cloud and exploits the visibility information of cameras, thereby producing a better surface than the traditional approaches. As a state-of-the-art algorithm in image-based surface reconstruction, ref. defines surface reconstruction as a global optimization problem, and obtains a complete result. However, as the size of the point cloud increases, solving this global problem consumes so much time and memory that it impedes the efficiency of the computer. This study aims to solve this challenge. When applied to large-scale point clouds, the traditional approaches encounter bottlenecks due to the drastic increase in time and memory. Some studies have been conducted to address these problems. Using the marching cubes and the results obtained in ref. , Wiemann et al. handled large-scale data using an octree-based optimized data structure and a message-passing-interface (MPI)-based distributed normal estimation provided by the Las Vegas surface reconstruction toolkit that can assign the data to a computing cluster . They incorporated parallelization into it, and proposed a grid-extrusion method to replace the missing triangles by adding new cells dynamically. Subsequently, Wiemann et al. used a collision-free hash function in place of the octree structure to manage the voxels in a hash map to obtain better results. This function can instantaneously identify the adjacent cells under certain conditions. Wiemann et al. , when handling the data, serialized them into geometrically related chunks; the partitions were then sent to the slave nodes to be rebuilt in parallel. However, this method may have the undesirable effect of generating more triangles than necessary in the mesh. Gopi et al.
proposed a unique and fast projection-based method to incrementally develop an interpolatory surface. Although their approach has linear-time performance, it cannot effectively handle sharp curvature variation. Marton et al. circumvented some of the challenges that the projection-based method cannot handle. Their method is based on incremental surface growing , an approach that does not require interpolation, and can preserve all the points. However, their approach is greedy, and it is not guaranteed to obtain the same result as the global optimal solution. More recently, Li et al. proposed a new approach whereby the data is sampled and reconstructed based on the witness complex method , using the original data as the constraint. After sampling, although the size of the data may be smaller, it is difficult to approximate the sampling rate for the different datasets, which affects the final reconstruction result. Ummenhofer and Brox and Fuhrmann and Goesele have also researched large-scale reconstructions. The former proposed a global energy cost function, and they extracted the surface by conducting energy minimization on a balanced octree. However, this method does not solve the scale problem that characterizes large-scale reconstruction due to its global formulation. The latter proposed a local approach that is parameter-free, and applicable to large, redundant, and potentially noisy point clouds. However, this local approach will generate many gaps, and cannot fill larger holes. Recently, Mostegel et al. proposed a scalable approach that can process enormous point clouds. They used an octree to divide the data, and run the meshing method locally. The final surface can be obtained by extracting overlaps and filling the holes with a graph-cut algorithm. This method is able to obtain a watertight mesh for extremely large point clouds.
However, the octree structure necessitates several repetitions of the calculation to obtain enough overlaps, thereby increasing time consumption and memory usage. To circumvent the limitations of current state-of-the-art methods, we propose a scalable point-cloud meshing approach that can efficiently process city-scale scenes based on MVS points with minimal memory usage. As shown in Fig. 1, we first divide the entire scene into several chunks with overlapping boundaries along the x and y axes, following which we perform the Delaunay-based optimization to extract the mesh for each chunk in parallel. Finally, the local meshes are merged by resolving the inconsistencies in the overlapping areas between the chunks. The main contributions of this study are as follows: We propose a practicable and efficient scalable meshing approach to handling MVS points with minimal and adjustable memory that can obtain a reconstructed surface similar to that generated by the global-based method . We develop a region-partition method that can divide the scene into chunks with overlapping boundaries, each chunk being compatible with the computer memory. In this method, each overlapping grid is calculated two or four times, thus eliminating some redundant computations in ref. . In this study, we deal with a large-scale point cloud computed from images using the SfM and MVS algorithms. Without any auxiliary sensor information, the point cloud generated from the SfM and MVS lies in an arbitrary coordinate system. However, for outdoor scenes, it can easily be transformed according to the geographical coordinates using the camera’s in-built GPS information, or using ground control points for greater precision. This study mainly focuses on outdoor city-scale scenes; thus, we assume that the MVS point cloud has already been geo-referenced. Therefore, it is reasonable to partition the scene on the ground plane (x-y plane), but not along the vertical axis (z-axis).
The pipeline of the proposed method is shown in Fig. 2; it has three main steps: region partition, local surface reconstruction using Delaunay-based optimization, and surface merging. All three steps are detailed in the following subsections. Region partitioning is a straightforward strategy for solving memory limitation problems by partitioning large point clouds into chunks, and processing each one individually before merging them. Our point-cloud partitioning process adapts the approach of Mostegel et al. by introducing region partitioning. In ref. , they divide the point cloud into voxels managed by an octree structure, and run local computation on all the voxel subsets to extract the surface hypotheses. However, their process inevitably results in the repetition of many facets. Consequently, for large-scale scenes, there will be numerous voxels, and the computation on a voxel will be repeated many times, leading to redundancy. To circumvent this limitation, we propose region partitioning. We first divide the point cloud into regular grids on the x-y plane; each grid contains all the points whose x and y coordinates are within it. The grid is treated as the smallest unit. Given the maximum number of points that a single computer node can handle, the challenge is to partition the grids accordingly into chunks. Here, we extract the grids along the x and y axes, and the scene can be divided into portions. The extracted grids will be incorporated into their adjacent parts as boundaries; extraction will be performed repeatedly and adjusted until the number of points in each part falls below the maximum we set (designated Nmax). Finally, we can obtain a group of chunks with overlapping boundaries and a limited number of points that will be processed in parallel (Fig. 3).
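The extract-and-adjust partitioning loop above can be imitated with a simplified recursive variant (a sketch, not the paper's exact procedure: here each split duplicates the grid strip at the cut position into both halves as the overlapping boundary, and all names are our own):

```python
import numpy as np

def partition(points, delta, n_max):
    """Split a geo-referenced point cloud into overlapping chunks on the
    x-y plane so that each chunk holds at most n_max points.

    points : (N, 3) array of MVS points
    delta  : grid cell size (the experiments in the paper use 6 m)
    n_max  : per-chunk point budget (Nmax in the text)
    Returns a list of index arrays; adjacent chunks share a one-grid-wide
    boundary strip.
    """
    cells = np.floor(points[:, :2] / delta).astype(np.int64)  # grid index per point

    def split(idx):
        if idx.size <= n_max:
            return [idx]
        c = cells[idx]
        # cut along the axis with the larger grid extent
        axis = 0 if np.ptp(c[:, 0]) >= np.ptp(c[:, 1]) else 1
        cut = int(np.median(c[:, axis]))
        left = idx[c[:, axis] <= cut]   # the cut row/column belongs to
        right = idx[c[:, axis] >= cut]  # both halves -> overlap boundary
        if left.size == idx.size or right.size == idx.size:
            return [idx]  # cannot shrink further; accept an oversized chunk
        return split(left) + split(right)

    return split(np.arange(len(points)))
```

In this simplified variant a boundary strip is shared by the two chunks on either side of a cut; the paper's extraction of full grid lines along both the x and y axes is what yields the two- or four-fold duplication relied upon later during merging.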
Delaunay triangulation and minimum s-t cut We run the Delaunay-based optimization algorithm locally on each chunk to obtain the local surfaces, after which the local mesh is cleaned to obtain more consistent local surfaces. Local Delaunay-based surface computation The local surface-reconstruction algorithm is based on the method by Vu et al. . First, the Delaunay triangulation is performed using the point cloud. Then, the visibility of the points and the quality of the surface are used to build the energy function, whose global minimum, found with the minimum s-t cut algorithm, yields the final surface. The energy function is defined as follows:

E(S) = Evis(S) + λ Equal(S) (1)

where S is the final surface and λ is a balance factor. In this study, we use λ = 0.5, which can achieve favorable results across all the experiments; this value is also the default setting in OpenMVS . In Eq. 1, the visibility term Evis(S) conforms to the principle that the line of sight from the cameras to the points should not cross the final surface. Thus, it fully exploits the visibility of points, and can effectively filter out outliers. Besides, the quality term Equal(S) is defined to penalize triangles with improper size or edge length, both of which tend to have less visibility than the others on dense surfaces. The evaluation criterion of the triangles is related to the angle between a triangle and its empty circumspheres. Following the minimum s-t cut, every tetrahedron is labeled as inside or outside; the triangles that lie between the two constitute the surface. Note that not all the points are inserted during Delaunay triangulation. A point can only be inserted when the re-projection distance between it and the points that have already been inserted exceeds a certain threshold; this keeps the triangulation compatible with the computer memory and effectively reconstructs the places with overly dense points.
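The adaptive point-insertion rule at the end of this step can be imitated with a hash-grid decimation pass (a sketch with our own names; we use a 3D Euclidean distance as a stand-in for the re-projection distance the paper actually tests):

```python
import numpy as np

def decimate(points, min_dist):
    """Greedy insertion filter: a point is accepted only if every
    previously accepted point is at least min_dist away.  A uniform
    hash grid with cell size min_dist limits each test to the 27
    neighbouring cells, so the pass is roughly linear in N."""
    grid = {}            # cell -> list of accepted points
    kept = []
    inv = 1.0 / min_dist
    for p in points:
        cx, cy, cz = np.floor(p * inv).astype(np.int64)
        ok = True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for q in grid.get((cx + dx, cy + dy, cz + dz), ()):
                        if np.linalg.norm(p - q) < min_dist:
                            ok = False
        if ok:
            grid.setdefault((cx, cy, cz), []).append(p)
            kept.append(p)
    return np.asarray(kept)
```

Thinning overly dense regions this way bounds the size of the Delaunay triangulation, which is what keeps the per-chunk memory predictable.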
Local mesh clean up A degree of cleaning is performed to eliminate noise after local surface reconstruction for each chunk, which includes removing non-manifold and overly long edges, isolated components, and vertexes connected to a single facet or none, as well as filling holes. The cleaning process is necessary because the Delaunay-based method cannot obtain the most complete and consistent surface at once; furthermore, cleaning is not as time-consuming as the other steps. Hole filling is necessary because some facets may be removed in the process of removing overly long edges. The hole-filling algorithm here is implemented by the Visualization and Computer Graphics Library (VCG) ; it is a heuristic algorithm that fills holes up to a specified side length as far as possible. Once the local surfaces for each chunk are generated, using the overlapping boundaries as an intermediate, we merge them by extracting proper facets and resolving the inconsistencies in the overlapping areas. Consistent triangle extraction The local surfaces are computed individually, and inconsistent facets exist mainly in the boundary grids. To resolve these inconsistencies, we first extract the triangles located in the internal grids (not boundary grids), which are computed just once. Then, we extract the triangles that span the internal and boundary grids, because in the areas between the internal and boundary grids, these triangles are farther from the outer boundaries of the chunk than the triangles generated by the other chunks, and the Delaunay tetrahedralization is more stable there. These triangles are more suitable for selection and are consistent with the first kind of triangles extracted. Following this, we focus on extracting the triangles within the boundaries, which are computed two or four times. We extract the triangles that are repeated exactly as many times as the number of chunks sharing the grids where they are located.
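The repetition test above can be sketched as follows (triangles are canonicalized as sorted vertex-id tuples; `expected_copies` is a hypothetical helper that reports how many chunks cover a triangle's grid — 1 for internal grids, 2 or 4 inside boundaries — and is assumed to be known from the partition):

```python
from collections import Counter

def extract_consistent(chunk_meshes, expected_copies):
    """chunk_meshes   : per-chunk lists of triangles (3-tuples of vertex ids)
    expected_copies : maps a canonical triangle to the number of chunks
                      whose region contains it
    Returns triangles reproduced identically by every chunk covering them."""
    counts = Counter(tuple(sorted(t)) for tris in chunk_meshes for t in tris)
    return sorted(t for t, c in counts.items() if c == expected_copies(t))

# toy example: two chunks that agree on the shared boundary triangle
a = [(1, 2, 3), (4, 5, 6)]           # chunk A: internal + boundary triangle
b = [(6, 5, 4), (7, 8, 9)]           # chunk B reproduces the boundary triangle
mult = lambda t: 2 if t == (4, 5, 6) else 1
extract_consistent([a, b], mult)     # -> [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
```

A boundary triangle produced by only some of the chunks that cover its grid fails the count test and is dropped, leaving a hole to be filled in the next step.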
These repeated triangles are also consistent, because they are the same in all adjacent chunks. Owing to the effectiveness of the Delaunay-based method, most triangles in the boundaries are repeated ones, thus simplifying the subsequent hole-filling task to an extent. By combining the selected consistent triangles above, we can obtain a surface mesh with some holes in the boundary grids. Then, to minimize these inconsistencies, we attempt to fill these holes. This step is similar to the method in ref. . We first remove triangles that would intersect the surface or generate non-manifold edges if added, and then cluster the rest by edge connectivity in each chunk. Specifically, we group the triangles that merge into a single connected domain within a chunk, and refer to each group of triangles after clustering as a patch. The patch is used as the smallest unit for hole filling. It is better to prioritize patches that are farther from the outer boundaries of a chunk, because Delaunay tetrahedralization is more stable in these regions, compared to those close to the outer boundaries. We use the centroid of a patch to represent the average position of all the points in the patch, and find the outer boundaries of the chunk where the patch is located. The farther the centroid is from these outer boundaries, the higher its selection priority. We define the offset of a patch P as follows:

offset(P) = min( min_{x ∈ Bx} |cx − x|, min_{y ∈ By} |cy − y| ) (2)

where cx and cy are the x and y coordinates of the centroid of P, and Bx and By are the x- and y-coordinate sets of the outer boundaries of the chunk where P is located. We sort the patches by their offset in descending order and visit them sequentially. If a patch does not cross the surface or generate non-manifold edges, it will be added to the final surface. Note that patches are used instead of single triangles for hole filling because using the former can reduce the number of required checks (checks for intersections or generation of non-manifold edges).
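The priority computation can be sketched as below (the min-distance form of the offset is our reading of the definition in the text — the distance from the patch centroid to the nearest outer boundary — and the toy boundary layout is an assumption):

```python
import numpy as np

def patch_offset(patch_vertices, bx, by):
    """Offset of a patch: distance from the patch centroid to the nearest
    outer boundary of its chunk (x-boundary coordinates bx, y-boundary
    coordinates by).  A larger offset means the patch lies in a more
    stable region of the Delaunay tetrahedralization, so it is added to
    the surface earlier.  Vertices are given as (x, y) positions."""
    c = np.mean(np.asarray(patch_vertices, dtype=float), axis=0)
    dx = min(abs(c[0] - b) for b in bx)
    dy = min(abs(c[1] - b) for b in by)
    return min(dx, dy)

def sort_patches(patches, bx, by):
    """Visit order for hole filling: descending offset."""
    return sorted(patches, key=lambda P: patch_offset(P, bx, by), reverse=True)
```

Each patch popped from this ordering is accepted only if it neither intersects the current surface nor creates non-manifold edges, exactly as described above.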
This step can effectively fill the holes caused by inconsistent boundary computation. Finally, we remove the non-manifold vertexes using VCG and apply HC-Laplacian smoothing as a post-processing step to obtain a smoother surface. These tasks were not performed when the local meshes were being cleaned, because they displace the points and change the topology of the meshes, thereby greatly reducing the number of repeated facets in the boundaries. Note that we cannot theoretically guarantee that the final result has no holes (likewise in the global optimization based method ), but from the experimental results, most areas of the surface are watertight; occasionally, there may be a few small holes where the noise is particularly large. Besides, when the input point cloud contains isolated outliers somewhere (for example, for the aerial photography of urban scenes, some outliers may appear deep below the ground plane, which, although very rare, cannot be completely ruled out), we may incorporate them into our final surface. However, this problem is not difficult to resolve. We can eliminate such noise using the visibility information by checking each point's visibility in its associated cameras. If a point is not visible in any of the cameras that acquired it, it can be considered an outlier and removed. Results and discussion The proposed method is evaluated by varying the partition numbers, and comparing it with other state-of-the-art approaches. Here, we used a 20-core workstation with a 2.4 GHz CPU and 128 GB RAM. The development environment of our experiments is Ubuntu 18.04, 64-bit. Datasets and parameters We use three large-scale datasets, Temple, City1, and City2, all obtained using drone aerial photography. For all three datasets, the points are computed from the images using off-the-shelf SfM [41, 42] and MVS algorithms. A detailed description of the datasets is shown in Table 1, and the illustration of the point clouds, as well as the cameras, is shown in Fig.
4. The proposed algorithm has two main parameters: grid size, δ, and maximum number of points in one chunk, Nmax. δ is set according to the size of objects in the scene, and we set δ as 6 m for all the datasets. The Nmax values are determined by the limits of the computing resources (mainly the memory). Evaluation of different partition numbers The partition numbers are affected by the upper limit of the number of points in a chunk. To verify the robustness of the proposed algorithm against the number of chunks, we modify Nmax from 1.3 × 10⁷ to 9.0 × 10⁶ and 6.0 × 10⁶ to vary the number of partitions, and evaluate the results on the Temple dataset. As may be seen in Fig. 5, as the number of points in a chunk decreases, although the reconstruction result is still good, the completeness and accuracy are relatively compromised. With different partition numbers, we record the running time of the main steps in our method and the peak memory consumption, as shown in Table 2. As may be observed, as Nmax decreases, the algorithm consumes less memory, and the local Delaunay-based computation is less time-consuming; however, surface merging (mainly hole filling) becomes more time-consuming. Therefore, our method is more advantageous when the partition numbers are relatively small. We prefer to choose Nmax based on the memory limit of the computer, because the number of partitions obtained is directly proportional to the number of holes to be filled and the required computation time. Comparison with state-of-the-art approaches In this part, we qualitatively compare our method with the state-of-the-art methods [5, 34, 35, 43] for the three datasets. First, we compare our method with the global Delaunay-based optimization method (hereafter referred to as “Global”). We also compare our method with the global dense multiscale reconstruction (GDMR) [34, 43] and the floating scale surface reconstruction (FSSR) .
These two methods are based on implicit functions; instead of visibility information, they deploy a scale parameter that affects the time and memory consumption and the reconstruction completeness. FSSR and GDMR are 64-bit executable programs provided respectively by refs. [44, 45]. Global is the implementation in OpenMVS . In our method, we use Nmax = 1.3 × 10⁷; for FSSR and GDMR, we set the scale parameters according to the length of the edges from one point to its neighbors; thus, for both, we use 0.08 on the Temple dataset, 1.5 on the City1 dataset, and 1.0 on the City2 dataset, which yield the best results among the settings we tried. We first compare the reconstruction completeness of these methods. We can see in Fig. 6 that FSSR does not effectively fill larger holes; Global and our method outperform it in this aspect. Furthermore, Global and our method can also retain more details with less noise than GDMR and FSSR. Time and memory consumption All the methods are run on the CPU. The local reconstruction work in the proposed method is run in parallel, while that of Global is run sequentially. When we run the executable programs of FSSR and GDMR, we find that parallel computations are added when certain operations are performed. The time consumption and memory usage of the different methods can be seen in Table 3, from which it may be observed that our method outperforms Global in memory usage and time consumption. FSSR and GDMR sometimes run faster because they do not utilize the visibility information of the points and cameras; however, there is a tradeoff between this speed and the capacity to retain details and scene completeness. Thus, in terms of detail retention and scene completeness, our method and Global outperform FSSR and GDMR. In this paper, we propose a scalable point-cloud meshing approach to image-based 3D modeling that can enable the reconstruction of large-scale scenes with minimal memory usage and time consumption.
Different from the current distributed point-meshing algorithms based on the regular voxel partition of the scene, we propose a region-partitioning method that can divide a scene into several chunks with overlapping boundaries, each chunk satisfying the memory limit. Then, the Delaunay-based optimization is used to extract the mesh for each chunk in parallel. Finally, local meshes are merged by resolving local inconsistencies on the overlapping areas between the chunks. We evaluate the proposed method on three city-scale scenes with hundreds of millions of points and thousands of images, and demonstrate its scalability, accuracy, and completeness, compared with the state-of-the-art methods. In this study, the hole-filling task was performed as a sequential computation. However, in future work, we will mainly focus on achieving simultaneous parallel computation when filling the holes to further improve the running speed and efficiency of our method. Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Floating scale surface reconstruction Global dense multiscale reconstruction Multi-view stereo Poisson surface reconstruction Visualization and Computer Graphics Library Wang WX, Zhao WS, Huang LX, Vimarlund V, Wang ZW (2014) Applications of terrestrial laser scanning for tunnels: a review. J Traffic Trans Eng 1(5):325–337 https://doi.org/10.1016/S2095-7564(15)30279-8 Wiemann T, Mitschke I, Mock A, Hertzberg J (2018) Surface reconstruction from arbitrarily large point clouds. In: Abstracts of IEEE international conference on robotic computing. IEEE, Laguna Hills, https://doi.org/10.1109/IRC.2018.00059 Li RH, Bu GC, Wang P (2017) An automatic tree skeleton extracting method based on point cloud of terrestrial laser scanner. Int J Opt 2017:5408503 Frahm JM, Fite-Georgel P, Gallup D, Johnson T, Raguram R, Wu CC, et al (2010) Building Rome on a cloudless day.
In: Daniilidis K, Maragos P, Paragios N (eds) Computer Vision - ECCV 2010 11th European conference on computer vision, Heraklion, September, 2010. Lecture notes in computer science (Lecture notes in artificial intelligence), vol 6314. Springer, Heraklion, Crete Vu HH, Labatut P, Pons JP, Keriven R (2012) High accuracy and visibility-consistent dense multiview stereo. IEEE Trans Pattern Anal Mach Intell 34(5):889–901 Furukawa Y, Curless B, Seitz SM, Szeliski R (2010) Towards internet-scale multi-view stereo. In: Abstracts of IEEE computer society conference on computer vision and pattern recognition. IEEE, San Francisco Furukawa Y, Ponce J (2010) Accurate, dense, and robust multiview stereopsis. IEEE Trans Pattern Anal Mach Intell 32(8):1362–1376 https://doi.org/10.1109/TPAMI.2009.161 Kazhdan M, Bolitho M, Hoppe H (2006) Poisson surface reconstruction. In: Abstracts of the fourth eurographics symposium on geometry processing. ACM, Cagliari, Sardinia Zhou K, Gong MM, Huang X, Guo BN (2008) Highly parallel surface reconstruction. Microsoft Research Asia, Beijing Kazhdan M, Hoppe H (2013) Screened Poisson surface reconstruction. ACM Trans Graph 32(3):29 https://doi.org/10.1145/2487228.2487237 Lorensen WE, Cline HE (1987) Marching cubes: a high resolution 3D surface construction algorithm. ACM Siggraph Comput Graph 21(4):163–169 Hill S, Roberts JC (1995) Surface models and the resolution of N-dimensional cell ambiguity. In: Paeth AW (ed) Graphics gems V. Elsevier, San Diego, pp 98–106. https://doi.org/10.1016/B978-0-12-543457-7.50023-1 Chernyaev EV (1995) Marching cubes 33: construction of topologically correct isosurfaces. European Organization for Nuclear Research, Geneva Lewiner T, Lopes H, Vieira AW, Tavares G (2012) Efficient implementation of marching cubes’ cases with topological guarantees. J Graph Tools 8(2):1–15 Nielson GM (2003) MC*: star functions for marching cubes. In: Abstracts of IEEE visualization. IEEE, Seattle.
https://doi.org/10.1109/VISUAL.2003.1250355 Delaunay B (1934) Sur la sphère vide. A la mémoire de Georges Voronoï. Bull l'Académie Sci l'URSS 6:793–800 Chen ZG, Wang WP, Lévy B, Liu LG, Sun F (2014) Revisiting optimal Delaunay triangulation for 3D graded mesh generation. SIAM J Sci Comput 36(3):A930–A954 https://doi.org/10.1137/120875132 Jancosek M, Pajdla T (2011) Multi-view reconstruction preserving weakly-supported surfaces. In: Abstracts of IEEE conference on computer vision and pattern recognition. IEEE, Colorado Springs Labatut P, Pons JP, Keriven R (2007) Efficient multi-view reconstruction of large-scale scenes using interest points, Delaunay triangulation and graph cuts. In: Abstracts of the IEEE 11th international conference on computer vision. IEEE, Rio de Janeiro. https://doi.org/10.1109/ICCV.2007.4408892 Hiep VH, Keriven R, Labatut P, Pons JP (2009) Towards high-resolution large-scale multi-view stereo. In: Abstracts of IEEE conference on computer vision and pattern recognition. IEEE, Miami, pp 1430–1437 https://doi.org/10.1109/CVPR.2009.5206617 Labatut P, Pons JP, Keriven R (2009) Robust and efficient surface reconstruction from range data. Comput Graph Forum 28(8):2275–2290. https://doi.org/10.1111/j.1467-8659.2009.01530.x Dey TK, Goswami S (2006) Provable surface reconstruction from noisy samples. Comput Geom 35(1–2):124–141 Dey TK, Goswami S (2003) Tight cocone: a water-tight surface reconstructor. In: Abstracts of ACM symposium on solid modeling and applications. ACM, New York, pp 127–134 Amenta N, Choi S, Dey TK, Leekha N (2002) A simple algorithm for homeomorphic surface reconstruction. Int J Comput Geom Appl 12(1–2):125–141 https://doi.org/10.1142/S0218195902000773 Amenta N, Bern M, Kamvysselis M (1998) A new voronoi-based surface reconstruction algorithm. In: Abstracts of the 25th annual conference on computer graphics and interactive techniques.
ACM, New York Wiemann T, Annuth H, Lingemann K, Hertzberg J (2013) An evaluation of open source surface reconstruction software for robotic applications. In: Abstracts of 2013 16th international conference on advanced robotics. IEEE, Montevideo Wiemann T, Mrozinski M, Feldschnieders D, Lingemann K, Hertzberg J (2016) Data handling in large-scale surface reconstruction. In: Menegatti E, Michael N, Berns K, Yamaguchi H (eds) Intelligent autonomous systems 13. Advances in intelligent systems and computing, vol 302. Springer, Cham, pp 499–511. https://doi.org/10.1007/978-3-319-08338-4_37 Wiemann T, Nüchter A, Hertzberg J (2012) A toolkit for automatic generation of polygonal maps-Las Vegas reconstruction. In: Abstracts of ROBOTIK 2012; 7th German conference on robotics. IEEE, Munich, pp 1–6 Gopi M, Krishnan S (2002) A fast and efficient projection-based approach for surface reconstruction. In: Abstracts of XV Brazilian symposium on computer graphics and image processing. IEEE, Fortaleza-CE Marton ZC, Rusu RB, Beetz M (2009) On fast surface reconstruction methods for large and noisy point clouds. In: Abstracts of IEEE international conference on robotics and automation. IEEE, Kobe. https://doi.org/10.1109/ROBOT.2009.5152628 Mencl R, Muller H (1997) Interpolation and approximation of surfaces from three-dimensional scattered data points. In: Abstracts of conference on scientific visualization. IEEE, Dagstuhl Li E, Zhang XP, Chen YY (2014) Sampling and surface reconstruction of large scale point cloud. In: Abstracts of the 13th ACM SIGGRAPH international conference on virtual-reality continuum and its applications in industry. ACM, Shenzhen Guibas LJ, Oudot SY (2008) Reconstruction using witness complexes. Discrete Comput Geom 40(3):325–356 Ummenhofer B, Brox T (2017) Global, dense multiscale reconstruction for a billion points. Int J Comput Vis 125(1–3):82–94 https://doi.org/10.1007/s11263-017-1017-7 Fuhrmann S, Goesele M (2014) Floating scale surface reconstruction. 
ACM Trans Graph 33(4):46 https://doi.org/10.1145/2601097.2601163 Mostegel C, Prettenthaler R, Fraundorfer F, Bischof H (2017) Scalable surface reconstruction from point clouds with extreme scale and density diversity. In: Abstracts of IEEE conference on computer vision and pattern recognition. IEEE, Honolulu OpenMVS (2015): Open multi-view stereo reconstruction library. https://github.com/cdcseacave/openMVS Jancosek M, Pajdla T (2014) Exploiting visibility information in surface reconstruction to preserve weakly supported surfaces. Int Sch Res Notices 2014:798595 https://doi.org/10.1155/2014/798595 Cignoni P, Ganovelli F (2016) The visualization and computer graphics library (VCG). http://www.vcglib.net/ Vollmer J, Mencl R, Müller H (1999) Improved Laplacian smoothing of noisy surface meshes. Comput Graph Forum 18(3):131–138 https://doi.org/10.1111/1467-8659.00334 Schönberger JL, Frahm JM (2016) Structure-from-motion revisited. In: Abstracts of conference on computer vision and pattern recognition. IEEE, Las Vegas Schönberger JL, Zheng EL, Frahm JM, Pollefeys M (2016) Pixelwise view selection for unstructured multi-view stereo. In: Leibe B, Matas J, Sebe N, Welling M (eds) Computer vision - ECCV 2016. 14th European conference on computer vision Amsterdam, October, 2016. Lecture notes in computer science, (Lecture notes in artificial intelligence), vol 9907. Springer, Amsterdam Ummenhofer B, Brox T (2015) Global, dense multiscale reconstruction for a billion points. In: Abstracts of IEEE international conference on computer vision. IEEE, Santiago. https://doi.org/10.1109/ICCV.2015.158 Fuhrmann S, Langguth F, Goesele M (2014) MVE - a multi-view reconstruction environment. In: Klein R, Santos P (eds) Eurographics workshop on graphics and cultural heritage. The Eurographics Association, Germany Ummenhofer B, Brox T (2015) Global, dense multiscale reconstruction for a billion points.
https://lmb.informatik.uni-freiburg.de/people/ummenhof/multiscalefusion/ This work was supported by the Natural Science Foundation of China (Nos. 61632003, 61873265) The authors declare that they have no competing interests. Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. About this article Cite this article Han, J., Shen, S. Scalable point cloud meshing for image-based large-scale 3D modeling. Vis. Comput. Ind. Biomed. Art 2, 10 (2019). https://doi.org/10.1186/s42492-019-0020-y - Delaunay-based optimization - Large-scale scenes
http://www.novelreleases.com/novels/25669/Universe_2025/
Onboard a spaceship over a hundred years after ETs abducted her ancestors, Ruth grapples with frightening changes all around her. Light years away, and three centuries after the abductions (from Earth’s perspective), the aliens ask for help to fix the collapsing colony ship before it reaches the planet they’ve prepared for the human captives.
Sexual Content: Semi-graphic
Traumatizing Content: Confinement, violence, sexism, violence against women motivated by gender. Rape is not depicted but may be referenced.
http://webmasters.stackexchange.com/questions/tagged/amazon?sort=unanswered&pagesize=15
What do the different parameters in an Amazon Associates (affiliates) URL do? (Why use the Associates interface?) I know it's possible to turn any Amazon link into an affiliate link simply by appending your Associates tracking ID to the end of the Amazon URL, so this: amazon.com/gp/product/B002RPCOH8/ becomes ...
The API is returning a LargeImage node with an image that is 110px by 110px for this product: http://www.amazon.com/Syma-X5C-Exlorers-2-4G-Quadcopter/dp/B00OCFMVHE Where are the larger sizes of the ...
I'm getting the PurchaseURL from the CartCreate operation but I don't know how to link to a mobile-friendly purchase operation or confirm add to cart; it's linking to a site that has a really small ...
https://github.com/terenine/emacs_libs
This is my Emacs configuration. It has been cobbled and borrowed from people who know more about Emacs and Emacs Lisp than I do. Here are some of my sources:
http://www.emacswiki.org/emacs/
http://www.emacswiki.org/emacs/ESSWindowsAdvice
https://github.com/boorad/emacs
https://github.com/overtone/live-coding-emacs.git

You are more likely to find my configurations useful if:
- you come from a Windows + Visual Studio background
- you want to use Emacs primarily for Erlang development

If you think I've butchered things please let me know. I have used Homebrew to install several packages that are not Emacs specific. You will need to install these in order to get certain parts of the config to function properly. These include:
- Erlang (I have R14B03)
- Python 3

How to use these files

If you don't already have Emacs, install it. I'm using Emacs 23.2. Clone (or download and extract) this repository so that the directory "emacs_libs" is in your home directory (Linux "~/", Windows "%HOME%"). Once you have a "~/emacs_libs" directory, edit your "~/.emacs" file to require emacs_libs by adding these two lines:

(add-to-list 'load-path "~/emacs_libs")
(require 'config-runner)

This config includes OrgMode, Erlang Mode (and associated tools), and some Python tools. You can comment out the loading of any one of these features in the config-runner.el file in the emacs_libs directory if you do not wish to use them. This config also includes Wrangler, a refactor tool for Erlang. In order to get it to work you need to take the following steps after cloning the repository:
- open a terminal window and navigate to the ~/emacs_libs/wrangler-0.9.2.4/ directory
- type "./configure" and hit enter
- type "make" and hit enter
- type "sudo make install" and hit enter

Once this is complete your Emacs should start without error.

Optional Setup script

There is a script named 'home-folder-init' that will automatically set links to the configuration files in the 'emacs_libs/home-config-files' directory.
Please open the home-folder-init file and understand that it will delete existing files and replace them with links. If this is not what you want then copy the config files or create links to those you want. You've been notified.
https://apps.support.sap.com/sap/support/knowledge/en/2577623
- Import/Export is not respecting the defined target criteria.
- Position target criteria is working in Org Chart, but not in Import/Export.
- Can't limit the Export/Import according to target criteria set in RBP.

Environment

- SuccessFactors Cloud HCM: MDF
- SuccessFactors Cloud HCM: MDF Import and Export

Reproducing the Issue

- Go to the "Import and Export Data" page.
- Choose Export Data and select the object you want to export the data for.
- Go to the "Monitor Job" page and download the status of the job you ran in the step above.
- Open the downloaded file and notice the data inside the root .csv file for that object.

All the data is available in the exported file, irrespective of what target criteria were set for the user who exported the data. For the same user, data is limited/restricted in accordance with the target criteria defined in RBP on the "Manage Data" page or in the "Position Org Chart".

This is expected behavior and is by design in the framework. Data flowing via MDF Import/Export does not respect target criteria defined in RBP. This is true for all MDF entities: in the case of Export, the framework exports all the data regardless of how the data is controlled via RBP on the Manage Data page (or Position Org Chart). The same goes for Import.

Further, for OData API based data flows into and out of the system: target criteria is respected for non-admin users, whereas for admin users target criteria is not followed.

Keywords

- MDF: Metadata Framework
- Custom MDF RBP Permissions
- MDF Target Criteria
- Import and Export Data
- Export Position data
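As a toy illustration of the behavior described here (the UI path applies the user's target criteria while the export path does not), consider the following sketch. It is not SuccessFactors code; every name and record is invented purely to model the two code paths:

```python
# Toy model: role-based target criteria filter the UI read path,
# but the export path returns everything by design.
# All field names and records below are invented for illustration.

RECORDS = [
    {"position": "P1", "department": "Sales"},
    {"position": "P2", "department": "Engineering"},
]

def manage_data_view(records, target_departments):
    """UI path: respects the user's RBP target criteria."""
    return [r for r in records if r["department"] in target_departments]

def export_data(records, target_departments):
    """Import/Export path: target criteria are ignored by design
    (the parameter is accepted but deliberately unused)."""
    return list(records)

user_targets = {"Sales"}
print(len(manage_data_view(RECORDS, user_targets)))  # restricted view
print(len(export_data(RECORDS, user_targets)))       # full export
```

The point of the sketch is only that the same permission object reaches both paths, but only one of them applies it.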
http://www.cs.princeton.edu/courses/archive/spring11/cos333/projectideas.html
New item(s) at the front.

The ideas here have come from a variety of friends on campus. I've edited their ideas into a somewhat uniform style and format, but have tried to leave their ideas in their own words. You're welcome to approach the people listed directly, and I would be happy to act as an intermediary if you prefer.

New for the Spring 2011 semester:

The Daily Princetonian website presents articles and several forms of multimedia content related to campus and local-area news. Completely redesigned in 2007, the site is a sophisticated web application built on Django and a MySQL/Apache backend that is used by over 100 staff members to publish the paper, and averages 30,000 pageviews from the general public every day. Developed, managed, and maintained entirely by students, the web team of the Daily Princetonian offers unparalleled opportunities to make a significant impact and do highly visible development and design work. We are looking for students who would be willing to work in collaboration with the web team to develop significant, modular projects that can be integrated with our existing website to make the site more engaging and useful for our readers and present the news in new and interesting ways that are only possible with the innovative use of web technology. A few reasons why choosing us for a 333 project is a good decision:

IPHONE/ANDROID APP: The Daily Princetonian is looking for a team of developers to produce a mobile phone application (iPhone preferred, other platforms possible as well) that would allow access to all of the content and functionality of the Daily Princetonian web site in an intuitive and powerful mobile interface.

AllPrinceton.com is an experimental news and information platform for the community of Princeton, NJ (both town and gown). AllPrinceton's goal is to provide web-based news-gathering, information organization, and practical distribution of any and all information that the community finds useful.
This includes original articles, written both by AllPrinceton and selected authors; a feed aggregator for RSS feeds from the community; a multimedia feature section; a live twitter aggregator; directories, both civic and commercial; free classifieds; aggregated events; and the ability to integrate web-based community tools like SeeClickFix, Flickr, AllOurIdeas, and third-party video hosting services. AllPrinceton.com seeks assistance in developing advanced information gathering and distribution systems. Already some former COS 333 students are helping to develop a read-only iPhone app for the site. We are looking for additional help this year in extending our reach into the mobile Web in the following ways:

AllPrinceton is also interested in entertaining suggestions from COS 333 students on ways to improve the user interface of the site. We would be happy to offer the AllPrinceton platform for experimentation in community news solutions.

Did you know that less than half of Princeton's 7 million books are in Firestone? That more than 2 million are warehoused miles off campus? What would it be like to browse an alternate-reality Firestone Library, with book stacks that never got full, in a universe where things could be in more than one place at a time? As Edward Tufte points out in his classic book, /The Visual Display of Quantitative Information/, the lists and tables of traditional catalogs often fail to convey the rich information they contain. The catalog contains a wealth of data about the library's items, including how big they are, how old they are, and where they would be found if the physical library were actually to be arranged according to a classification scheme, but all that information is difficult to see. If there were a way to tease that information out and present it more perspicuously, the library could provide a more serviceable browsing interface.
In this project, students would work with the Coordinator of Library Digital Initiatives (a digital humanist with an MS in computer science) to develop a three-tier system:

One idea that comes to mind for the Frist Campus Center is something we’ve struggled with for some years now, and are only able to resolve via a labor-intensive manual process which takes hours and hours – that is, a student staff scheduling application. In short, the concept we would be hoping to develop is the following –

There are retail applications available that do some of these things, but each one we have evaluated thus far suffers from ‘square peg in round hole’ syndrome, and if we can create one in-house personalized to Princeton, that would be great. Ultimately if this is successful, it could scale to other facilities I manage, including Richardson Auditorium and Chancellor Green – both of which are reliant on student staff to operate.

The Commons is a "serious" game that interactively teaches players about sustainability and sustainable principles via the multiplayer computer game paradigm, with a rule system based on "The Settlers of Catan". The Commons is unique in that it combines compelling gameplay mechanics with a strong didactic component concerning the important role that sustainability plays in both individual and collective decision making. We are looking for an innovative and creative team (with strong graphic interface skills) to design and develop this game based on a board game prototype that has been extensively test-played at Princeton University. The computer game will be Web based and will be played by more than 100 players at a time over the course of a couple of weeks. The Commons is an economic and social expansion-based game that teaches sustainable principles to players through the choices they make.
By playing this game players will become conscious of humanity's impact on the global environment, as well as the issues and challenges that individuals and policymakers face in taking on this increasingly urgent problem. The game takes place in a modern setting centered on international expansion and development, where developed countries face decisions regarding their choice of infrastructure and their already considerable carbon footprint, and developing countries must balance their desire for economic expansion and increased quality of life with the concomitant potential environmental impact. The game map represents a theoretical distribution of environmental resources that can be exploited by players. Players represent nations interacting with each other and the global environment. Players control where and when to build cities and roads, which will affect the resources that they gather and the amount of emissions that they produce. Players also manage the resources that are available to them. Players achieve victory by earning victory points, which can be earned by building cities or by acquiring sustainability points. Development comes at the cost of creating emissions, which take away victory points from players. Players engage in unilateral and multilateral strategies in order to maximize their own ability to earn victory points (i.e. maximize growth while minimizing emissions).

The expansion of the US criminal justice system over the past three decades has made imprisonment a common experience in the lives of low-income, minority males. Virtually all who go to prison eventually return to their communities, where they struggle to avoid re-arrest. Both researchers and ex-prisoners believe that finding employment is the number one priority upon leaving prison; yet they face numerous barriers to getting a job. The stigma of a criminal record is a well-documented obstacle over and above their already disadvantaged place in the labor market.
While a few employment reentry programs have been successful, they are costly. More limited programs have produced null or even negative results. Even in the face of these barriers, nearly half of ex-prisoners find formal labor market work within the first few months. Largely due to methodological limitations, we know little about the type of work they find and how they find it. The use of smartphones as a data collection tool will help fill this gap, contributing to sociological theory on finding work and to policy interventions supporting ex-prisoners in their job search.

Students will develop an open-source Android application, which will be distributed to over 100 reentering prisoners to track their job search experiences. It is also possible that the application will be piloted on a small group of reentering prisoners at the end of the spring 2011 semester. This will be the third smartphone application ever developed for social science researchers and the only application ever developed for a traditionally hard-to-study population. The application will be a three-tier system and will involve the development of:

The application will have multiple, discrete finishing points. These tasks combine the functionality of two previous open-source smartphone applications developed by the MIT Media Lab and Princeton's OPR. These finishing points are listed, in order of importance:

Princeton Panda aims to become the first port of call for Princeton students looking to answer any university-related question. As such, it will contain original content, organised links to university resources, and easy-to-use interfaces that make previously available information more easily accessible. Much of the useful information that a Princeton student needs---from library hours of operation, to dinky schedules, to departmental requirement lists---is available somewhere on the internet, but is at least three clicks too far away to be convenient.
There is also no central repository of information, such that students can be confident that they'll be able to find what they're looking for, while the information that does exist on various official Princeton sites is not always easily indexed from google. Finally, sometimes students are looking for something (e.g., "what are ALL the important dates to remember during junior spring in COS?") where the question is not well-specified: they don't know what it is they don't know (e.g. when do you apply for internships? When do you find a thesis advisor? When are the main deadlines for competitions they haven't yet heard of?) and so the website needs to organise material based on other people's experiences in order to serve users who don't yet know quite what they're looking for.

There will be a dedicated Panda content team to find and write information for the site---we're registering as a student group in the Spring---but the tech side is clearly key. There are four potential COS 333 projects which would make the website work, and fulfil the vision of a genuinely useful student website.

Panda is in the fairly unique position that a lot of its users will be categorisable by type (year group, international status, major) and these interests map well onto the kind of information that will be most relevant to them. As such, it may be possible to make the site "self-organising" by student type. This would require categorising every page by its usefulness/relevance to different types of students, and making a website that can re-organise itself according to importance for each type.

Perhaps the most exciting option, and probably the hardest to code, is to make the website automatically self-organising on the basis of users' revealed preferences. This would function something like Amazon's "People Who Enjoyed This Book Would Also Like..."
function; basically, the website tracks which pages a user has visited (and stayed on), predicts which other pages will be most relevant for her based on the groups of pages that other users have enjoyed, and then makes the right pages most prominent for each user.

The other useful option for a customisable wiki would be one where users themselves could customise the site to suit their purposes. For example, users could 'upvote' pages that they especially liked, and those pages (and other pages "tagged" as similar by Panda editors) could be displayed more prominently for that user.

It's surprisingly difficult to do good search on a wiki. A lot of the same terms come up in multiple searches, and it takes a pretty good search algorithm to know which page is most relevant. Also, a lot of information is not in text form---for example, some of the Hours of Operations pages on the website just use iframes to display content from other websites. I think that a really good search function would combine some kind of 'tagging' of the different pages (inspired by sites such as Flickr, which have done a lot of work on indexing non-text content) with some measure of different pages' popularity, along with the ideas behind more traditional search. It seems to me that google does a pretty bad job of ranking the importance of different pages within a particular website (try searching site:princeton.edu on google, for example). OIT often comes top for various Princeton searches (e.g. hours of operation), and I'm not sure that's because people are more interested in that. The ideal Panda search would be able to figure out the relative popularity of different pages and rank search results accordingly. One unique element of this site is that many of the users will be (relatively) similar, so the most popular pages might be almost-universally popular. Taking this to another level, it might be possible to use information about the user (e.g.
class year, search history) and situation (e.g. time of day, week or year) to predict which page a person is looking for. This is the segment I have least idea about, in terms of if and how it would work in practice. It may even be too difficult: search is not easy.

Current wiki software, while making it relatively easy to edit a page, is still complex enough to stop non-technical people from getting involved. I think there's a lot of space to make an easier wiki-editing interface, especially if we enable contributions to be moderated (so that people can't accidentally or malevolently destroy content). What I'm imagining is an interface for the user similar to "Track Changes" on Microsoft Word: she can click on any piece of text and simply type new content, with her changes appearing on screen for her. These changes could either be submitted to a moderator or could directly affect the site. Since starting this project I've looked at a lot of the current offerings in this area and, frankly, all of them make it far, far too hard to edit. Even Wikipedia makes it needlessly hard and needlessly intimidating to edit content. First, many people are lazy and the extra ten seconds it takes to enter a separate text editor might stop them from submitting a simple correction or contribution. Second, many people are extremely technophobic and even the appearance of complexity in a text editor will stop them submitting. I know this might sound odd but I truly believe that even a small improvement in simplicity could have a massive impact on people's willingness to contribute. A simple in-screen text editor could make all the difference, perhaps enabling users to perform two or three simple functions such as creating headings and inserting links but basically nothing else.
I actually think this project has decent commercial potential: there could be content-based websites, such as newspapers, universities and businesses, who would benefit from better encouraging users to submit content and, importantly, correct errors.

This is a niche part of the Panda project but it could be a lot of fun: it would be great to create a campus map that could tell you where your nearest printer and laundry were. You click or type your location and the map calculates the shortest distance to that utility. It might also be fun to add a more general "shortest route" feature that calculated the shortest/fastest route from any part of campus to another. Neither of these features would be life-changing for students, but I think both would be quite fun and useful.

I am heading up the team that is developing the iPrinceton app suite (for iPhones, Blackberries and Android; available now). The suite includes an SDK for adding modules to the suite (but, sadly, only for iPhones). The task for COS 333 students would be to develop a module for that suite. (Students can download the suite from the App store to see what it currently supports.) Incidentally, we will shortly add a Help module (contact information), campus tours and a dining services module (menus).

Holdovers from previous semesters, probably still with current interest:

Roughly, the project would create an interactive and innovative communication tool to enhance campus alcohol educational efforts, a web-based education and information gathering tool for the Alcohol Initiative. A web interface would present information in various formats (text, graphics, video, etc.). We would also like to gather information through questions that users would answer, and so a back-end database piece would also be involved. Of course, there might also be interaction between the two -- e.g., answers the user provides might affect subsequent content/questions.
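The campus-map "shortest route" idea above is a textbook shortest-path problem. Here is a minimal sketch using Dijkstra's algorithm on a small, invented campus graph; the locations and distances are illustrative only, not real campus data:

```python
# Dijkstra's algorithm on a tiny weighted graph of (made-up) campus walks.
import heapq

def shortest_path(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, distance) pairs.
    Returns (path as a list of nodes, total distance)."""
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node]:
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(queue, (nd, nbr))
    # walk the predecessor chain back from the goal
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]

campus = {
    "Frist": [("Firestone", 5), ("Friend", 3)],
    "Friend": [("Frist", 3), ("Firestone", 1)],
    "Firestone": [("Frist", 5), ("Friend", 1)],
}
print(shortest_path(campus, "Frist", "Firestone"))
# → (['Frist', 'Friend', 'Firestone'], 4)
```

The "nearest printer" variant is the same search, stopped at the first node tagged as a printer rather than at a fixed goal.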
The library has a large number of potential projects as they make increasing use of digital technology. Here are a few:

The Princeton University Digital Library: PUDL provides access to digital versions of a rich collection of rare books, manuscripts, musical scores, photographs, and engravings. There are large amounts of XML data and several terabytes of digital photographs. Students interested in information retrieval, information visualization, high-resolution image display, workflow management, image processing, and a variety of information-science problems will find many opportunities here.

Daily Princetonian Digital Archives: The Library has begun to digitize the historical run of the Daily Princetonian, from its inception in 1876 through 1997. As with the PUDL, the Prince archive is a trove of complex metadata and digital images currently being served through a commercial digital-library interface. Projects that refine or extend that interface, perform data mining or text analysis, or provide web services for the archive are among many possibilities.

Voyager Locator: Several years ago a Princeton CS student wrote a program that shows users how to find their way to books in the stacks. The library wants to convert this application to work on mobile devices like the iPhone and Blackberry.

OIT has signed an agreement with Terribly Clever (now a part of Blackboard) to develop a set of mobility apps targeted at iPhone, Blackberries, and browser-capable smartphones. OIT provides a data feed (for example, a feed from the events calendar) and Blackboard builds an app that displays the data on various mobile devices. Our contract with Blackboard also allows us to develop our own modules as part of this mobility "suite", and we'd be happy to have students participate. Each such module will typically need to pull from a data feed, process the data, and generate output for various devices.

There are many pieces of art both outdoors and indoors.
It would be nice to be able to locate information quickly when we find a work of art on campus. I know that the art museum has been thinking about this, but a group of motivated students could really put some momentum behind it.

Princeton faculty and students are engaged in wonderful research projects. We normally hear about them when they appear on the University homepage, or if we wander to departmental sites. It would be great to have web and mobile interfaces to search and read about the important work going on. This could be a great marketing platform targeting potential donors and potential industry partners for research.

Build a robust, comprehensive room reservation program for the residential colleges that makes it easy for students to view all of our resources (via photos or videos), reserve spaces, provide account information, etc. Right now each college has its own home-grown version of this, but from the perspective of 'users' (students), it would be great to have a comprehensive portal of sorts, not least since they are often looking for specialized spaces (such as theaters, dance studios, multimedia labs) that not all colleges have -- so it is a lot of work for them to track down where these things exist, who to contact to arrange to use them, etc.

The idea is to write a Workflow Administrator. This would be a system that holds a library of templates for tasks that we do around here -- install a new system, move a system from one building to another, decommission an old system -- and a way to instantiate one of these tasks. There would be a part of the system that allowed you to create and edit tasks. The important thing for us, and what would distinguish this from a ticketing system (we already have one of those), is that the proposed system would know, for each step in the overall task, what data would need to be collected at that step (e.g. a computer's MAC address, rack location in the computer room, etc.)
The system would also have the ability to run a script at each step that could, for example, populate one of our other databases with the information collected at the various steps. The system would presumably be web based with some kind of database behind it, so it would hopefully have some relevance to today's computing environment. It might present some interesting challenges -- for example, within the part of the system that edits templates, how do you, on a web page, specify steps of the overall task that can be performed in parallel versus those that have to be performed serially?

Here are three ideas from the Pace Center, Princeton's center for civic engagement (http://pace.princeton.edu). Each of these would help us to improve significantly the effectiveness of public service done by Princeton students. The Pace Center sponsors projects and initiatives that involve more than 1400 participants each year. Programs range from the Student Volunteers Council (SVC -- a student-led organization dedicated to promoting and facilitating student volunteerism in the local communities of Princeton and Trenton, http://www.princeton.edu/svc) to Community Action (a pre-orientation program that introduces students to local communities) to Community House (which works with local schools to close the minority achievement gap) to Breakout Princeton (a civic action program) to a lively summer public service internship program.

Community Action Participant Assignment. Each year approximately 150 incoming freshmen participate in a week of service called Community Action (CA). The participants are accepted and assigned to one of 13 groups based on stated preferences, gender, residential college, allergies, and other relevant criteria. Ideally a program could be crafted that would take all these fields from the database and create groups based on them.

Public Service Internship Application Management System.
Each year, hundreds of students apply for summer public service internships. We sift through applications in order to select the 160+ who will receive Princeton sponsorship. Ideally we would have a system for processing application information stored in a database. The data from students' completed applications have already been "dumped" into an Access database. We need to be able to sort and manipulate those data into different reports and formats for several colleagues and reviewers. We also need to be able to analyze the data for our evaluative needs.
https://research.tue.nl/nl/publications/design-and-processing-of-invertible-orientation-scores-of-3d-imag
The enhancement and detection of elongated structures in noisy image data are relevant for many biomedical imaging applications. To handle complex crossing structures in 2D images, 2D orientation scores U : R^2 × S^1 → C were introduced, which already showed their use in a variety of applications. Here we extend this work to 3D orientation scores U : R^3 × S^2 → C. First, we construct the orientation score from a given dataset, which is achieved by an invertible coherent state type of transform. For this transformation we introduce 3D versions of the 2D cake wavelets, which are complex wavelets that can simultaneously detect oriented structures and oriented edges. Here we introduce two types of cake wavelets: the first uses a discrete Fourier transform, and the second is designed in the 3D generalized Zernike basis, allowing us to calculate analytical expressions for the spatial filters. Second, we propose a nonlinear diffusion flow on the 3D roto-translation group: crossing-preserving coherence-enhancing diffusion via orientation scores (CEDOS). Finally, we show two applications of the orientation score transformation. In the first application we apply our CEDOS algorithm to real medical image data. In the second one we develop a new tubularity measure using 3D orientation scores and apply the tubularity measure to both artificial and real medical data.
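For readers unfamiliar with the construction, the orientation score transform referenced in the abstract is commonly written as follows. This is the standard form from the orientation-score literature, not necessarily this paper's exact notation, with ψ the (cake) wavelet:

```latex
% 2D orientation score of an image f : \mathbb{R}^2 \to \mathbb{R}:
U_f(\mathbf{b}, \theta) = \int_{\mathbb{R}^2}
  \overline{\psi\!\left(R_\theta^{-1}(\mathbf{x} - \mathbf{b})\right)}\,
  f(\mathbf{x})\; \mathrm{d}\mathbf{x}

% 3D analogue on position-orientation space \mathbb{R}^3 \times S^2,
% with R_{\mathbf{n}} any rotation mapping a reference axis
% onto the orientation \mathbf{n} \in S^2:
U_f(\mathbf{x}, \mathbf{n}) = \int_{\mathbb{R}^3}
  \overline{\psi\!\left(R_{\mathbf{n}}^{-1}(\mathbf{y} - \mathbf{x})\right)}\,
  f(\mathbf{y})\; \mathrm{d}\mathbf{y}
```

Invertibility then hinges on the wavelet's orientation-integrated power spectrum covering the Fourier domain (approximately) uniformly, which is what the "cake" design of the wavelets achieves.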
https://systemsdigest.com/taxonomy/term/1785
Do you know that log files in Linux can quickly consume disk space if not managed properly? This can lead to performance issues and even system crashes. Log files? What exactly are they, and why should they matter to anyone using Linux-based systems? Log files are essential components of any Linux-based system. They are text files that contain information about system events, including errors, warnings, and other important messages.

For development teams aspiring to adopt shift-left testing, using Linux VMs can provide a secure and robust environment without the cost.

Here at NodeSource we are focused on fixing issues for the enterprise. This includes adding functionality and features to Node.js that are useful for enterprise-level deployments but would be difficult to upstream. One is the ability to execute commands remotely on Worker threads without the addition of running the inspector, such as capturing CPU profiles or heap snapshots.

Do you need complete control over your production environment? If so, you might want to skip the Platform as a Service (PaaS) offerings and deploy to your own server instead. This article describes deploying a Django application to an Ubuntu server at Linode.

Linux is a free and open-source Unix-based operating system. As a result of Linux's security and flexibility, its use is gaining a great deal of attention these days. A Linux distro is an operating system that relies on the Linux kernel. Linux might run on a server or a high-end cloud device as readily as on a desktop computer or laptop, though on a personal computer or laptop it can be difficult to use. However, times have changed since then.

On 25th August 1991, a computer-science student in Finland posted the following message on a Usenet newsgroup: “I’m doing a (free) operating system (just a hobby, won’t be big and professional like gnu) for 386(486) AT clones. This has been brewing since April, and is starting to get ready.
I’d like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).”

It is possible to become more effective and proficient with any tool when you know how to use its shortcuts. Consider this for a moment: do you think it is okay if someone repeatedly selects text by dragging with the mouse and choosing the cut option from a menu, instead of pressing Ctrl+X once? Or selects the entire text by holding down the mouse button, instead of pressing Ctrl+A? There is no exception to this rule when it comes to Linux shortcuts.

In today’s cloud ecosystem the demands for high-functioning and high-performance observability, security and networking functionality for applications and their network traffic are as high as ever. Historically a great deal of this kind of functionality has been implemented in userspace, but the ability to program these kinds of things directly into the operating system can be very beneficial to performance.
https://www.contech.live/post/contech-pharma-2022-exploring-digital-transformation-enabled-by-fair-data-implementation
Firstly, what are we talking about here? What has been done, what are the challenges, and what can be done to overcome them? It helps to set the scene by asking what FAIR data is, or more importantly what purpose this concept and its implementation fulfil. We need to step back a little and think about the purpose of analytics. In the context of scientific and commercial research and development, analytics is all about supporting the assessment of hypotheses through digital means, as opposed to what you might call bench research. Or rather, in support of the bench research that is done. The reason that this support is needed is that science is complicated – there is an understatement if ever there was one. It is at the point where no individual can understand enough of a broad range of subjects to undertake research on their own. It is all about collaborative endeavor, which adds an additional overhead to the work.

We have seen a productivity crisis in research where more effort is expended for less benefit as time goes on. The low-hanging fruit has been gathered, and the journey or harvesting (choose your metaphor) gets more difficult as we continue our work. As they say, we stand on the shoulders of giants. We can read previous research and learn from it. In essence we need to look back into prior data generated from research and use that to inform our work.

At the same time, we have the opportunity and the obligation to work with computational tools that can support us in this work. Computational tools can, in essence, automate the kinds of processes that require human thought. Various kinds of thought processes can be automated: logical reasoning, association, pattern recognition, prediction from past experiences and so on. The good news is that we have powerful tools. The bad news is that making use of these tools is not as easy as it first appears.
This is in part due to the paradox that what people find hard, computation can often accomplish easily; but what people think of as easy is hard for computation. Or, looked at another way, it is the challenge of representation. People are great at making representations of the world around them, so good that they often don't see that they are looking at a representation and not the world. Take maps as an example: they are a simplification of, or an analogy for, the world. Computation can only work on representations, not on the world directly. You could spend years reading over the philosophy of cognition and representation, but one key insight is to think of a representation as one view among many. A perspective, if you like. A key term used when people think about representation is the word 'ontology'. Ontology was originally used to describe just this: a perspective. What is your ontology for this, and what is their ontology for it? They will be different, because you see things slightly differently. Why did we digress into this discussion? Well, the point is that computation can be used to automate thought processes over representations, and the representations matter because they define a view on the world, likely one for which you have a hypothesis, since a theory or model has to be expressed in a representation. Now, what if you would like to gather other information with which to test your hypothesis? This information may have been gathered by other researchers who have a slightly different (but similar) world view. A similar, but slightly different, ontology. You might need to translate between them. First, however, you might wish to find all relevant data. That is the first letter of our acronym, FAIR. You can do this, as we say, manually. But the power of the digital age is that you might wish to set a computational tool to automate this cognitive activity.
So, ideally you want all information to be findable (usually based on relevant terms from your ontology, and perhaps other terms mapped to other ontologies as synonyms, so your computational agent can find those too). OK, so your computational agent has found some data. You might wish to get hold of it. To access it. This may be straightforward, or it may be complicated. Ideally, you would like your computational agent to be able to gain access automatically. This is the second letter of our acronym, FAIR. Now, once you have the data, you might find that it did indeed use a similar, but different, ontology as its representation. This could be a problem: you might not be able to get the new information aligned with your existing information, and so not be able to test your hypothesis. So, you need to identify the ontology used to encode this information, get hold of a copy of that ontology, and perhaps a mapping between that ontology and the one that you are using, and then run the mapping and normalise the data so that it can be analysed as one homogeneous data set. That is the I, for interoperable. Assuming you are successful, you might wish to record your analysis, the method, results and conclusions, along with the data you used, and perhaps a copy of the algorithm you used in your method. In future, someone else might wish to revisit your work and extend it. And here is the last letter of the acronym: R, for reuse. If you have captured all of the above in such a way that the data can be found, accessed and interoperated with, using automated methods, then it can be reused, and the virtuous cycle of enabling productive research can continue. So, that covers what FAIR data is, and what purpose this concept and its implementation fulfil. Read more tomorrow.
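To make the interoperability step concrete, here is a minimal sketch of mapping terms between two similar-but-different ontologies and normalising records into one vocabulary. All term names and the mapping itself are hypothetical, invented purely for illustration:

```python
# Hypothetical mapping from another group's vocabulary to our own ontology's terms.
TERM_MAP = {
    "body_temp": "temperature",
    "subject_id": "patient_id",
}

def normalise(record, term_map):
    """Translate a record's keys into our ontology's terms; unknown keys pass through."""
    return {term_map.get(key, key): value for key, value in record.items()}

ours = {"patient_id": 1, "temperature": 37.2}
theirs = {"subject_id": 2, "body_temp": 36.8}

# After normalisation, both records share one vocabulary and can be
# analysed as a single homogeneous data set.
combined = [ours, normalise(theirs, TERM_MAP)]
assert all("temperature" in record for record in combined)
```

In practice the mapping would itself be published machine-readable metadata, so a computational agent could apply it automatically rather than a person writing the dictionary by hand.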
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337631.84/warc/CC-MAIN-20221005140739-20221005170739-00609.warc.gz
CC-MAIN-2022-40
5,538
18
https://docs.infinityvoid.io/infinity-void-metaverse/glossary
code
Have some questions? Check out some frequently asked questions! Virtual Land Parcels The finite, traversable, 3D virtual city within Infinity Void is called Vaikuntha City (meaning: abode of the divine). Vaikuntha City is divided into sectors consisting of virtual land parcels identified by unique plot numbers. Users can purchase virtual land from the Infinity Void official website or from secondary NFT marketplaces. The unique ownership of the virtual land is transferred from the seller (Infinity Void or another user) to the buyer via a process called minting (creation) of a non-fungible token stored on the Ethereum blockchain. If the virtual land was previously owned by another user, it is already minted and will not be created again; instead, the existing non-fungible token is transferred to the new owner. In both cases, the buyer of the virtual land gets full control over this land within the virtual world, to build on and use or to monetize. Each virtual land parcel in Vaikuntha City is accessible to users via roads or teleport functionality. The 3D virtual city consists of all basic infrastructure like roads, landscapes, water bodies, etc. Unique names are ENS subdomains minted on the Ethereum blockchain. ENS subdomains allow you to set a unique name for your user account avatar. You can mint your personal ENS subdomain (unique name) via https://mint.infinityvoid.io/ensmint/. Users that don't mint a unique name will get a name represented by their name with a unique number, for example "name#0000" (similar to Discord). Users who mint a unique name will be given that name. For example, if the user's name is Daniel, with ENS this will be "Daniel", while without it will be "Daniel#unique number".
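The naming rule described above can be sketched as a small function. This is purely illustrative; the function name, field names and number format are assumptions, not Infinity Void's actual implementation:

```python
def display_name(name: str, unique_number: str, has_ens_name: bool) -> str:
    """Return the avatar label shown in-world.

    Users with a minted ENS subdomain show just their unique name;
    everyone else gets a Discord-style "name#0000" tag.
    """
    if has_ens_name:
        return name                        # e.g. "Daniel"
    return f"{name}#{unique_number}"       # e.g. "Daniel#0042"

assert display_name("Daniel", "0042", has_ens_name=True) == "Daniel"
assert display_name("Daniel", "0042", has_ens_name=False) == "Daniel#0042"
```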
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474746.1/warc/CC-MAIN-20240228211701-20240229001701-00151.warc.gz
CC-MAIN-2024-10
1,739
6
https://windowsphone.stackexchange.com/questions/5780/resolving-problems-with-backup
code
For about 2 months now, it appears that backup has been failing to back up correctly. If I kick off a manual backup, the progress works up to 98% and then fails with the message "There was a problem backing up your settings. Try again later.": Is there something I can do to resolve this that won't cause me to lose my settings? I have been trying from a solid WiFi connection, so I don't believe it to be a temporary issue.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363405.77/warc/CC-MAIN-20211207170825-20211207200825-00419.warc.gz
CC-MAIN-2021-49
424
2
https://linuxreviews.org/Exaile
code
Exaile is a database-oriented music player with smart playlists and some advanced tagging features. It supports browsing music indexed into its database by artist, album, genre, date and combinations of those, but not by folder or folder tree. Both browsing a music collection and dragging songs or albums from it to the playlist are really slow, notably so if the collection is large. Exaile has also got a radio browser with a large number of stations pre-configured. Exaile is fine for browsing radio stations, and that's about it. Its interface is unacceptably unresponsive and too sluggish to use as a general music player. Features and usability Exaile is in principle a fine music player with most of the more advanced features database-driven music players tend to have. Indexing the collection and storing it in Exaile's database upon first launch is very slow and takes hours if the music collection is large. This is pretty normal and to be expected. Actually using the music collection database from Exaile's "Collection" tab is also very slow, and that's not typical and not at all acceptable. Switching from one kind of view in its collection tab to another is painfully slow. Clicking, say, Album makes Exaile freeze for half a minute until a list of albums - or random songs, depending on how well your collection is tagged - appears. Dragging songs from the collection view to the playlist is also painfully slow. Drag an album over and nothing happens for 20 seconds. Tracks are eventually added, and it's entirely possible to make playlists if you are very patient. Playlists can be managed in tabs, which makes it easy to have several open and quickly switch between them. That doesn't really help when it's impossible to quickly add songs to the playlist. Drag one folder and it's frozen for ages, and you have to wait patiently until you can add another folder.
The user interface has a completely different problem which makes Exaile painful to use in a very different way: the default icon sizes for the play/stop and back/forward buttons, as well as the file navigation buttons in its "Files" view, are set to really small sizes which are extremely tiny on modern HiDPI monitors. These buttons can of course not be resized, since Exaile is made with GTK 3 and the GTK developers decided that being able to resize icons made GTK applications too usable for people with poor eyesight and HiDPI monitors, so they removed that option in typical GTK/GNOME fashion. Exaile's button icons are perhaps totally usable on a big old monitor with a low resolution like 1280x768. They are simply way too small on a modern display. Verdict and conclusion Dragging files from the "Files" view is responsive and fine, and it can be used for playlist management if you are fine with using that view and are able to navigate it using the extremely tiny buttons. Everything related to the music collection index feature is just too horribly slow. It is nowhere near as responsive as a music player should be. This may have to do with Exaile being a Python program; those do tend to be really slow and laggy. PyChess is another example of a Python program which could have been great but isn't, because the graphical interface acts like it is stuck in glue. Regardless of why Exaile needs to spend 20 seconds doing something that should take less than one: it absolutely ruins the experience. That, combined with the tiny icons, makes it a non-alternative. It's just bad at being a database-oriented music player. Audacious is a better choice if you just want to add files from folders to a playlist. The Music Player Daemon combined with a front-end like Cantata is a much better choice if you want to index a music collection and use those kinds of features.
Exaile's website is at https://www.exaile.org/. The link to "Download 4.0.0 Source" in the "Download" section on that site is broken. The link to the Windows version does work, and there are also instructions for acquiring the source from GitHub.
Stand-alone music players:
| Program | Rating | Framework | Music collection database |
| Audacious | | Qt5 or GTK2 | |
Music Player Daemon clients: mpd is a database-oriented music player daemon which can be controlled by numerous front-end programs.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00566.warc.gz
CC-MAIN-2022-05
4,276
16
https://www.1plus1.io/about
code
The Double Diamond process Every event starts with understanding where your company is at and exploring the universe of possibility. We begin by asking the right questions. We help you ask the questions that are not being asked. This is ideation. Then we begin to narrow the problem down. We help you build your story, from the bigger picture to the local context and tailored solution. This is where we help to define the problem. Now we build. Not the full monty at first, but a few early versions. We work with you to explore potential solutions and test them against our assumptions. This may be a series of designs and options whose feasibility we then explore. This is prototyping. Finally, we test, test, test and execute. On time, on budget and with a seamless experience. This is the magic.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573105.1/warc/CC-MAIN-20190917181046-20190917203046-00165.warc.gz
CC-MAIN-2019-39
800
5
https://www.irit.fr/~Sylvie.Doutre/
code
I am an associate professor (Maître de conférences in French) in computer science at the University of Toulouse 1 in France, member of the IRIT group Logic, Interaction, Language and Computation (LILaC). My research interests mainly concern argumentation-based reasoning, on its computational and modelling aspects. I am currently part of the AMANDE project. See research, projects and publications to know more. I teach databases, web design, algorithmics and programming, in the faculties of Economics, Management, and Computer Science. See teaching to know more. I am a member of the University of Toulouse 1 Research Committee, and of the Faculty of Computer Science Council. See admin to know more about administrative and other responsibilities. I have been in the current position since 2005. See CV to know more about my past positions.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424559.25/warc/CC-MAIN-20170723122722-20170723142722-00382.warc.gz
CC-MAIN-2017-30
846
5
http://spinnerbox.blogspot.com/2013/06/notes-on-building-shooter-game.html
code
- So far I have spent a lot of time removing, but also adding, stuff to my game in order to improve the gameplay. Gameplay is the most important thing for every game, so if the gameplay is not good then the game will be a failure. - For building a shooter game you don't have to go into as much detail as I did. I wanted to add real-world conditions like gravity, hitting the ground and movement constraints, but it turned out that so many constraints are annoying for the player and make him not want to play the game. Shooter games are built upon the fictive capabilities of the main Hero, so you can release your imagination the way you want, as much as you want. Also don't forget that if the game has some story, like mine does, you always have to concentrate on gameplay first and bend the story according to the gameplay. Not vice versa. If you plan to add some story/RPG elements, make sure you do not bore the player with too much storytelling. - I had to change some of the game concepts. I had a concept of collecting red scrap from the ships. People are not used to collecting weird things and they get confused. They will recognize coin- or money-like items, so make sure you add items that look like coins, money banknotes or maybe even gold/silver/bronze pieces. You can release your imagination, but again, make collectible items something familiar. In my game I changed red scrap to science points; they look like coins. Now all monsters give you 1, 2 or 3 science points. So while you play the game you collect science points, and with those science points you buy weapon upgrades. - A shooter game should not be a juicer :) Don't destroy (make juice of) the player with shooting or moving monsters that hit him all the time. A shooter game should have 6 to 8 types of monsters/minions/soldiers, all combined together to make the gameplay. Each of them should have its own pattern of movement/shooting, so that after a while the player can recognize what to expect.
Avoid fast or repetitive units; that annoys the player. Hence in shooter games most of the development time should be spent on how and when to move the minions around to produce an enjoyable, story-like experience for the player. You can have moving minions, minions that move and shoot, and minions that move towards the Hero. You can also vary that movement, using straight lines or sinusoidal paths. You can spawn several minions at a time in formation, at 45/135 degrees, or maybe in a v-shape or u-shape like my game has. You can spawn units based on time or on kills left in each level. - Add upgrades: upgrades to the weapon, shields and time of regeneration, plus in-game short-time upgrades. Add a secret weapon or secret reinforcements that will aid the Hero in desperate times :) - You may like to structure your game in levels/areas and lead the player part by part into the unknown, acquainting the player with new things/monsters through the game. That's all for now; I think I will get many other thoughts later. So tell me your thoughts and play my game :)
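As a sketch of the movement patterns mentioned above, here are a sinusoidal flight path and a v-shaped spawn formation. The speeds, amplitudes and spacings are made-up example values, not taken from the game:

```python
import math

def sinusoidal_path(t, speed=2.0, amplitude=40.0, wavelength=100.0):
    """Position of a minion flying left in a sine wave at time t."""
    x = -speed * t
    y = amplitude * math.sin(2 * math.pi * x / wavelength)
    return x, y

def v_formation(leader_x, leader_y, count, spacing=30.0):
    """Spawn points for a v-shape: a leader plus alternating wing positions behind it."""
    points = [(leader_x, leader_y)]
    for i in range(1, count):
        wing = -1 if i % 2 else 1      # alternate sides of the v
        rank = (i + 1) // 2            # distance behind the leader
        points.append((leader_x - rank * spacing,
                       leader_y + wing * rank * spacing))
    return points

# Five minions in a v: one leader plus two on each wing.
assert len(v_formation(0, 0, 5)) == 5
```

Each minion type would get its own path function, so the player can learn the pattern and know what to expect.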
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863463.3/warc/CC-MAIN-20180620050428-20180620070428-00616.warc.gz
CC-MAIN-2018-26
3,044
9
http://kyantonius.com/category/foss/
code
It always feels good when someone comes to you and says that he actually finds your work useful and likes it. It's not because you ask for the appreciation; it's more that you realize that what you do has contributed something valuable to others. That is what I am going to share with you right now: a SLAMPP showcase written by a SLAMPP user. Of course, through this entry I would like to encourage other users to share their own stories about what they have done with SLAMPP and what they think about it. I will publish your stories here. This showcase was actually written and submitted to me last year by Brian Papile from Texas. Although he had already given me permission, unfortunately I was not able to publish it in a timely manner. My apologies for that. FYI, Brian used SLAMPP as training material for his web application security class. I was absolutely amazed that he could master and customize SLAMPP within a short period of time and then finally share the work with his class. Well done! Please read his full story by downloading this PDF file. I hope it will inspire other users to do the same. As a last note, if you are interested in getting a copy of his work, please contact me. Thanks, all, for your contributions to SLAMPP. Much appreciated. Now I will focus on the next release, which has been delayed for a while. Note: This is a reposted article. You can read the original entry over here. In the last few months I've gathered some links that may pique your interest. Most of them are IT and FOSS related, and only one is from the Oil and Gas sector. Yes, it is sometimes hard to share Oil and Gas resources because of the closed and competitive nature of the business and copyrights/trademarks or other Intellectual Property Rights issues. And I don't want to get in trouble by doing that, for sure. I hope you will get some benefit from the following links.
IT and FOSS MultiCD – multicd.sh is a shell script designed to build a multiboot CD image containing many different Linux distributions and/or utilities. Sakis3G – a tweaked shell script which is supposed to work out-of-the-box for establishing a 3G connection with any combination of modem or operator. It automagically sets up your USB or Bluetooth™ modem, and may even detect operator settings. Passing on interesting news for all Python/Django developers out there. — Start of message — Are you a passionate Python programmer with experience using the Django web framework? Do you have a keen eye for detail, love writing clean and well documented code, wield Git commands like an orchestral conductor, and possess a strong desire to build web applications that are going to make the internet an even more beautiful place to interact in? If you answered “yes” to all of the above, then contact us today with your resume and a short cover letter outlining the reason for your unrequited love of Python, your marriage to Django, and any other language/framework mistresses that you simply can’t live without! How to Apply Please send your cover letter, resume, and any accompanying material to: hr [at] conceptuous [dot] com To make our life that little bit easier, please put the position title in the subject line of your email. Thank you. I just would like to break the silence in these first months of the year. Guys, this is my first entry in 2010 and I am getting excited about delivering many ‘interesting and useful’ entries this year. We’ll see! Just a quick announcement from me. We released SLAMPP 2.0.2 yesterday. This is a maintenance release of the current 2.0.x tree with some fixes and new documentation. Please get your copy here at http://slampp.abangadek.com. All credits go to Clinton Tinsley for making this happen. Thanks my friend! More details will be coming soon.
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163996875/warc/CC-MAIN-20131204133316-00041-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
3,792
22
https://learningsystems.vcu.edu/canvas/mobile/
code
Canvas is built on open web standards and uses minimal instances of Flash, so most features are supported on mobile devices. With the growing use of mobile devices, you should build your courses with mobile best practices in mind. You can access Canvas from any browser on your Android/iOS device. However, mobile browsers are not supported, and features may not function as expected compared to viewing Canvas in a fully supported desktop browser. On mobile devices, Canvas is designed to be used within the Canvas mobile applications. Canvas pages within a mobile browser are only supported when an action in the app links directly to the browser, such as when a student takes certain types of quizzes. Support is not extended to pages that cannot currently be used in the app, such as Conferences or Collaborations. Additionally, Canvas offers limited support for native mobile browsers on tablet devices. For details, please reference the limited-support mobile browser guidelines. When searching under "Find my school", there are two types of login options.
- There is a Virginia Commonwealth University login. This requires users to use their VCU eID and password. This is the recommended option for all credit-bearing courses offered via Canvas.
- There is a Virginia Commonwealth University Alternate Login (e.g. for Canvas Catalog users). This is used for non-credit courses facilitated in Canvas, and for Canvas Catalog courses.
Important Mobile Tip
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100762.64/warc/CC-MAIN-20231208144732-20231208174732-00021.warc.gz
CC-MAIN-2023-50
1,451
8
http://randomthoughtsinwriting.blogspot.com/2010/12/there-is-geek-in-me.html
code
I have always believed I am a soft geek. I have been excelling in school most of the time, and I often hate failure. The same attitude goes for work. I want to EXCEL; that's why I'm giving and doing my best. So what better way to show the world than to strut out the geeky look? Hahaha!!! Not my style, and I look terrible in those glasses!
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267866358.52/warc/CC-MAIN-20180624044127-20180624064127-00503.warc.gz
CC-MAIN-2018-26
331
3
https://techies-world.com/how-to-create-local-yum-repository-in-centos/
code
The Yellowdog Updater, Modified (YUM) is an open-source RPM package management tool available for Red Hat Linux and other Linux flavors. In earlier days it was very difficult to install packages on Linux systems, since they would ask for many dependencies. After yum's introduction it became very simple: yum has an algorithm to select all the dependencies automatically once you have configured the yum repository. This tutorial explains the steps to configure a local yum repository.
Step 1: Create the source folder.
Step 2: Copy the packages and dependencies to this folder.
Step 3: Change the location to the source folder.
Step 4: Configure the metadata.
Step 5: Change the location to the repo config folder.
Step 6: Create the repo file.
Step 7: Add the following details.
[LocalRepo] – name of the section
name = name of the repository
baseurl = location of the packages
enabled = enable the repository
gpgcheck = enable secure installation
gpgkey = location of the key
gpgcheck is optional (if you set gpgcheck=0, there is no need to mention gpgkey). That's all... Now we can install or upgrade packages using the yum command.
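The steps above can be sketched as a shell session. The paths are hypothetical examples; the `createrepo` command (Step 4) is shown commented out because it requires the createrepo package to be installed:

```shell
# Step 1: create the source folder (hypothetical path).
REPO_DIR="${TMPDIR:-/tmp}/localrepo"
mkdir -p "$REPO_DIR"

# Step 2: copy your .rpm packages and dependencies into it, e.g.:
# cp /media/cdrom/Packages/*.rpm "$REPO_DIR"/

# Steps 3-4: build the repository metadata (requires the createrepo package):
# cd "$REPO_DIR" && createrepo .

# Steps 5-7: create the repo file read by yum (normally /etc/yum.repos.d/,
# written to a temp path here for illustration).
cat > "${TMPDIR:-/tmp}/LocalRepo.repo" <<EOF
[LocalRepo]
name=Local Repository
baseurl=file://$REPO_DIR
enabled=1
gpgcheck=0
EOF

cat "${TMPDIR:-/tmp}/LocalRepo.repo"
```

With `gpgcheck=0` the `gpgkey` line is omitted, as noted above; after this, `yum install <package>` can resolve packages from the local folder.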
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00710.warc.gz
CC-MAIN-2024-18
1,093
17
https://www.tutorialandexample.com/what-is-transaction
code
A transaction is a collection of logically related operations which read and possibly update various data items in the database. Usually, a transaction is initiated by a user program written in a high-level DML language (SQL), or in a programming language with embedded database accesses via JDBC or ODBC. In other words, we can say that any logical computation done in a consistent mode in the database is called a transaction. The database must be in a consistent state both before and after every transaction. A transaction has the following properties to ensure the integrity of data: Simple Transaction Example Suppose a bank employee transfers money from X's account to Y's account. As the amount of Rs 100 gets transferred from X's account to Y's account, the following series of operations is performed behind the scenes. Simply put, the transaction involves the following operations:
- Open the account of X
- Read the old balance
- Deduct the amount of Rs 100 from X's account
- Save the new balance to X's account
- Open the account of Y
- Read the old balance
- Add the amount of Rs 100 to Y's account
- Save the new balance to Y's account
Open_Account(X)
Old_Balance = X.balance
New_Balance = Old_Balance - 100
X.balance = New_Balance
Close_Account(X)
Open_Account(Y)
Old_Balance = Y.balance
New_Balance = Old_Balance + 100
Y.balance = New_Balance
Close_Account(Y)
Operations of a transaction Following are the two main operations in a transaction:
- Read operation
- Write operation
Read operation: this operation transfers a data item from the database and stores it in a buffer in main memory. Write operation: this operation writes the updated data value back to the database from the buffer. For example: let Ti be a transaction that transfers Rs 50 from account X to account Y.
This transaction can be defined as:
Ti: read(X);     .......(1)
    X := X - 50; .......(2)
    write(X);    .......(3)
    read(Y);     .......(4)
    Y := Y + 50; .......(5)
    write(Y);    .......(6)
Assume that the value of both X and Y before the start of transaction Ti is 100.
- The first operation reads the value of X (100) from the database and stores it in a buffer. The second operation decreases the value of X by 50, so the buffer will contain 50. The third operation writes the value of the buffer to the database, so the final value of X will be 50.
- The fourth operation reads the value of Y (100) from the database and stores it in a buffer. The fifth operation increases the value of Y by 50, so the buffer will contain 150. The sixth operation writes the value of the buffer to the database, so the final value of Y will be 150.
However, due to a hardware or software failure, the transaction may fail before completing all the operations in the set. For example: if the above transaction fails after executing operation 5, then Y's value will remain 100 in the database (the updated value of 150 is still only in the buffer), which is not acceptable to the bank. To overcome this problem, we have two important operations: Commit and Rollback. Commit operation: this operation is used to save the work done permanently in the database. Rollback operation: this operation is used to undo the work done.
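The commit/rollback behaviour described above can be simulated in plain Python. This is an illustrative sketch, not real DBMS code: a dict stands in for the database, and rollback restores a snapshot taken when the transaction began.

```python
db = {"X": 100, "Y": 100}

def transfer(db, src, dst, amount):
    """Transfer amount from src to dst, committing on success, rolling back on failure."""
    snapshot = dict(db)            # begin: remember the old state
    try:
        db[src] -= amount          # read(X); X := X - amount; write(X)
        if db[src] < 0:
            raise ValueError("insufficient funds")
        db[dst] += amount          # read(Y); Y := Y + amount; write(Y)
        return True                # commit: keep the new values permanently
    except Exception:
        db.clear()
        db.update(snapshot)        # rollback: undo the work done
        return False

assert transfer(db, "X", "Y", 50) and db == {"X": 50, "Y": 150}
# A failing transfer leaves the database exactly as it was.
assert not transfer(db, "X", "Y", 999) and db == {"X": 50, "Y": 150}
```

The key point mirrors the Rs 50 example: a transaction either completes all six operations (commit) or none of them take lasting effect (rollback).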
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104690785.95/warc/CC-MAIN-20220707093848-20220707123848-00192.warc.gz
CC-MAIN-2022-27
3,196
35
http://stackoverflow.com/questions/13531975/uicollectionview-with-variable-cell-sizes
code
I'm struggling a little bit with the size of cells in UICollectionView. In Android, you can easily "wrap" the size of the cell. Just like in iOS, you have a function called 'GetCell' and you decide how big the cell will be. The difference in iOS is that in the 'GetCell' function (of UICollectionViewController) it seems you can't choose the size of the cell (or the content view). If I change the size, it is ignored and the general 'ItemSize' of the CollectionView (which is the same for all cells) is used anyway. This sometimes results in views which are not very beautiful. For example, if I have a horizontal list with images, I want the distance between images to be the same, regardless of whether one image is 200x200 and the other 400x200. So the cell size should be different too. It is possible to define a different size for different cells: you can use the collection view delegate and the GetSizeForItem (= sizeForItemAtIndexPath in Obj-C) function. The problem is that this function is called BEFORE the actual GetCell function. So suppose I have a more complex cell, with for example some labels. In my 'GetCell' function I build this cell, and at the end, when returning the cell, I know which size it should be. However, in the GetSizeForItem function that info is not available yet, because the cell is still null. The only way I could do it is to actually build the UIView for the cell (so I can request the size) at the moment of the 'GetSizeForItem' call. But this doesn't seem like a good design, because I'm building the UIView before the 'GetCell' where I will build it again. Is there something I'm overlooking?
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257826759.85/warc/CC-MAIN-20160723071026-00275-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
1,612
7
http://slashdot.org/~Billly+Gates/
code
C is like a powerful table saw. Don't practice safety and know what you are doing and you lose a limb. Powerful but not all should play with one. Because C is cool and Java and flash suck. Get with the program Gee a little insecure are we? News flash software is software and has bugs. It doesn't matter which license it is under. It still is software and no being from Microsoft doesn't make it insecure by default anymore than being GNU makes it more secure. Yes Apple, Google, and Microsoft are mentioned here when a serious flaw is discovered. Why should Linux or anything GNU get a free pass? I do not know if I can adjust hairy. I love Aero and the familiarity of Windows 7. I know XP users hate 7 because they too like the feel and know where everything is at for the last decade so why change? What I worry about is can I still do instantsearch or will Cortana pop up and Bing power options instead of just opening power options to set my sleep mode? I hated this with 8 with a passion. I need my aero too. Windows 7 is exactly what I tried to do with Vista via hacks like VistaGlaze with all the colors. I want customization. After 8.1 I am afraid of change. I am downloading 10 as an upgrade on the 7 box I am using now but I have a feeling by tomorrow night I will be putting 7 back on. 8.1 had a tendency to crash on youtube regardless if the video card was an ATI or Nvidia. Agreed. Thanks to virtualization I can still run Linux or FreeBSD as a VM. I tend to use turnkey Linux these days as these are appliances I just turn on and I have a modified HOST file with the IP addresses of my VMs from VMware Workstation. Windows as a host is stable, has office, Visual Studio, adobe photoshop, and of course my video games and cloud storage tools. Netscape was worse than IE 6 due to even more bugs and CSS work arounds if you can believe that. Firefox == netscape rendering engine whether it is cleaned up or not. 
Therefore IE 11 is better than Firefox You sir are in luck as Aero is returning I know people who went back to school at 45 and found jobs as software developers before even graduating! Do not tell me this as the 2002 recession ended. Wow buy a new one if the old one works just fine and is better (no cell phone on the desktop)? True as a gamer an old system is pretty useless unless you run older games from the era from which it came. XP is still insanely popular and I will bet you money in 2020 when Windows 7 goes EOL XP will still have a market share of active computer users. Small but still significant enough to be counted. It works and users like it because it is familiar to them. I still see no reason to leave Windows 7. I just wonder what it will break now? Windows 8.1 BSOD on both ATI and Nvidia cards in youtube which I find very strange. No problem at all in Windows 7. DirectX 12 adds more complexity of things it can fuck up. Oh and my Asus board probably won't have drivers as it is in their financial interests for me to throw it all away and buy another one right? I had a roommate who stayed on XP for a long long time as he thought Windows 7 was a buggy POS. It always BSOD because Dell wanted him to throw it away and buy a new one rather than release a driver that went through QA? Just my gripe. I find that very hard to believe as driver support for XP has gone away since 2011 except for a few business oriented systems and now even they do not have XP drivers. This must have been used. Wii has open hardware access to the primitives where developers write their own direct access according to a past story here. PS is the same. Console makers hate OpenGL as it encourages portability to other consoles They use neither. DirectX is on the Xbox so the PC gets a lousy Xbox port. OpenGL has not been used in games for a long time now. Exception is Wow for the Mac Wow right back in 2001 again. John Carmack conceded DirectX 9 is superior.
I can't think of a single game today that uses OpenGL. Why is that? You must live in an alternative universe, as developers prefer to develop for consoles first with DirectX, then back-port to PC if piracy concerns are not too bad.

In 2015 we choose languages based on rich sets of APIs. Java, for example, is almost universally hated for its syntax, yet is insanely popular. Why? 150,000 methods to choose from and frameworks galore. No one cares about language features, as it's not 1982 anymore, where you write your own libraries. Today you have a task and a tight deadline, and there is no time to program; only time to grab a framework and tinker with it.

Chrome uses WebKit-specific CSS and not W3C. It claims it supports more standards, but like IE 6 it is not standardized. It is like IE 4 in many ways, as websites need to use WebKit-specific hacks.
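As a sketch of what the poster means by a WebKit-specific hack (the property and selector here are just illustrations), sites of that era often had to write the engine-specific syntax alongside the standard W3C form:

```
/* Old WebKit-only gradient syntax, kept next to the standard rule
   so both WebKit and standards-compliant engines render it. */
.banner {
  background: -webkit-gradient(linear, left top, left bottom, from(#fff), to(#ccc)); /* legacy WebKit */
  background: -webkit-linear-gradient(top, #fff, #ccc);  /* prefixed form */
  background: linear-gradient(to bottom, #fff, #ccc);    /* W3C standard form */
}
```

Engines ignore declarations they don't understand, so the last form each engine recognizes wins — which is exactly the IE-era pattern of per-browser CSS the comment is complaining about.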
http://www.dailymotion.com/video/xo1n4f
Date: Thursday 19th January 2012
Speaker: Rafael Torres (University of Oxford).
Title: Constructions of generalized complex structures in dimension four.
Abstract: Recent constructions of exotic smooth structures on small 4-manifolds can be canonically used to expand our understanding of generalized complex structures. This talk will be an exposition of such an enterprise, whose produce includes the unbosoming of unexpected phenomena on the number of type change loci of a generalized complex structure, and their construction on a myriad of 4-manifolds.
https://forum.obsidian.md/t/centering-text-vertically/48017
Hi - new to Obsidian, but I’ve searched high and wide, and have tried a million CSS things, including vertical-align: middle. I’m sure I’m missing something obvious. Even on the default template, with no snippets active, I see this behavior, and can’t get my text to vertically align. Could someone please point me in the right direction: In the picture, the maroon color is from highlighting the text in the line. As you see, the gap from the tallest characters to the top of the line is larger than the space at the bottom of the line. Thank you in advance!
https://classes.engineering.wustl.edu/ese205/core/index.php?title=Wash_U_Doing_Weekly_Log&oldid=3356
Wash U Doing Weekly Log Week January 27th In the first week, we formed our group of three members: Kristine Ehlinger, Carol Pazos, and Dylan Zubata. We decided to focus our project on creating an app that optimizes the walking distance across Wash U's campus. Week February 3rd This week we discussed the details of our project as we created our first Project Proposal. We met with our TA on Thursday and made various decisions about our project including: dividing the work, the programming language we plan to use, the resources we need for our project, and a project timeline. Week February 10th This week we began familiarizing ourselves with the ArcGIS tool and contacted the GIS department to discuss full features of ArcGIS. We also mapped out all of the destinations we intend to include in the app. Week February 17th After discussing with WUSTL employees, our group is waiting to be granted permission to use the WUSTL ArcGIS subscription. We also continued to annotate our map of Wash U's campus on ArcGIS and did further research into the optimization of our pathways. Specifically, we found an archive that describes how to "Find a Route," which can be utilized with our map. Week February 24th Permission has been granted to use WUSTL ArcGIS subscription. ArcGIS's AppStudio Pro has been downloaded to assist in incorporating our map into our app. We're currently trying to figure out the basics of AppStudio while other team members continue working on the map.
http://www.droidforums.net/threads/having-trouble-sending-pictures.27630/
For some reason my Droid won't send pictures consistently to email. I go in, multi-select a few photos, and send them no problem. But then when I come back and try it again, no go. It's as if after sending them, they get lost in some kind of void. I've tried sending them only a couple at a time or even one at a time, but nothing works. The newer pictures never make it to my email. What's going on?
https://tech.forums.softwareag.com/t/concept-of-reusable-and-ron-reusable-load-modules/94722
A load module can be marked non-reusable, reusable and re-entrant. To simplify somewhat: A non-reusable module can only be loaded for one task - the code is assumed to be self-modifying (data and code are in the same load module). A reusable ("serially reusable") module can be used by a second task when the first is finished with it (generally the module will acquire its own data space dynamically). A re-entrant module can be used by multiple tasks at the same time - data areas are provided by the calling routines. Shared routines (e.g. ADAIOR) will generally need to be re-entrant and reusable so multiple threads can share the same copy of the load module in memory at the same time. Client applications (ADARUN) are usually not re-entrant and often not re-usable - a fresh, non-shared copy of the module is loaded into each task's memory.
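The distinction is easier to see in code. The sketch below is a C analogy (not mainframe assembler; the function names are invented for illustration) contrasting a serially-reusable routine that keeps its working storage inside the module with a re-entrant one whose data areas come from the caller:

```c
#include <stdio.h>

/* Serially-reusable style: working storage lives inside the "load module".
 * A second caller clobbers the first caller's result, so only one task
 * may be using the routine at a time. */
static char shared_buf[32];

const char *format_serial(int n) {
    snprintf(shared_buf, sizeof shared_buf, "item-%d", n);
    return shared_buf;            /* points into module-owned storage */
}

/* Re-entrant style: all data areas are provided by the calling routine,
 * so any number of tasks can execute this code simultaneously. */
const char *format_reentrant(int n, char *buf, size_t len) {
    snprintf(buf, len, "item-%d", n);
    return buf;                   /* points into caller-owned storage */
}
```

Calling `format_serial(1)` and then `format_serial(2)` leaves both returned pointers looking at "item-2" — exactly why a shared routine has to be re-entrant before multiple threads can share one copy of it.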
https://www.peopleperhour.com/hourlie/develop-wordpress-web-site-with-4-pages-business-website-multipurpose/346745
What you get with this Hourlie

I can do any installation of the WordPress CMS for you. This Hourlie includes:
1. Installation of WordPress
2. Database configuration and connection
3. Making 4 pages with your given content

This is purely installation with 4 pages; no theme customisation is offered in this package.

What the Seller needs to start the work

I need web hosting control panel or FTP access to the website document root, and access to the database management software. It would be great if you provide me the theme so I can install it properly. You are requested to provide content for the pages.
http://www.webassist.com/forums/post.php?pid=19883
Usually this is the OrderReference ID column in the Orders table. The string looks like the session ID. Double-check that the OrderReferenceID column is set to text in the DB. Then, in the Store Order Summary server behavior, make sure the data type for the order reference ID column is set to text as well. If you still have a problem, double-check in the Store Order Summary and Store Order Detail server behaviors for any column you are storing the session ID in.
http://www.gekozone.co.uk/PetTurtles/eastern-fence-lizard-care
Eastern fence lizard care

Sure! I actually had one when I was a kid. Took terrible care of it. :I But I am an adult now with much better knowledge!

Fence Lizards (Sceloporus sp.) are small terrestrial lizards native to much of the US. Their care is fairly simple, though they are not big on being handled. A 10gal is suitable for a juvenile, but an adult should be in a 20gal (24” x 12” x 16”) or 20 long (30” x 12” x 12”) for a pair. Substrate should be newspaper or paper towel during quarantine, but for the furnished vivarium wood mulch is best. A mix of coco coir/wood mulch is also acceptable. A screen lid is a must for ventilation, but if it’s not built in, make sure to use clips on ALL sides of the tank. Avoid sand, or at least feeding on sand, because it may cause impaction. NEVER use crushed walnut or calcium sand.

Add in several pieces of slate rock to bask on and hide under. Build a small cave out of it, or purchase a commercial hide. Cork bark can also be used and looks nice and natural. However, you should put at least one piece of rock under the heat lamp for them to warm up on. Fake plants are a must to give the lizard a feeling of safety. Nothing dense like a Phelsuma viv, but at least a few things to hide in. A small water dish is also important. Males will fight; generally they should be kept alone, but can sometimes be kept in pairs.

These guys are insectivores, taking small-medium crickets with ease. Always dust crickets with calcium powder to offset their phosphorus. Other foods include small roaches (a good staple), small mealworms (for adults ONLY), waxworms, and phoenix worms. Daily feeding is normal. Adults will eat about 6 crickets a day. Pretty much just feed them as much as they’ll eat in one sitting.

Heating and Lighting
Standard basking heat lamp and UVB. Their basking areas should be 85-90F (slightly cooler, 80-85, for easterns) and a UVB light is a MUST. Be sure to mount it inside the cage. A ZooMed Reptisun 5.0 is acceptable. Cool side can reach the mid to low 70s safely.

Humidity
Generally insignificant to these adaptable lizards, though they like it relatively dry. 40-50% is fine.
http://www.go4expert.com/forums/e-cracker-email-cracker-apparently-post40585/
An e-mail cracker? LOL, are you serious? The way you're describing this retarded program, it's basically just a brute-forcing tool. Brute forcing is super old and hardly even works nowadays. The best and fastest way I can think of is just sending him a keylogger / R.A.T; either one works. But to me it's as if you're advertising this site, so I took out the link. shabbir can add it back; it's up to his discretion.
https://groups.yahoo.com/neo/groups/json/conversations/topics/1537?unwrap=1&var=1&l=1
new libJSON makefiles - A lot of people have asked me for makefiles instead of the Code::Blocks project files, so I wrote makefiles for each platform. They should be good, but I didn't try to make on Windows. If someone has a PC, maybe try to make it to be sure it's correct? :) Also, how do I get libJSON added to the list of libraries on http://www.json.org/? I asked a while ago, when libJSON wasn't complete, but it's been fully compliant for a while.
https://www.ncasi.org/resource/cpat/
Climate Projection Analysis Tool (CPAT) NCASI developed the Climate Projection Analysis Tool (CPAT) to enable organizations to summarize climate projections for locations of interest in the conterminous United States. The tool is based on current (and recent past) climate measurements from the PRISM dataset (covering 1980 to 2019, interpolated from weather station data), and downscaled climate projections from the Coupled Model Intercomparison Project (CMIP-5). NCASI Member Company personnel may request access to the tool. More information is available here (website login required). The first version of CPAT was released in 2022, and CPAT 2.0 was released in 2023. Click here to access the CPAT 2.0 demo app (available to the public). This demo app was created for NCASI through a collaboration of the Harvard University Department of Statistics and Reed College in the Undergraduate Forestry Data Science Lab (UFDS). Full access to CPAT is available to NCASI Member Company staff (website login required).
https://developer.jboss.org/thread/263810
I have successfully built the HATimerService into my application; it starts itself and all the subsequent timers, and everything works correctly. However, it is not able to stop without errors: while the context gets stopped, it raises a ComponentUnavailable or a NameNotFound exception while looking up the timers. I'm using the same JNDI path used for starting it (global/earName/earModule/ejb!interface). I checked with the admin console and found that the only contexts available at that moment are:

JNDI = / ConnectionFactory
JNDI = / JmsXA
JNDI = / Mail
JNDI = / TransactionManager
JNDI = / queue
JNDI = / ejb
JNDI = / jms
JNDI = / jboss

but there is no java:global context available. Given that, during the stop of the first timer it raises:

EJBComponentUnavailableException: JBAS014559: Invocation cannot proceed as component is shutting down

and during the stop of the second:

javax.naming.NameNotFoundException: Error looking up global/earName/....

Why is java:/global unavailable inside the HATimerService.stop() method? Thanks in advance.
https://www.smore.com/0pad9
Don't forget: the library.... - Has iPads, Kindles (Kindle, Kindle Fire) and a Nook. Come check them out! - Works with classes and tutorials on the research process, picking great reads, and creating lesson plans - Can help with works cited, in-text citation, copyright, and primary/secondary sources
https://ascoronavirus.com/coronavirus-in-depth-update-schools-open-recession-contact-tracing-nine-news-australia_c85c0bddd.html
Coronavirus: Description video Watch the video “Coronavirus: In-depth update: Schools open, recession, contact-tracing | Nine News Australia” and like it! Scott Morrison is asking parents to send their children back to school as the second term for 2020 begins. Subscribe: https://bit.ly/2noaGhv Get more breaking news at: https://bit.ly/2nobVgF Join Nine News for the latest in news and events that affect you in your local city, as well as news from across Australia and the world. Follow Nine News on Facebook: https://www.facebook.com/9News/ Follow Nine News on Twitter: https://twitter.com/9NewsAUS Follow Nine News on Instagram: https://www.instagram.com/9news/ #Coronavirus #9News #BreakingNews #NineNewsAustralia #9NewsAUS Liked the “Coronavirus: In-depth update: Schools open, recession, contact-tracing | Nine News Australia” video? Share it with your friends!
http://darrendev.blogspot.com/2009/01/linux-shell-two-directories-quick.html
Sometimes I have to work in two directories from the commandline. For instance to check a log file got created, then back to the directory where the command I am testing is run from. The quick tip is simply "cd -". This takes you back to the previous directory. If you type it again it takes you back to where you were, acting as a nice toggle. "cd" with no parameters takes you to your home directory. "cd ~/somewhere" takes you directly to a directory relative to your home directory (i.e. tilde expands to your home directory). I had a hunt around but couldn't find any other special characters to use with "cd", but if you know one please let me know. (This tip was found in Linux Magazine, June 2008)
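A quick demo of the toggle (the directory names here are made up for illustration):

```shell
mkdir -p /tmp/demo/app /tmp/demo/logs
cd /tmp/demo/app         # run the command under test from here
cd /tmp/demo/logs        # hop over to check the log file got created
cd -                     # back to /tmp/demo/app ("cd -" also echoes it)
cd -                     # toggles again: /tmp/demo/logs
echo ~                   # tilde expands to your home directory
cd                       # bare "cd": straight to your home directory
```

Under the hood, `cd -` is shorthand for `cd "$OLDPWD"`, which is why it works as a two-directory toggle.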
https://forgis.ru/professional-dating-website-design-651.html
You open up your consideration set by letting online dating website scripts impress you, and could well strike upon a great deal in the form of a cool script. Some of the scripts are open-source, which ensures that you get access to resources like widgets and themes. The Speed Dating feature is a contemporary online dating method for adventurous users, and you can leverage text chat and video chat applications to provide the same to your website users. People look for dates everywhere – outside colleges, at cafes, in football games, and even online! Here, we help you understand how software and scripts can serve you by introducing you to the best ones on the market. Positioned as a simple yet sophisticated website builder for dating portals, SkaDate is a top-class solution for all your dating-oriented ideas, such as an out-and-out dating service website, a chat-based service, and what not. The website builder involves no tricky coding exercises for you, so you can enjoy your blog and web page creation experience. Once your website is populated with profiles, finding the right one among them will be easy for users, because of the profile search settings.
https://github.com/dotMorten/WinUIEx/
A set of extension methods and classes to fill some gaps in WinUI 3, mostly around windowing and unit testing. - Window Extension methods - HWND Window Handle Extensions methods - Window Manager - Splash Screens - OAuth Web Authentication - UI Test Tooling for easy UI Testing And more to come...
http://techat-jack.blogspot.com/2012/03/tips-turning-your-windows-vista-to.html
Two files you need. Download them to your computer.
- Universal Theme Patcher - http://soft3.wmzhe.com/download/deepxw/UniversalThemePatcher_20090409.zip
- Windows 7 Theme - http://www.deviantart.com/download/102269037/Windows_7_Style_For_Vista_by_giannisgx89.rar

Then:
- Run the Universal Theme Patcher; depending on your Windows version, run the -x64 (for a 64-bit OS) or -x86 (for a 32-bit OS) build.
- Press the 3 "Patch" buttons to patch the needed files. (You can always restore these files back.)
- Copy the Windows 7 Theme files to c:\windows\resources\themes. Remember, you should have a folder "Windows 7" and a file "Windows 7 (Windows Theme File)" under the ...\themes folder.
- Right-click on your desktop, choose Personalize and then Theme. Under Theme, browse to "c:\windows\resources\themes\Windows 7" (the file).
- Start and run Regedit, then change or add the registry key "HKEY_CURRENT_USER\Control Panel\Desktop\WindowsMetrics\MinWidth". Give it the value 56 (a REG_SZ string value).

Show Desktop Rectangular button
- The Show Desktop button at the bottom right corner of Windows 7 is a handy shortcut. To add this function to Vista, download this little tool to your computer and create a shortcut to it in program startup:

Of course, there are more ways to further customize your Windows Vista to make it more like 7...
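The registry change in the Regedit step can also be applied by saving the following as a .reg file and double-clicking it. This uses the key path exactly as given above; back up your registry before importing:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Control Panel\Desktop\WindowsMetrics]
"MinWidth"="56"
```

Importing the file creates the MinWidth string value with "56", the same result as editing it by hand in Regedit.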
http://www.sidefx.com/docs/houdini/nodes/sop/pop.html
This SOP uses an external POP network to run a particle simulation. It will import the state of the external POP network using the provided parameters. By connecting SOPs to the inputs of this SOP, one can override the Context Geometry which is used by the POP network.

To jump to the POP network, press on a POP Merge node tile and choose Edit POP Network. This also sets the View Operation parameters of the POP viewer to match those of the POP Merge node.

All POP networks can have 'Context Geometry'. These context geometries are set in the parameters of the POPNET, the Particle CHOP, or the Pop Merge SOP. In the Pop Merge SOP they can also be set by wiring OPs into the inputs of the operator. These context geometries can be referenced by the following POPs: Source POP, Collision POP, SoftBody POP, Attractor POP, Creep POP (all of which have the parameter Geometry Source > Use Context Geometry). So, instead of hard-coding a specific piece of source geometry or collision geometry, a single POP network can be used by several Pop Merge SOPs to generate several different outputs, depending on the geometry input into the Pop Merge SOP.

Parameters:
- A path to the POP to cook. If it points to a POP network, it will use the cook POP of that network.
- Time at which the simulation starts, in seconds using the FPS of the scene.
- At start time, the simulation has already been running this long.
- Geometry to use as the initial state of the simulation.
- This value is used to initialize the pseudo-random sequence used by the simulation. It is useful for generating different simulations from the same network.
- How many times to cook in between frames.
- Max # of Particles - Controls the maximum number of particles that can exist in the simulation at any given moment. A value of 0 means particles can always be birthed.
- Remove Unused Points - Remove the points associated with dead particles. This can reduce the memory footprint of the simulation. On the other hand, disabling this option can improve performance by recycling points. Changing this option can affect the point numbers associated with individual particles.
- The path to a SOP to use as Context Geometry N. The display or render SOP can be specified using paths of the form object_path/display_sop or object_path/render_sop, respectively.
http://www-10.lotus.com/ldd/ndseforum.nsf/xpTopicThread.xsp?action=openDocument&documentId=7254829A0289B40985257FF0005B4E66
I may be being particularly thick here but I've been sent a class object to look over and I want to add it to one of my Notes databases but I don't know what to do with it. Where do I paste the class object? Should I treat it like an agent or part of a script library? How do I call the class object? Thanks in advance.
https://support.stackpath.com/hc/en-us/categories/360002537152-Object-Storage
StackPath's object storage solution allows you to store, manage and serve static files from the edge without the need for an origin server. Our object storage offers the following features:
- Free egress to our CDN
- Speeds up to 6x that of AWS S3
To get started, see Create and Manage Object Storage.