Dataset columns (each record below lists these fields in this order):
- url: string, length 13 to 4.35k
- tag: string, 1 value
- text: string, length 109 to 628k
- file_path: string, length 109 to 155
- dump: string, 96 values
- file_size_in_byte: int64, 112 to 630k
- line_count: int64, 1 to 3.76k
https://community.wolfram.com/groups/-/m/t/379332
code
I followed the instructions in the User's Guide provided with Mathematica-Root (http://library.wolfram.com/infocenter/Articles/7793/). The first part was successful; however, I am not able to load the example. Import::format: Cannot import data as ROOT. >> I don't know how to proceed from here. Help appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00838.warc.gz
CC-MAIN-2023-50
321
3
http://www.linuxquestions.org/questions/linux-newbie-8/keyboard-stopped-working-in-the-fc9-gui-671620/
code
Keyboard stopped working in the FC9 GUI I installed the Nvidia drivers on Fedora 9 and now my keyboard stopped working in the GUI, so I can't log in. It works fine in the console. If I hit ctrl + alt + f1 it works. I have googled for this for about a day and have seen nothing.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122167.63/warc/CC-MAIN-20170423031202-00142-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
277
3
https://idling-in-the-unreal.net/Corpus
code
In both the projects Synset_Gloss (2020) and Idling-in-the-Unreal (2021), an algorithmic ‘prediction’ of human action conjured biometric poetry that was used to stimulate a language model to remix 17 volumes of Michel Foucault’s oeuvre. The initial algorithmic analysis asked “what are the humans doing and what might they do next” and is analogous to a new form of surveillance power. But why Foucault? Foucault is a philosopher of power, and the power of data (the info-power of social media, data analytics and continuous algorithmic assessment) is arguably the most significant kind of power that has emerged since his death in 1984. This becomes more acute as artificial intelligence (AI) becomes increasingly coupled with surveillance technology, contributing to an epoch which begins to feel crushed and exhausted—an age of generative AI and progressively synthetic knowledge. The volumes comprising the corpus have been visualised below using Word2Vec word embeddings and t-SNE.
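The visualisation code itself is not included on the page; the following is a minimal Python sketch of the Word2Vec + t-SNE pipeline described above, assuming the volumes are available as plain-text files under a hypothetical corpus/ directory (the glob pattern, tokenisation and hyperparameters are illustrative, not the project's actual settings).

```python
# Minimal Word2Vec + t-SNE visualisation sketch (illustrative settings only).
import glob
import re

import matplotlib.pyplot as plt
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

def load_sentences(pattern="corpus/*.txt"):
    # Tokenise each volume line by line into lowercase word tokens.
    sentences = []
    for path in glob.glob(pattern):
        with open(path, encoding="utf-8") as f:
            for line in f:
                tokens = re.findall(r"[a-z']+", line.lower())
                if tokens:
                    sentences.append(tokens)
    return sentences

sentences = load_sentences()

# Train word embeddings on the corpus (gensim 4.x API).
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

# Project the most frequent words to 2-D with t-SNE and plot them.
words = model.wv.index_to_key[:500]
vectors = model.wv[words]
coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(vectors)

plt.figure(figsize=(12, 12))
plt.scatter(coords[:, 0], coords[:, 1], s=4)
for (x, y), word in zip(coords, words):
    plt.annotate(word, (x, y), fontsize=6)
plt.title("Word2Vec embeddings of the corpus, projected with t-SNE")
plt.savefig("corpus_tsne.png", dpi=200)
```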
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511220.71/warc/CC-MAIN-20231003192425-20231003222425-00293.warc.gz
CC-MAIN-2023-40
1,005
3
https://www.podbean.com/site/EpisodeDownload/PBB81EFITEWF
code
You're seeing that right - we swapped the Last Chance To Eat show with this one - so we wouldn't have two alcohol-themed shows back to back. This show deals with some of our favourite wineries, past and present. Wine is for everybody - not just snobs! Lots of links this time, we were all over the place: Wine for the Confused: Please follow us on Facebook, MySpace and Twitter! You can find us under [email protected]. If you like what you hear, spread the word and tell your friends about us. Our theme music is "Hot Swing" by Kevin MacLeod (incompetech.com), licensed under Creative Commons "Attribution 3.0" http://creativecommons.org/licenses/by/3.0. This podcast is also protected under Creative Commons Attribution 3.0 - copy it, share it, but please give credit!
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572879.28/warc/CC-MAIN-20190916155946-20190916181946-00181.warc.gz
CC-MAIN-2019-39
766
3
http://lists.tunes.org/archives/tunes/1999-January/001839.html
code
Prism Conclusion: Applicability to TUNES Mon, 11 Jan 1999 20:36:08 -0700 Laurent Martelli wrote: > >>>>> "Jim" == Jim Little <[email protected]> writes: > I would rather say that domain abstraction is the act of programming > at the _right_ level considering the domain you are dealing with. If > you are programming the memory management of an OS, you'll do > low-level things, but that does not prevent you from using > domain abstractions, which will be virtual and physical addresses for That's exactly how I feel, too. For more of my thoughts on the matter, see my "Prism Rationale, Part 2" essay. Here's an excerpt: >* Semantic errors may be reduced by using a programmable system whose >semantics are as close as possible to the semantics of the problem >In other words, DON'T USE ASSEMBLY LANGUAGE TO QUERY A DATABASE! :) >Use SQL. Or, more simply, "Use the right tool for the job." If you >don't, you'll deserve the bugs you'll get. I also go into this philosophy on my web site (http://www.teleport.com/~sphere) in "Paradigm-Independence: A Philosophy of Software Engineering." The conclusion is that not only should we use domain abstraction, but we should use MULTIPLE domain abstractions. > Domain abstractions have been in my mind for a long time now. But > belonging to several mailing lists with a rather high traffic > prevented me from reading with sufficient attention your mails. > I'll try to have a look at them, but I must say that I am a little > busy these days. Prism is sort of a "language laboratory." It allows you to define whatever domain abstractions you want (in the form of metamodels) and even mix several compatible domain abstractions together while creating a program. If domain abstraction has been on your mind and you want to experiment with creating some new abstractions, I suggest that Prism would be a good platform for this. Prism also has the ability to compile programs written in your domain abstraction, although you have to tell it how. If you don't want to slog through the six essays I posted, then drop me a line and ask questions specific to what you're interested in. For example, you might describe a domain abstraction you've been thinking about, and I could describe how that domain could be formally represented in Prism. I'll try to be as brief as possible. :) Jim Little ([email protected]) Prism is at http://www.teleport.com/~sphere.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103464.86/warc/CC-MAIN-20231211013452-20231211043452-00728.warc.gz
CC-MAIN-2023-50
2,402
40
https://jira.atlassian.com/browse/JSDCLOUD-5564
code
Automation rules that should be executed based on SLA outcomes are not being triggered. 1 - Create an SLA, e.g. "Time waiting for Customer Response". 2 - Create an automation rule like the following: 3 - Create an issue and wait for the SLA to breach. When the SLA breaches, the issue should be transitioned to "Resolved". The SLA does not trigger the automation rule: the issue is not transitioned and nothing is displayed in the automation logs. The following queries return jobs scheduled in the past. Other automation rules do not seem to be impacted by the bug. None so far.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00570.warc.gz
CC-MAIN-2022-49
577
9
https://www.hackster.io/Asaadk/bleing-92c277
code
This demo project introduces the Hexabitz Bluetooth/BLE module (H23R10) and a demo Android app used to control a variety of arrays. You can easily control different shapes and types of arrays by changing the ID of the modules you are talking to. The app and BLE module will set you up for success on your IoT and home automation projects! We will also demo, step by step, the process of updating the smartBASIC scripts running on the embedded BT900 module in case you want to write your own scripts. Controlling your projects from a smartphone is an awesome feature that you can easily add with the Hexabitz H23R10 Bluetooth/BLE module. We built a really simple Android app to control a few modules - nothing fancy - so that anyone can take the app and morph it to her own needs. The app source code is available in the software section. Once you open the app, a list of nearby Bluetooth devices is displayed. You need to pick your BLE module and pair with it. The app has multiple tabs; one, for example, controls the H01R00 RGB LED module via a color picker, RGB sliders and toggle buttons. Another tab controls the solid-state relay module (H0FR60) via toggle buttons and a timeout timer. We will add more tabs over time to control and read signals from other modules. Each tab has a text box to enter the module ID so that you can easily control different array shapes. Just build the array, run the exploration CLI command, and then enter the module ID in the app to start the action!
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141686635.62/warc/CC-MAIN-20201202021743-20201202051743-00713.warc.gz
CC-MAIN-2020-50
1,465
6
https://www.sololearn.com/Discuss/2266116/cert-to-college-credit
code
Cert to college credit? So I'm in the navy and the evaluation period is coming up, and they care if you're taking steps in education, so I was wondering if these can be transferred to college credit, or if the certs and courses are US Board of Education approved as a class or something? 4/29/2020 4:20:19 PM Nicolas Frye 1 Answer The certificates on SoloLearn are not accredited by any educational institute, if I'm correct. They wouldn't be certified because there is no way of monitoring whether you have cheated, since you can do "exams" at home rather than in a test centre. Have a look at this website: https://cpduk.co.uk/news-articles/view/cpd-points-units-credits Also, you can take courses that are accredited by big tech giants like Microsoft, Cisco, etc. Microsoft offers many different types of certificates, but you will most likely have to sit an exam. Have a look at Microsoft, as they have an HTML5 certificate if I'm correct.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107865665.7/warc/CC-MAIN-20201023204939-20201023234939-00254.warc.gz
CC-MAIN-2020-45
933
5
https://osf.io/mdcbt/wiki/home/
code
This provides the NIH Cortex software-based source code for fMRI data acquisition in the auditory segregation experiment. A critical aspect of auditory scene analysis is the ability to extract a sound of relevance (figure) from a background of competing sounds (ground), such as when we hear a speaker in a cafe. This is formally known as auditory figure-ground segregation and colloquially known as the "cocktail party problem". To understand how the brain segregates overlapping sounds, we need to record from neurons, i.e. single cells in the brain. Since systematic single-cell brain recordings are not suitable to perform in humans, we need to use animals in this research. Monkeys are best suited as animal models of human auditory perception due to their similar auditory abilities and the similar organization of their auditory brain to that of humans. However, before we generalize findings from monkeys to humans, we need to establish that monkeys utilize similar brain regions as humans for auditory figure-ground segregation. I employed non-invasive functional magnetic resonance imaging (fMRI) and presented stochastic figure-ground (SFG) artificial sounds to awake, passively listening rhesus macaques (Macaca mulatta) that were trained to perform visual fixation for fluid reward. I showed that monkeys use similar regions of their auditory brain as humans to separate overlapping sounds. This has now paved the way for recording from single cells in the monkey brain, which will enable us to understand how the brain solves the cocktail party problem. If you use this code **please cite the following paper**: Felix Schneider*, Pradeep Dheerendra*, Fabien Balezeau, Michael Ortiz-Rios, Yukiko Kikuchi, Christopher I. Petkov, Alexander Thiele, and Timothy D. Griffiths. "Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey." Scientific Reports 8, no. 1 (2018): 1-8.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362589.37/warc/CC-MAIN-20211203030522-20211203060522-00591.warc.gz
CC-MAIN-2021-49
1,902
6
http://www.linuxquestions.org/questions/blog/pierre2-469572/page2.html
code
I went to a local auction the other week - I have never had a chance to do this before - and wanted to get some notebooks, maybe 6-10 or so of them, but I was quite surprised at just how high the prices for second-hand notebooks were: almost 40% of the price of a brand-new one, which has only slightly higher specs - the new one would have more memory or a larger HDD, but only just.
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662197.73/warc/CC-MAIN-20160924173742-00221-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
827
8
https://stackoverflow.com/questions/4227583/in-visual-studio-programming-net-how-can-i-tell-a-file-reference-from-a-proje
code
I intend to combine a bunch of .NET projects, in separate solutions, into one solution. Some of the projects appear in more than one solution. It all happens to be C#, but I don't think that's relevant. We have a full range of projects, class project, windows forms, web site, web services. The links between the projects are a mix of file references (Add Reference -> Browse Tab) and project references (Add Reference -> Projects Tab). I will want to change the file references to project references when all the projects are in the same solution. Is there anywhere that the Visual Studio UI (I am using Visual Studio 2008) allows me to distinguish a project reference from a file reference? Failing that, is there an easy way to tell by looking at the project file with another tool, or even a text editor?
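One way to answer the second half of the question outside the IDE: in the MSBuild project XML, assembly (file) references appear as <Reference> items, typically with a <HintPath>, while project references appear as <ProjectReference> items. The sketch below is a hedged Python helper (the folder layout and output format are mine, not part of the question) that scans .csproj files and reports both kinds:

```python
# Scan .csproj files and classify references: <Reference> (file/assembly)
# versus <ProjectReference> (project). Handles the MSBuild XML namespace
# used by old-style (VS2008-era) project files.
import sys
import xml.etree.ElementTree as ET
from pathlib import Path

def classify_references(csproj_path):
    root = ET.parse(csproj_path).getroot()
    # Old-style projects declare the MSBuild namespace; SDK-style ones do not.
    ns = root.tag.split('}')[0].strip('{') if root.tag.startswith('{') else ''
    def q(tag):
        return f'{{{ns}}}{tag}' if ns else tag
    file_refs = [e.get('Include') for e in root.iter(q('Reference'))]
    proj_refs = [e.get('Include') for e in root.iter(q('ProjectReference'))]
    return file_refs, proj_refs

if __name__ == '__main__':
    start = Path(sys.argv[1]) if len(sys.argv) > 1 else Path('.')
    for path in start.rglob('*.csproj'):
        file_refs, proj_refs = classify_references(path)
        print(path)
        for r in file_refs:
            print('  file reference   :', r)
        for r in proj_refs:
            print('  project reference:', r)
```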
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00082.warc.gz
CC-MAIN-2020-05
808
3
https://conoga.neocities.org/about.html
code
What is Conoga? What is a Conoga? In its current form, Conoga is just me. I am a 21 year old goblin-like creature who hails from the state of Oregon. I developed this site, and created the software and games which bear the name. The name "Conoga" stands for COmmunication (with the) aNOmaly GAmes. A bit of a stretch for sure, but I really liked how "Conoga" looked and sounded, so I made it work. Not to be confused with Canoga Park in California. I really hope it doesn't mean anything offensive somewhere. On a side note, my pronunciation of "Conoga" goes like this: Coh (like coworker), No (like saying no), and Guh (like gunslinger). While this is how I've been pronouncing my name, it's more up to whoever is saying it. What does a Conoga do? I like to program games and software, and this site will act as a sort of portfolio for them. I started getting into HTML and CSS earlier this year, which has resulted in this site. I've always been a fan of the old internet's aesthetic, and this site has been my place to express my love for it. When I'm not programming or participating in the real world, I'm usually playing video games for inspiration and, of course, for entertainment. The "GameChat" portion of this site is where I write pseudo-game-reviews. It's basically where I gush about the games that I've been playing. What does a Conoga like? Besides the nerdy shit mentioned above, here are some other things that I enjoy or dabble in: - Music playing / making sound effects - Pixel art - Repairing old game consoles Some styles and general things I'm into are vaporwave, city-pop, retro-futurism, outdated electronics, and video games. Some of my most recent escapades include watching old VHS TV-recordings, exploring discarded thrift-store floppies, and fixing a Sega Saturn. This is my current PC setup: My PC was built in 2017, and it was mid-range then. I plugged a 1050 Ti into it back in 2019 so that I could play Modern Warfare, but it might be time for an upgrade again soon. The chunky CRT monitor is a ViewSonic E790 built in March of 1999, with a native resolution of 1280x1024. Most games seem to work on it so far with hardly any issues. Retro-style platformers like Sonic Mania look incredible on it. I love the aesthetic of it. The matching keyboard is a Unicomp keyboard, which is a mechanical keyboard manufactured from the same molds as the old IBM Model M keyboards. Sadly, I don't have a beige computer chassis to match everything else (yet). What does a Conoga listen to? Here's some stuff that I've been listening to while programming lately: - Some chill Dreamcast music - "Ridge Racer Type 4" Soundtrack - A vaporwave mix organized by "Kurdtbada" - A vaporwave mix organized by "Deep Sea Current" - "Bomberman World" Soundtrack - A song from the Plok OST. How tf is this a SNES track? Esc Realm is another cool sound-plug if you dig vaporwave. Plus, it's a local neighborhood radio station located right here on Neocities! IT EVEN HAS A MASTER SYSTEM EMULATOR! Want to send hate-mail? Direct your furious typing in this direction: [email protected]!
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056578.5/warc/CC-MAIN-20210918214805-20210919004805-00539.warc.gz
CC-MAIN-2021-39
3,090
26
https://www.meetup.com/SoCo-Visual-Studio-and-VS-Code/events/tzxdqqyzdbqb/
code
GraphQL can be considered a next step, a `rest++`-ful service, in terms of the functionality it brings to a service API. In this discussion William Wegerson gives the history of GraphQL and demos how one can build a useful hybrid REST and GraphQL.NET API against a SQL Server 2017/Azure database. William goes old school and shows how to pass on generated JSON data via stored procedures, without Entity Framework, for performance reasons, all within a .NET Core 2.2 API using Visual Studio 2019. He also demonstrates a GraphQL testing playground based on the REST website to do basic queries against a fully stocked database. Every step is reproducible and usable in real-life coding scenarios, so come and learn the fundamentals of GraphQL in .NET. William is a full-stack developer who speaks on the topics he loves to program in and use. He generally finds himself on the consultant side of the office and has been determined for the past 25-plus years to ride the tech wave the best he can by immersing himself in new technologies to provide solutions to the clients that hire him. His current topics are ones he has worked on for a few contracts, and he is able to provide the pros/cons and tips to get the job done.
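The session's code isn't reproduced here, but to give a flavour of the "basic queries" a GraphQL playground issues, here is a hedged Python sketch that POSTs a query to a GraphQL endpoint over plain HTTP; the endpoint URL and the products/id/name schema are placeholders, not the API built in the talk.

```python
# Hedged illustration: send a GraphQL query over HTTP with requests.
# The endpoint and the schema (products/id/name) are placeholders.
import requests

GRAPHQL_URL = "http://localhost:5000/graphql"  # placeholder endpoint

query = """
query GetProducts($top: Int!) {
  products(first: $top) {
    id
    name
  }
}
"""

response = requests.post(
    GRAPHQL_URL,
    json={"query": query, "variables": {"top": 5}},
    timeout=10,
)
response.raise_for_status()
print(response.json()["data"])
```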
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480272.15/warc/CC-MAIN-20190216105514-20190216131514-00187.warc.gz
CC-MAIN-2019-09
1,229
2
http://judahpapco.thezenweb.com/The-Ultimate-Guide-To-pay-me-to-do-your-assignment-15530149
code
Once you have already stayed 183 days in the country, you can leave Peru and re-enter. If you are lucky you will get another 183 days, but in recent months border hopping to renew your tourist visa has become tougher, and more and more immigration officers are hesitant to give you another half year when you have already stayed 183 days in Peru. Since 2008 it is no longer possible to extend your tourist visa once you have entered Peru. On or beside the stamp the immigration officer writes a number indicating the days she is allowed to remain in the country. A maximum of 183 days can be given. Once in Peru a tourist visa cannot be extended, neither in Breña, nor in Tarapoto, nor on the internet. According to Peruvian Customs you will be charged 20% customs duty. Have a look at the "Control de Equipajes y Vehiculos en Fronteras Terrestres" ("") near the bottom of point 1.1, where you have a listing from a to d. There you find the official rules. How much time should I set aside to clear immigration before connecting on to Cusco? I will be arriving into the country via LAN at 10:55am. Thank you for your time! While searching for something else I stumbled upon the answer to your question. According to the Peruvian Aliens Act (Ley de Extranjeria), Article 17, a temporary visa like your tourist visa is valid for 6 months, which means that after the issue of such a visa you may enter Peru within a period of six months. Once you have entered, the visa is valid for the time indicated on it, in your case 30 days. Hi... I am a Pakistani student of medicine at the University of Barcelona, Spain. I have a foreign student residency visa and wish to go to Peru with my surgical team for voluntary work. With a valid Peruvian passport you are Peruvian and can live, work or study like any other Peruvian; no need for a visa! Tourists need a passport valid for at least half a year with at least 2 free pages in the visa section when entering Peru. Even though in most countries 2 free pages in the passport are a general prerequisite when applying for a visa, on the website of the Peruvian consulate in the UK ("") I cannot find anything about this. Hello! Are visitors to Peru allowed to bring a tripod (for use with a video camera) or is it subject to any regulations? There are 2 check boxes at the end of this form. Just click/check them and at the end click the "Submit" button. Therefore I highly recommend that you get in touch with the consulate again and ask them. It would be great if you report back to us, so others in the same situation as you know the official regulations. Many thanks. My spouse has a Japanese passport that expires at the end of May. We were hoping to go to Peru April - May 4, but I just noticed that passports should be valid for 6 months from departure. As I want details about the Peru visa I contacted every consulate and embassy but got no reaction from anyone; just India is a great place on earth and its embassy is also so wonderful, as the Peru embassy in India gave me information and they advised me to contact China and Dubai. For the last 10 days I have been trying to contact them but there is no reply to my phone, no reply to my mail, no reply to my text, so where do I send my documents and how?
for pakistani citizen
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583822341.72/warc/CC-MAIN-20190121233709-20190122015709-00193.warc.gz
CC-MAIN-2019-04
3,580
14
https://wiki.openstreetmap.org/wiki/Tag:landuse%3Dbasin
code
landuse = basin (status: de facto) - An area of land artificially graded to hold water. Note that this definition also includes structures that are typically without water. Usually these features are made for man-made water courses, e.g. storm water or water treatment. Applies to nodes and areas. Tags used in combination: - name=<name of the basin> - basin=* refines the type of the basin - natural=water if it typically contains water - intermittent=yes if the presence of water is intermittent - seasonal=* if the presence of water is seasonal - natural=water + water=reservoir where the water holding is formed by a dam and a natural valley or depression - landuse=pond (deprecated) Possible Tagging Mistakes
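As a concrete illustration of the combinations listed above, here is a hedged Python sketch that assembles a tag set for a hypothetical detention basin and flags the deprecated landuse=pond tagging; the values chosen are examples, not requirements.

```python
# Hedged example: assemble and sanity-check tags for a basin area,
# following the combinations listed above (values are illustrative).
basin_tags = {
    "landuse": "basin",
    "basin": "detention",                    # basin=* refines the type of basin
    "name": "Miller Road detention basin",   # hypothetical name
    "intermittent": "yes",                   # water presence is intermittent
}

def check_basin(tags):
    problems = []
    if tags.get("landuse") == "pond":
        problems.append("landuse=pond is deprecated; retag as landuse=basin (and/or natural=water)")
    if tags.get("landuse") == "basin" and "basin" not in tags:
        problems.append("consider adding basin=* to refine the type of basin")
    return problems

print(check_basin(basin_tags))          # -> []
print(check_basin({"landuse": "pond"})) # -> flags the deprecated tagging
```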
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574058.75/warc/CC-MAIN-20190920175834-20190920201834-00296.warc.gz
CC-MAIN-2019-39
802
16
https://bdocodex.com/us/quest/3423/2/
code
Region: Northern Calpheon Type: Character quest First quest in the chain: - Commotion at the Outpost Next quest in the chain: - A Taboo Show/hide full quest chain The soldiers at the outpost think that the villagers' superstition is ridiculous, and said that's not possible when Elion is watching over everyone. They don't understand why the villagers are afraid to enter the deserted house. Show/hide full quest's text What's with the commotion? It's.. it's nothing. I mean, it's something... Is it something? Hmm... Phew.. I'm so distracted. Anyways, they're raising a fuss refusing to do what we requested, talking about some ridiculous superstition! Do they not know that we are under Elion's protection..? They are out of their minds! The villagers are complete fools. There's no use talking to them. Quest complete conditions Completion Target: Soldier - Talk to the soldier at the outpost Meet NPC: - Soldier
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00314.warc.gz
CC-MAIN-2022-33
915
24
https://devop.one/engineer/sybase
code
Find leading freelance experts in Sybase. Devop1 is a marketplace for dedicated Sybase developers, engineers, programmers, coders, architects, and consultants. Top companies and start-ups choose Devop1 Sybase freelancers for their mission-critical Sybase development projects. HIRE FREELANCE SYBASE DEVELOPERS AND ENGINEERS Emil is a software engineer with 5+ years of experience in web development. He works on the product development team using different programming languages. He has expertise in .NET, .NET Core, Java, C#, and C++ and has extensive working experience in React.js, MySQL, etc. He always looks for new things to try and learn! Andriy is a software engineer. He has 8+ years of experience as a Sybase Database Administrator with a demonstrated history of working in the financial services industry. He is skilled in PHP, Sybase, Symphony, Java, and Python. He has gained good technical knowledge as a Database Administrator. Roman is a software developer and experienced SQL developer with a demonstrated history of working in the healthcare, financial, and automobile industries. He is skilled in SQL, SQLite, Spring, Sybase, Spring Boot, Symphony, Python, Java, Django, Kubernetes, Jenkins, Jest, Data Analysis, Data Modeling, and SQL Server Integration Services (SSIS). Miron is a full-stack web developer with 7 years of expertise. He is familiar with a variety of skills in software engineering, but mostly he has been focusing on Tableau, Django, Redux, and Sybase. He has a demonstrated history of working with Cucumber and Snowflake. He has a strong foundation in and a passion for Snap, Slack, SageMaker, and Sage. Ruslan is a full-stack developer and an expert Angular, Vue.js, and Laravel developer of 10 years. He is a full-stack engineer experienced in the typical web stack of Redux, Django, Tableau, and Sybase. Lately, his specialty is server development: microservices and REST APIs, Java and .NET. He has a deep understanding and practical working knowledge of Bootstrap, XML, JUnit, and SQLite. Natalia has been a systems engineer and software developer for 8 years. She has been an excellent software engineer in the Brutus framework, with vast expertise in BuddyPress, CakePHP, and CMS. She has expertise in Scrum and Agile but prefers a lean way of handling products and people. She started her career with Rails, followed by PHP, as an API development specialist, occasionally working with Cucumber and Sybase. Salvador is an experienced software engineer who has been working in the computer software industry for over 10 years. He is skilled in Android, Java, Python, Django, SQL, Tableau, and Laravel. He has experience in professional software development with extensive involvement in object-oriented design and development, and experience in the full life cycle of software development including requirements definition, feasibility analysis, prototyping, interface implementation, testing, and maintenance. Hire in 3 Steps ONLY Work with hand-selected talent, customized to fit your needs at scale. Consult with our Industry Experts: One of our representatives will work closely with you to learn your purpose of hiring, needs, team dynamics and skill specifications. Meet Hand-picked Resources Remotely: Within 36 hours or less, our team will suggest the best resource for your project. This won't take much time. Beginning of Collaboration: Time to begin a trial task to assess the chosen resource or team! Pay if you are happy with the work and proceed towards a longer partnership.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474617.27/warc/CC-MAIN-20240225135334-20240225165334-00242.warc.gz
CC-MAIN-2024-10
3,458
18
https://blenderartists.org/t/frog-monster/378799
code
It’s been a long time since I posted, but I decided to seek some crits on this most recent work. I’m not sure how far I’m going to go with it, but I’ll at least get through modelling I hope. Things I notice to work on: -Creasing/Smoothing around the mouth -Area under the nose horn -Unifying the style of horns (two different types in this render, trying to see which ones I like best)
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038101485.44/warc/CC-MAIN-20210417041730-20210417071730-00459.warc.gz
CC-MAIN-2021-17
393
5
https://www.cisco.com/c/en/us/td/docs/switches/wan/mgx/mgx_8850/software/mgx_r3/pxm1e/configuration/guide/scg/smlines.html
code
Preparing Narrowband Service Modules for Communication This chapter describes how to prepare the following narrowband service modules for standalone or redundant operation in switches with PXM1E controllers: This chapter provides a quickstart procedure for configuring service module cards and describes how to do the following procedures: •manage firmware version levels for narrowband service modules •establish redundancy between two narrowband service modules The quickstart procedure in this section provides a summary of the tasks required to prepare service modules and lines to enable Frame Relay communications. This procedure is a quick reference for those who already have configured narrowband service modules. Start a configuration session. Note To perform all the procedures in this quickstart procedure, you must log in as a user with GROUP1 privileges or higher. setrev <slot> <version> Initialize service module cards by setting the firmware version level for each card. Run the setrev command from the PXM1E. See the "Managing Firmware Version Levels for Service Modules" section later in this chapter. Define which cards are operating as redundant cards. See the "Establishing Redundancy Between Two Service Modules" section later in this chapter. Managing Firmware Version Levels for Service Modules The service modules within the switch run two types of firmware: boot firmware and runtime firmware. The boot firmware provides the startup information the card needs. The boot firmware is installed on the card at the factory. The runtime firmware controls the operation of the card after startup. The runtime firmware file is stored on the PXM1E hard disk. After the service modules are installed, you must specify the correct runtime firmware version for each card before the switch can begin using the card. The following sections explain how to •Locate the cards that need to have the firmware version level set •Set the firmware version levels for cards in the switch •Verify the firmware version levels being used by cards Locating Cards that Need the Firmware Version Set When a service module is installed and the firmware version needs to be set, the System Status LED on the front of the card blinks red. The dspcds command shows that the card status is Failed. Other events can cause a failed status, but if the service module is new, the problem is probably that the firmware version number has not been set. To locate the cards that need to have the firmware version set, use the following procedure. Step 1 Establish a CLI management session at any access level. Step 2 To display a list of all the cards in the switch, enter the dspcds command. mgx8830b.1.PXM.a > dspcds The following example shows the display for this command. The card state for the card in slot 4 is listed as Failed/Active. This is how a card appears when the runtime firmware version is not selected. mgx8830b.1.PXM.a > dspcds mgx8830b System Rev: 03.00 Apr. 
25, 2002 23:20:16 GMT   Chassis Serial No: SCA053000KM   Chassis Rev: A0   GMT Offset: 0

Card  Front/Back      Card          Alarm    Redundant  Redundancy
Slot  Card State      Type          Status   Slot       Type
----  --------------  ------------  -------  ---------  --------------
01    Active/Active   PXM1E-4-155   MAJOR    02         PRIMARY SLOT
02    Standby/Active  PXM1E-4-155   NONE     01         SECONDARY SLOT
03    Active/Empty    RPM           NONE     NA         NO REDUNDANCY
04    Failed/Active   FRSM_2CT3     MINOR    05         PRIMARY SLOT
05    Standby/Active  FRSM_2CT3     NONE     04         SECONDARY SLOT
06    Active/Active   CESM_8T1      NONE     NA         NO REDUNDANCY
07    Active/Active   SRM_3T3       NONE     14         PRIMARY SLOT
11    Active/Active   FRSM_8T1      NONE     NA         NO REDUNDANCY
13    Standby/Active  FRSM_8T1      NONE     NA         NO REDUNDANCY
14    Standby/Active  SRM_3T3       NONE     07         SECONDARY SLOT

Note the slot number, card type, and redundancy type for each card that needs to have the firmware version set. You will need this information to activate these cards as described in the next section, "Initializing Service Modules." Note If any service module displays the Active/Active card state, you do not have to set the runtime firmware version for that card. Initializing Service Modules Before a service module can operate, it must be initialized in a switch slot. The initialization process defines the service module runtime software version that will run on the card and identifies the slot in which the card operates. To initialize a service module, use the following procedure. Note The PXM1E card supports a maximum of 99 lines on the switch. As you add service modules, verify that the line count for all service modules does not exceed this number. Step 1 If you have not already done so, determine the software version number for the card by referring to the Release Notes for Cisco MGX 8850 and MGX 8830 Software Version 3 (PXM45/B and PXM1E). Tips If you have trouble locating the runtime firmware version level, use the filenames on the PXM1E hard disk. To see how to derive a version number from a file name, see the "Determining the Software Version Number from Filenames" section in Chapter 9, "Switch Operating Procedures." Step 2 Establish a configuration session using a user name with SERVICE_GP privileges or higher. Step 3 To set the firmware revision level for a card, enter the setrev command. mgx8830b.1.PXM.a > setrev <slot> <version> Note Each card should be initialized only once with the setrev command. The only other time you should enter the setrev command is to initialize cards after the configuration has been cleared with the clrallcnf, clrcnf, or clrsmcnf commands. Replace <slot> with the card slot number and replace <version> with the software version number. For example: mgx8830b.1.PXM.a > setrev 4 3.0(0) After you enter the setrev command, the System status LED blinks red until the firmware load is complete, then it changes to non-blinking green. Step 4 To verify the activation of a card for which the status was previously listed as Failed/Empty, enter the dspcds command. The status should change to Active/Active. Verifying Card Firmware Version Levels When you are having problems with your switch, or when you have taken delivery of a new switch but delayed installation, it is wise to verify the firmware versions installed on the switch. If newer versions of this firmware are available, installing the updated firmware can prevent switch problems. To see the firmware version numbers in use on your switch, use the following procedure.
Step 1 To display the software revision status of all the cards in a switch, enter the dsprevs command as follows: hsfrnd6.8.PXM.a > dsprevs Step 2 To see the software revision levels for a single card, enter the dspversion command as follows: hsfrnd6.4.pxm.a > dspversion Step 3 Another way to see the software revision levels for a single card is to enter the dspcd command as follows: hsfrnd6.4.FRSM12.a > dspcd Step 4 Using the dsprevs and dspcd commands, complete the hardware and software configuration worksheet in Table 2-10, which is in the "Verifying the Hardware Configuration" section in Chapter 2, "Configuring General Switch Features." Step 5 Compare the versions you noted in Table 2-10 with the latest versions listed in the Release Notes for Cisco MGX 8850 and MGX 8830 Software Version 3 (PXM45/B and PXM1E). Step 6 If the switch requires software updates, upgrade the software using the instructions in Appendix A, "Downloading and Installing Software Upgrades." Establishing Redundancy Between Two Service Modules To establish redundancy between two service modules of the same type, use the following procedure. Step 1 Establish a configuration session using a user name with SUPER_GP privileges or higher. Step 2 Enter the dspcds command to verify that both service modules are in the Active state. Step 3 Enter the addred command as follows: pop20one.7.PXM.a > addred <redPrimarySlotNum> <redSecondarySlotNum> <redType> Replace <redPrimarySlotNum> with the slot number of the card that will be the primary card, and replace <redSecondarySlotNum> with the slot number of the secondary card. Replace <redType> with the number 1, which selects Y-cable redundancy. Although the online help lists other redundancy types, Y-cable redundancy is the only type supported on service modules in this release. Note One of the two cards can be configured before redundancy is established. If this is the case, the configured card should be specified as the primary card. Redundancy cannot be established if the secondary card has active lines. If the secondary card has active lines, you must delete all ports and down all lines before it can be specified as a secondary card. You clear the configuration on a single service module with the clrsmcnf command. Tips If the switch displays the message, ERR: Secondary cd is already reserved, then lines are already in use on the specified secondary card. Enter the dnln command to bring down these lines before re-entering the addred command, or enter the clrsmcnf command for the secondary card. Step 4 To verify that the redundancy relationship is established, enter the dspred command as shown in the following example: pop20two.7.PXM.a > dspred The secondary state for the card in the secondary slot changes to Standby only when the secondary card is ready to take over as active card. After you enter the addred command, the switch resets the secondary card. When you first view the redundancy status, the state may be Empty Resvd or Init. The secondary card may require one or two minutes to transition to standby. Note The dspcds command also shows the redundancy relationship between two cards. For information on managing redundant cards, refer to the "Managing Redundant Cards" section in Chapter 9, "Switch Operating Procedures."
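The procedures above are manual; as a hedged convenience, the Python sketch below scans a saved capture of dspcds output (formatted like the sample earlier in this chapter) and lists the slots whose front-card state is Failed, i.e. the cards that most likely still need setrev. The parsing is keyed to the sample output shown and may need adjusting for other releases.

```python
# Hedged helper: find cards reported as Failed/... in captured `dspcds` output,
# which (per the text above) usually means the runtime firmware version has not
# been set with `setrev` yet. Parsing is based on the sample output shown.
import re
import sys

ROW = re.compile(r"^\s*(\d{2})\s+(\S+)/(\S+)\s+(\S+)", re.MULTILINE)

def cards_needing_setrev(dspcds_text):
    needs = []
    for slot, front, back, card_type in ROW.findall(dspcds_text):
        if front == "Failed":
            needs.append((slot, f"{front}/{back}", card_type))
    return needs

if __name__ == "__main__":
    # Argument: a text file containing saved terminal output of `dspcds`.
    text = open(sys.argv[1], encoding="utf-8").read()
    for slot, state, card_type in cards_needing_setrev(text):
        print(f"slot {slot}: {card_type} is {state} - run: setrev {slot} <version>")
```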
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988759.29/warc/CC-MAIN-20210506175146-20210506205146-00403.warc.gz
CC-MAIN-2021-21
9,673
83
http://www.pronetworks.org/forums/stupid-mcafee-problem-t104361.html
code
Of course, like always, it's stupid Windows Vista giving me another stupid problem to p*ss me off. I just installed 64-bit Vista. According to McAfee's website, the program will work. I installed McAfee Internet Security (has all the anti-spam, firewall, anti-virus, etc.). It worked for a day, and then when I click on it to bring it up all I get is a blank window. It's still protecting everything, but the stupid Security Center won't work. So I uninstalled it, rebooted, and reinstalled it; it installed with no problem, but when I open the stupid Security Center... same thing. I am so tired of this.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064898.14/warc/CC-MAIN-20210411174053-20210411204053-00388.warc.gz
CC-MAIN-2021-17
599
4
https://www.rspspk.com/topic/8238-new-player-looking-for-a-pking-partner/
code
Hi guys, I've been playing a bit of Deadman Mode and really enjoy it, except I've never really pked at all before, except for F2P pure pking back in the day. Send me a message if you are also new or would like to teach me. I would like to learn bridding, since that's pretty much what DMM is, so reply if interested, thanks. New player looking for a pking partner
s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541692.55/warc/CC-MAIN-20161202170901-00110-ip-10-31-129-80.ec2.internal.warc.gz
CC-MAIN-2016-50
434
4
https://lists.linbit.com/pipermail/drbd-user/2010-February/013461.html
code
Note: "permalinks" may not be as permanent as we would like, direct links of old sources may well be a few messages off. On Thu, Feb 04, 2010 at 10:47:39AM +0100, listacc at gmx.de wrote: > Hello! > > I have a drbd-8.2.7 running here (precompiled package from SuSE, on a openSUSE 11.1 system). > > In my logfile I get warnings every few minutes: > "kernel: drbd0: helper command: /sbin/drbdadm outdate-peer minor-0 exit code 4 (0x400)" Every few minutes?? Thats a bit unusual. Mind do show a log excerpt? It is very likely not the only message, is it? > I got the source code for 8.2.7 from the linbit site and took a grep for an exit code 4, but did not find any. > > Then I took the source from suse repositories and did the same, bu no hint for an exit code 4. Only for 40. > > Can you please give me a hint? http://www.lmgtfy.com/?q=outdate%20peer%20exit%20code%204&l=1 ;) -- : Lars Ellenberg : LINBIT | Your Way to High Availability : DRBD/HA support and consulting http://www.linbit.com DRBD® and LINBIT® are registered trademarks of LINBIT, Austria. __ please don't Cc me, but send to list -- I'm subscribed
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00685.warc.gz
CC-MAIN-2021-43
1,116
3
https://buddy.works/actions/google-cloud-cli
code
Do more with Google Cloud CLI Buddy allows you to instantly connect Google Cloud CLI with 100+ actions to automate your development and build better apps faster. Connect Google Cloud CLI to 100+ dev tools You use lots of tools to get web & app development done. Buddy creates more time in your day by helping you automate those tools. Sign up for free with
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.92/warc/CC-MAIN-20210726183622-20210726213622-00155.warc.gz
CC-MAIN-2021-31
355
4
http://www.ijc.org/en_/Careers
code
Recruitment campaigns to staff IJC offices in Windsor, Ottawa, and Washington, D.C., are posted here. - Administrative Assistant, Canadian Section (Open to employees of the Canadian Public Service occupying a position in the National Capital Region) - Director, GLRO, Windsor (Full-time) (Open to all U.S. citizens) The International Joint Commission recruits its staff through the public services of the governments of Canada and the United States. For more information about these services, please see:
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510266597.23/warc/CC-MAIN-20140728011746-00063-ip-10-146-231-18.ec2.internal.warc.gz
CC-MAIN-2014-23
504
4
https://forums.developer.ebay.com/questions/13950/very-long-response-time-from-ebay-api-am-i-the-onl.html
code
Very long response time from eBay API (am I the only one?) I am currently experiencing huge response times from the eBay API. It happens mainly for: AddFixedPriceItem ReviseInventoryStatus (I have seen calls that took 130 seconds, when the average is about 2 seconds). I don't believe that this is an API limit threshold issue; last time I checked I was very far from my limit. Am I the only one experiencing this slowness?
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058415.93/warc/CC-MAIN-20210927090448-20210927120448-00513.warc.gz
CC-MAIN-2021-39
407
2
http://radar.oreilly.com/nat/page/5
code
- Digital Music Consumption on the Internet: Evidence from Clickstream Data (Scribd) — The goal of this paper is to analyze the behavior of digital music consumers on the Internet. Using clickstream data on a panel of more than 16,000 European consumers, we estimate the effects of illegal downloading and legal streaming on the legal purchases of digital music. Our results suggest that Internet users do not view illegal downloading as a substitute to legal digital music. Although positive and significant, our estimated elasticities are essentially zero: a 10% increase in clicks on illegal downloading websites leads to a 0.2% increase in clicks on legal purchases websites. Online music streaming services are found to have a somewhat larger (but still small) effect on the purchases of digital sound recordings, suggesting complementarities between these two modes of music consumption. According to our results, a 10% increase in clicks on legal streaming websites lead to up to a 0.7% increase in clicks on legal digital purchases websites. We find important cross country difference in these effects. A paper from the EU commission’s in-house science service. (via Don Christie) - Six Degrees of Francis Bacon — data-driven research into “the early-modern social network”. (via Jonathan Gray) - Internet Census 2012 — scanning the net via botnet. Appalling how many unsecured devices are directly connected to the net. Also appalling how underused the address space is. Visualizing City Data, Gigabits Unrealized, Use Open Source, and Bad IPs Cluster - VizCities Dev Diary — step-by-step recount of how they brought London’s data to life, SimCity-style. - Google Fibre Isn’t That Impressive — For [gigabit broadband] to become truly useful and necessary, we’ll need to see a long-term feedback loop of utility and acceptance. First, super-fast lines must allow us to do things that we can’t do with the pedestrian internet. This will prompt more people to demand gigabit lines, which will in turn invite developers to create more apps that require high speed, and so on. What I discovered in Kansas City is that this cycle has not yet begun. Or, as Ars Technica put it recently, “The rest of the internet is too slow for Google Fibre.” - gov.uk Recommendations on Open Source — Use open source software in preference to proprietary or closed source alternatives, in particular for operating systems, networking software, Web servers, databases and programming languages. - Internet Bad Neighbourhoods (PDF) — bilingual PhD thesis. The idea behind the Internet Bad Neighborhood concept is that the probability of a host in behaving badly increases if its neighboring hosts (i.e., hosts within the same subnetwork) also behave badly. This idea, in turn, can be exploited to improve current Internet security solutions, since it provides an indirect approach to predict new sources of attacks (neighboring hosts of malicious ones). - A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (PDF) — This project was simultaneously an experiment in developing quantitative and computational methods for tracing changes in literary language. We wanted to see how far quantifiable features such as word usage could be pushed toward the investigation of literary history. Could we leverage quantitative methods in ways that respect the nuance and complexity we value in the humanities? 
To this end, we present a second set of results, the techniques and methodological lessons gained in the course of designing and running this project. Even litcrit becoming a data game. - Easy6502 — get started writing 6502 assembly language. Fun way to get started with low-level coding. - How Analytics Really Work at a Small Startup (Pete Warden) — The key for us is that we’re using the information we get primarily for decision-making (should we build out feature X?) rather than optimization (how can we improve feature X?). Nice rundown of tools and systems he uses, with plug for KissMetrics. Search Ads Meh, Hacked Website Help, Web Design Sins, and Lazy Correlations - Consumer Heterogeneity and Paid Search Effectiveness: A Large Scale Field Experiment (PDF) — We find that new and infrequent users are positively influenced by ads but that existing loyal users whose purchasing behavior is not influenced by paid search account for most of the advertising expenses, resulting in average returns that are negative. We discuss substitution to other channels and implications for advertising decisions in large firms. eBay-commissioned research, so salt to taste. (via Guardian) - Google’s Help for Hacked Webmasters — what it says. - 14 Lousy Web Design Trends Making a Comeback Thanks to HTML 5 — “mystery meat icons” a pet bugbear of mine. - The Human Microbiome 101 (SlideShare) — SciFoo alum Jonathan Eisen’s talk. Informative, but super-notable for “complexity is astonishing, massive risk for false positive associations”. Remember this the next time your Big Data Scientist (aka kid with R) tells you one surprising variable predicts 66% of anything. I wish I had the audio from this talk! On Anonymous, Information Rights, RSS Readers, and CDN Sec - Our Weirdness is Free (Gabriella Coleman) — Often lacking an overarching strategy, Anonymous operates tactically, along the lines proposed by the French Jesuit thinker Michel de Certeau. “Because it does not have a place, a tactic depends on time—it is always on the watch for opportunities that must be seized ‘on the wing,’” he writes in The Practice of Everyday Life (1980). “Whatever it wins, it does not keep. It must constantly manipulate events in order to turn them into ‘opportunities.’ The weak must continually turn to their own ends forces alien to them.” (via Jonas Kubilius) - Information Rights and Copy Rights (YouTube) — Justice David Harvey’s keynote at Australian Digital Alliance forum, proposing balance of rights. (via Alastair Thompson) - NewsBlur (GitHub) — one of the many trending repos in the wake of the announcement of Google Reader’s case of terminal lack of relevance to Google+. See also Tiny Tiny RSS, FastLadder, and a million repos empty but for “TODO” files listing the almighty RSS reading features yet to be added to the empty file. Also found: this obsessive guide to Reader’s history. - The Pentester’s Guide to Akamai (PDF) — This paper summarizes the findings from NCC’s research into Akamai while providing advice to companies wish to gain the maximum security when leveraging their solutions. HTML DRM, Visualizing Medical Sciences, Lifelong Learning, and Hardware Hackery - What Tim Berners-Lee Doesn’t Know About HTML DRM (Guardian) — Cory Doctorow lays it out straight. HTML DRM is a bad idea, no two ways. The future of the Web is the future of the world, because everything we do today involves the net and everything we’ll do tomorrow will require it. 
Now it proposes to sell out that trust, on the grounds that Big Content will lock up its “content” in Flash if it doesn’t get a veto over Web-innovation. [...] The W3C has a duty to send the DRM-peddlers packing, just as the US courts did in the case of digital TV. - Visualizing the Topical Structure of the Medical Sciences: A Self-Organizing Map Approach (PLOSone) — a high-resolution visualization of the medical knowledge domain using the self-organizing map (SOM) method, based on a corpus of over two million publications. - What Teens Get About The Internet That Parents Don’t (The Atlantic) — the Internet has been a lifeline for self-directed learning and connection to peers. In our research, we found that parents more often than not have a negative view of the role of the Internet in learning, but young people almost always have a positive one. (via Clive Thompson) - Portable C64 — beautiful piece of C64 hardware hacking to embed a screen and battery in it. (via Hackaday) Chrome Tricks, Sins of Journaling, Icon Font, and Sweet PD - One Tab — turn tabs into lists, easily. (via Andy Baio) - Deep Impact: Unintended Consequences of Journal Rank — These data confirm previous suspicions: using journal rank as an assessment tool is bad scientific practice. Moreover, the data lead us to argue that any journal rank (not only the currently-favored Impact Factor) would have this negative impact. Therefore, we suggest that abandoning journals altogether. - Genericons — useful straightforward icon font. - Public Domain Review Fundraising — Over the course of our two years we’ve created a large and ever growing archive of some of the most interesting and unusual artefacts in the history of art, literature and ideas. Love the idea of some limited edition reprints of these gorgeous works! Comparing Algorithms, Programming & Visual Arts, Data Brokers, and Your Brain on Ebooks - mlcomp — a free website for objectively comparing machine learning programs across various datasets for multiple problem domains. - Printing Code: Programming and the Visual Arts (Vimeo) — Rune Madsen’s talk from Heroku’s Waza. (via Andrew Odewahn) - What Data Brokers Know About You (ProPublica) — excellent run-down on the compilers of big data about us. Where are they getting all this info? The stores where you shop sell it to them. - Subjective Impressions Do Not Mirror Online Reading Effort: Concurrent EEG-Eyetracking Evidence from the Reading of Books and Digital Media (PLOSone) — Comprehension accuracy did not differ across the three media for either group and EEG and eye fixations were the same. Yet readers stated they preferred paper. That preference, the authors conclude, isn’t because it’s less readable. From this perspective, the subjective ratings of our participants (and those in previous studies) may be viewed as attitudes within a period of cultural change.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696400149/warc/CC-MAIN-20130516092640-00018-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
9,940
37
http://www.matrixgames.com/forums/tm.asp?m=2100732
code
I recently abandoned my longest-running attempt at a game to date because I ran out of arms as the Central Powers during a key turn and Russia simply punched too many holes in my line to continue. Also, I let the allies develop 2nd level trenches before I had even spent points on it and found that this shifted the balance of forces on the western front against me, which meant that I probably wasn't going to win the game. Incidentally, given the number of troops I poured East and my complete inability to slow the Russians down as Germany because I simply couldn't pool together enough HQ activations, I'm seriously concerned that this game overestimates the allies on BOTH fronts. I already knew that the Western front is hopelessly pro-Allies because the German advance can't replicate the historical speed of Germany's August 1914 offensive (plus, has anyone noticed that Liege only falls on the first impulse about half the time; even a passing familiarity with history reveals that the taking of Liege in the first days of the war was of critical importance and its failure would have been disastrous to the Germans; in GoA, Liege is like a mini-Verdun). The point of this ramble is to ask whether arms purchases expire? I wasn't keeping careful track, but there were a lot of turns in 1914 where I had far more arms than I could spend. AH in particular seemed to run a surplus. Then, all of a sudden, in 1915 I was out of them. This was despite purchasing a large number each turn (so many, in fact, that I neglected research, artillery barrages, and air power). Do they disappear? Do I need to use them on the turn they appear? < Message edited by jscott991 -- 4/29/2009 3:34:23 PM >
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247488490.40/warc/CC-MAIN-20190218220415-20190219002415-00170.warc.gz
CC-MAIN-2019-09
1,694
3
http://www.cuffelinks.com/miniature-rgb.html
code
I designed a prototype PCB in KiCad and then ordered a small batch of them from the great oshpark.com. I used an interesting method to solder them: put the PCB on a standalone electric hotplate and move the components into place with the aid of a USB microscope. This actually worked, but proved I needed two cells to run all three colours: red, green and blue. I wired it up with a simple sketch to give some random colours. I have since improved the script a little to reduce the flicker.
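The microcontroller sketch itself isn't posted; purely to illustrate the "random colours with less flicker" idea, here is a hedged Python sketch that fades between random RGB targets instead of jumping to them. The PWM write is stubbed out as a print, because the actual board and pin mapping aren't specified in the post.

```python
# Hedged illustration: fade between random RGB colours instead of jumping,
# which is one way to reduce visible flicker. Replace write_rgb with the real
# PWM calls for your board (not specified in the original post).
import random
import time

def write_rgb(r, g, b):
    # Stand-in for setting three PWM duty cycles (0-255 each).
    print(f"\rR={r:3d} G={g:3d} B={b:3d}", end="")

def random_colour():
    return [random.randint(0, 255) for _ in range(3)]

current = random_colour()
while True:
    target = random_colour()
    for step in range(1, 51):  # 50 small steps toward the next colour
        mixed = [round(c + (t - c) * step / 50) for c, t in zip(current, target)]
        write_rgb(*mixed)
        time.sleep(0.02)
    current = target
```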
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948029.93/warc/CC-MAIN-20180425232612-20180426012612-00489.warc.gz
CC-MAIN-2018-17
472
4
https://community.arubanetworks.com/community-home/digestviewer/viewthread?MID=35804
code
Hi, I'm having some problems with Captive Portal. I am running v. 126.96.36.199 on a 620 controller. I was originally running v 188.8.131.52. I created a Guest network with the WLAN wizard and enabled Captive Portal. I did not have any errors or problems during the configuration. I went to test the captive portal with a Windows 7 laptop. I associated to the correct SSID and I did get an IP address. However, I could not get to the captive portal page. I kept getting the message "Page not found", as if dest NAT was not routing me correctly. However, when I tried the captive portal with my smartphone (Android) I had no problems reaching the captive portal page. I also tried with an Android tablet and successfully reached the captive portal page. Android devices do not seem to have any problems. It appears that Windows 7 and, later, a Windows 8 device cannot reach the captive portal page. Is there some additional configuration I should be making to allow the Windows devices to connect? You are sure that both the Windows and Android devices have the same "pre-auth" role to redirect to the captive portal? Are they on the same VLAN or do the Android devices get a different VLAN through fingerprinting? What happens if you try to browse to http://184.108.40.206 on the Windows machines? Yes, they are on the same VLAN. This is a very basic setup: I have put the Guest SSID on a separate VLAN, so all devices logging onto the Guest network will be on the same VLAN. The "employee" SSID is the only other network and it's on a different VLAN. I'm not doing anything fancy, just a separate VLAN for Guest devices. Also, I am allowing the controller to issue IP addresses to Guest clients, while clients on the "employee" SSID get addresses from the Ethernet DHCP server. I have not tried going to 220.127.116.11 but I can try that this evening when I will be at the site. Thanks for your quick response. Just throwing a couple of ideas out here: IPv6; a DNS resolve issue; if possible, try different browsers. That seems like it might be a DNS issue. Have the Windows clients got an HTTP proxy configured? Is DHCP option 252 set? The Windows clients could be picking this up, whereas most mobile clients need some configuration to get this option used.
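To separate the DNS question from a plain reachability problem, here is a hedged Python sketch that can be run on the affected Windows client: it checks whether a hostname resolves and whether the portal IP answers on ports 80 and 443. The hostname and IP below are placeholders, since the actual addresses in the thread are obscured.

```python
# Hedged diagnostic: does the guest client resolve DNS, and can it reach the
# captive portal IP directly? Hostname and IP below are placeholders.
import socket

PORTAL_HOST = "captiveportal.example.com"  # placeholder hostname
PORTAL_IP = "192.0.2.1"                    # placeholder (thread's IPs are obscured)

def check_dns(host):
    try:
        return socket.gethostbyname(host)
    except socket.gaierror as exc:
        return f"DNS failed: {exc}"

def check_tcp(ip, port, timeout=3):
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return "open"
    except OSError as exc:
        return f"closed/unreachable: {exc}"

print("resolve", PORTAL_HOST, "->", check_dns(PORTAL_HOST))
for port in (80, 443):
    print(f"tcp {PORTAL_IP}:{port} ->", check_tcp(PORTAL_IP, port))
```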
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104248623.69/warc/CC-MAIN-20220703164826-20220703194826-00331.warc.gz
CC-MAIN-2022-27
2,516
14
https://martinjljeb.ourcodeblog.com/18410845/a-review-of-http-000
code
A Review Of http 000

A lot of dot-com businesses, exhibiting products on hypertext web pages, were added to the Internet. Over the subsequent five years, over a trillion dollars was raised to fund thousands of startups consisting of little more than a website.

One result of Google's productization initiatives, according to a CNBC report, is known as "Apprentice Bard," a chatbot that takes advantage of LaMDA technology, enabling people to "ask questions and receive in-depth answers just like ChatGPT." The report laid out a number of possible directions Google is experimenting with, like "an alternate search page that would use a question-and-answer format," "prompts for potential queries placed directly under the main search bar" on the Google homepage, and a results page that displays "a grey bubble right under the search bar, offering more human-like responses than typical search results."

The Web is the most widely distributed medium of individual exchange to appear in the history of humanity, far ahead of the printing press. This platform has allowed users to interact with many more groups of people dispersed around the planet than is possible within the limitations of physical contact, or simply within the limitations of all the other existing means of communication combined.

Web server software was developed to allow computers to act as web servers. The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server-side applications.

On this Wikipedia the language links are at the top of the page across from the article title. Go to top.

Once you've completed that, it will let you send and receive text messages from the Windows computer, including iMessages. The Intel Unison app even lets you view your mobile phone's camera roll, send files to your mobile phone, make calls via your PC, and see your iPhone's notifications.

Google was not the only large tech organization where ChatGPT was able to land a coding position. According to Business Insider, an engineer at Amazon asked the chatbot interview questions used by the company for its coding jobs and got them right.

...which centred on information about the WWW project. Visitors could learn more about hypertext, technical details for creating their own web page, and even an explanation of how to search the web for information.

So the only way I can have Google send texts, make phone calls, or do anything while the tablet is dormant is to power down the tablet. This entirely defeats the capabilities I am able to use hands-free without glasses. Please drop this "feature".

A dynamic web page is then reloaded by the user or by a computer program to alter some variable content. The updated data may come from the server, or from changes made to that page's DOM. This may or may not truncate the browsing history or create a saved version to return to, but a dynamic web page update using Ajax technologies will neither create a page to return to nor truncate the web browsing history forward of the displayed page.

While the Assistant works fine for simple queries, Google hasn't been able to monetize the feature, and it has reportedly been cutting resources within the division. It is not clear how a ChatGPT competitor would change the core difficulty of monetization, beyond kicking that can down the road a few years.

BGR's readers crave our industry-leading insights on the latest in tech and entertainment, and our authoritative and expansive reviews.

#googlemybusiness #websiterank #rankwebsite #seotips #googlerank #rankongoogle #itsjuwelbd #imranahmedjuwel #freelancer_juwel #websiteseo #websitecontent #ranktips

Then select Google from the drop-down menu. Bonus: change your homepage to Google. Using your mouse, click and drag the blue Google icon below onto the home icon located in the upper right corner of your browser.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945279.63/warc/CC-MAIN-20230324082226-20230324112226-00645.warc.gz
CC-MAIN-2023-14
4,356
16
https://tradersacademy.online/trading-lesson/what-is-the-tws-api
code
Prerequisites for this course:
- Windows, Linux, or Mac OSX computer with a GUI and Python 3.3 or higher installed.
- Familiarity with Python programming.
- The TWS API utilizes socket programming, multiple threads, and other concepts which it is recommended to be familiar with beforehand. If not, it is suggested to first try an Introduction to Python course which covers these topics.

Intended audience: programmers with experience in Python who are interested in building custom applications.

IBKR offers several trading platforms, including:
- Trader Workstation (TWS), the flagship desktop application. TWS is a Java-based application which can run on any major desktop operating system supporting a graphical user interface, such as Windows, Linux, or MacOS. For security reasons, TWS is designed to require the end user to manually enter credentials into the user interface.
- Client Portal, which is a web-based platform for trading and other account functions.
- IBKR Mobile, the mobile trading app for Apple and Android smartphones.

In addition to using IBKR's trading software, there are several ways by which custom or 3rd party trading applications can place trades to IBKR accounts. One common means of connection available for all clients is the TWS API. Other connection types include the Client Portal API, currently in Beta, and FIX/CTCI connections. The API offerings are detailed on the API Solutions page on the website under the Technology menu at interactivebrokers.com.

The Trader Workstation API is an open-source interface to TWS which can be used by custom or 3rd party applications to automate TWS functionality, including but not limited to:
- Order placement
- Receiving account values
- Receiving portfolio data
- Receiving market data
- Querying financial instrument details

It is important to keep in mind that the TWS API itself does not provide new functionality unavailable in TWS, but rather provides the ability to automate some actions within TWS from external software. The source code for the TWS API is provided under a non-commercial license agreement from http://interactivebrokers.github.io/ and can be used by a developer to write a custom application that connects to TWS. Since this code is entirely in general programming languages such as Python, Java, C#, and C++, the intended audience for the source code is experienced third party programmers with a background in the respective technology. To develop applications which do not fit under the default non-commercial license agreement, a commercial license agreement is available on request. More than 100 applications compatible with the TWS API have been developed by third party developers, and many are advertised on the Investors Marketplace on the Interactive Brokers website.
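For orientation before the FAQs, here is a minimal connection sketch in Python. This is an illustration rather than course material: it assumes the ibapi package from the TWS API download is installed and that TWS is running locally with API connections enabled; the host, port (7497 is the usual paper-trading default) and client ID are assumptions to adjust for your own setup.

```python
from ibapi.client import EClient
from ibapi.wrapper import EWrapper


class MinimalApp(EWrapper, EClient):
    """Smallest useful TWS API client: connect, confirm the session, disconnect."""

    def __init__(self):
        EClient.__init__(self, self)

    def nextValidId(self, orderId: int):
        # TWS sends this once the connection is ready; a common "connected" signal.
        print("Connected to TWS; next valid order id:", orderId)
        self.disconnect()


if __name__ == "__main__":
    app = MinimalApp()
    # 127.0.0.1:7497 assumes a local paper-trading TWS; clientId must be unique
    # per API connection to the same TWS session.
    app.connect("127.0.0.1", 7497, clientId=1)
    app.run()  # message loop that dispatches TWS responses to EWrapper callbacks
```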
Common questions about the TWS API

TWS API FAQs:

Q. What account types can be used with the TWS API?
A. Any IB account type, including Individual, Financial Advisor, STL, and linked account structures, is compatible with the TWS API. However, it's important to note that many third party applications are developed to support only IB individual accounts.

Q. Does the API require a specific version of TWS, or vice-versa?
A. TWS is backwards compatible with the API, so it is not necessary to upgrade the API to use a new version of TWS. API applications designed for a specific API version generally cannot use a different API version without some changes.

Q. Aside from Python, what other languages are included in the download from IBKR?
A. The other official API languages are C++, Java, C#/.NET, VB.NET, and ActiveX and DDE (for Excel).

Q. Is there a preferred language, or a difference in offered functionality between the different programming languages?
A. There are no differences in API functions offered between the available languages (except for DDE for Excel).

Q. Can independent programmers make suggestions to the API source code directly?
A. Yes, it is possible to make suggestions for changes to the API source code and raise questions directly on the GitHub repository. Access can be requested by following the directions under the 'API Beta' link at http://interactivebrokers.github.io/

Q. Does IBKR provide hosting services for custom algorithms?
A. Unfortunately no, web hosting is not provided.

Q. Can I use another trading application (IBKR mobile, WebTrader, TWS) while an API program is running?
A. To connect to the same IBKR account simultaneously with multiple trading applications it is necessary to have multiple usernames. Additional usernames can be created through Account Management free of charge. Market data subscriptions, however, only apply to individual usernames, so the fees would be charged separately.

Disclosure: Interactive Brokers

The analysis in this material is provided for information only and is not and should not be construed as an offer to sell or the solicitation of an offer to buy any security. To the extent that this material discusses general market activity, industry or sector trends or other broad-based economic or political conditions, it should not be construed as research or investment advice. To the extent that it includes references to specific securities, commodities, currencies, or other instruments, those references do not constitute a recommendation by IBKR to buy, sell or hold such investments. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice. Supporting documentation for any claims and statistical information will be provided upon request. Any stock, options or futures symbols displayed are for illustrative purposes only and are not intended to portray recommendations.
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573104.24/warc/CC-MAIN-20220817183340-20220817213340-00289.warc.gz
CC-MAIN-2022-33
5,844
46
http://www.webassist.com/forums/post.php?pid=33108
code
In order to make this change you should select the form. In the property inspector, choose 'edit design'. From the first dropdown you see, select 'form elements'. In the next select list, select the text area element. You can then edit any of the design properties of the text area, like its height and width, along with all of the other properties like the color, margin and padding.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583700734.43/warc/CC-MAIN-20190120062400-20190120084400-00104.warc.gz
CC-MAIN-2019-04
391
3
https://community.netwitness.com/t5/netwitness-platform-online/configure-file-reputation-server-data-source/ta-p/669500
code
Configure File Reputation Server as a Data Source

File Reputation Server provides analysts the opportunity to view the reputation status of files. By default, File Reputation is enabled in the Additional Live Services section. If the Context Hub service is configured, File Reputation Server is automatically added as a data source for Context Hub.

Ensure the following:
- Context Hub is enabled and the service is available in the (Admin) > Services view of NetWitness.
- An RSA Live Account is available.

Note: To create a Live Account, see the Step 1. Create Live Account topic in the Live Services Management Guide.

By default, File Reputation is enabled in the Additional Live Services section. Before setting up the File Reputation data source, make sure that you have signed in to your Live account with your Live Account credentials and that Context Hub is enabled. File Reputation is automatically added as a data source for Context Hub. For information about configuring the Live Account and Live Services, see the Configure Live Services Settings topic in the System Configuration Guide. For information about configuring the Context Hub service, see the Step 1. Add the Context Hub Service topic in the Context Hub Configuration Guide.

Enable or Disable File Reputation Data Source

To enable or disable the File Reputation data source for Context Hub:
- Go to (Admin) > System.
- In the left navigation pane, select Live Services. In the Additional Live Services section, enable File Reputation.
- Click Apply. The File Reputation Server data source is enabled for the Context Hub service.
- To verify, go to the Data Sources tab and view the available sources. The File Reputation source must be added to the list of available sources and the Enabled field must show a solid green circle.

To disable the File Reputation data source, disable File Reputation in the Additional Live Services panel and click Apply. The File Reputation data source is disabled for the Context Hub service.

Edit File Reputation Server Data Source Settings

To edit the File Reputation Server data source for Context Hub:
- Select (Admin) > Services. The Services view is displayed.
- In the Services panel, select the Context Hub service, then select View > Config. The Services Config view is displayed.
- In the Data Sources tab, select the File Reputation Server source and click the edit icon. The Edit Data Source dialog is displayed.
- Edit the required fields. To edit the Proxy settings, see the HTTP Proxy Settings Panel topic in the System Configuration Guide.
- Click Test Connection to test the connection between Context Hub and the data source.
- Click Save to save the settings.

This highlights the meta values (in the Investigate > Navigate, Events, Event details and Nodal graph views) for which contextual information is available for this data source in the Context Hub. By default, this option is enabled. Note: You can disable the context highlighting globally in the Context Hub explorer view. After you disable this option, the entity values for all the configured data sources will not be highlighted even if there is contextual information.

Max. Concurrent Queries: You can configure the maximum number of concurrent queries defined by the Context Hub service to be run against the configured data sources. The default value is 25.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510676.40/warc/CC-MAIN-20230930113949-20230930143949-00118.warc.gz
CC-MAIN-2023-40
3,388
36
https://plughitzlive.com/radio/6-1467-cigar-city-multirotors-roboticon-tampa-bay-2016.html
code
This year, our staff took responsibility for the technical aspects of ROBOTICON, including video on the JUMBOTRON and live streamed on Facebook. One angle that we have never had before at our competition is a moving aerial shot. Luckily, one of the event partners made this a reality. Cigar City Multirotors is a local flying club, and Zac Lessin, Vice President of the club, came to speak to us about the group, what they do and how to join. He also discussed how they planned to help us enhance our live video through the use of aerial drones. Daniele is a student at Florida Polytechnic University who is studying Computer Science with a concentration in Cyber Security. In High School, she was introduced to the science and technology world through the Foundation for Inspiration and Recognition of Science and Technology (FIRST), a robotics foundation where students of varying ages can compete through tasks that their robots perform. With help from mentors she met through FIRST, she became interested in programming and developing. Today, Daniele is a special events host for F5 Live: Refreshing Technology and PLuGHiTz Live Special Events and a co-host for both The New Product Launchpad and FIRST Looks.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00066.warc.gz
CC-MAIN-2024-10
1,213
3
https://promanagement.ro/windows-operating-system-fundamentals/
code
Duration: 24 hours / 3 days
Official Microsoft courseware in electronic format, lunch, coffee breaks, and an internationally recognised Microsoft diploma.

About this course: This three-day MTA Training course helps you prepare for Microsoft Technology Associate Exam 98-349 and build an understanding of these topics: Operating System Configurations, Installing and Upgrading Client Systems, Managing Applications, Managing Files and Folders, Managing Devices, and Operating System Maintenance. This course leverages the same content as found in the Microsoft Official Academic Course (MOAC) for this exam. This course is updated in support of Windows 10.

The Microsoft Technology Associate (MTA) is Microsoft's newest suite of technology certification exams that validate fundamental knowledge needed to begin building a career using Microsoft technologies. This program provides an appropriate entry point to a future career in technology and assumes some hands-on experience or training but does not assume on-the-job experience.

At course completion: After completing this course, students will be able to:
• Understand Operating System Configurations
• Install and Upgrade Client Systems
• Manage Applications
• Manage Files and Folders
• Manage Devices
• Understand Operating System Maintenance

There are no prerequisites for this course.

Module 1: Introducing, Installing, and Upgrading Windows 7
Module 2: Introducing, Installing, and Upgrading Windows 7
Module 3: Understanding Native Applications, Tools, Mobility, and Remote Management and Assistance
Module 4: Managing Applications, Services, Folders, and Libraries
Module 5: Managing Devices
Module 6: Understanding File and Print Sharing
Module 7: Maintaining, Updating, and Protecting Windows 7
Module 8: Understanding Backup and Recovery Method

Bonus: a 10% discount voucher for the second course purchased.
Volume discounts:
• 7-8 participants per group - 5% discount off the list price
• 9-10 participants per group - 10% discount off the list price
• More than 10 participants per group - the price is negotiable

Diploma obtained: an internationally recognised Microsoft certificate

RECOMMENDATIONS FROM OUR CLIENTS
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526254.26/warc/CC-MAIN-20190719140355-20190719162355-00493.warc.gz
CC-MAIN-2019-30
2,184
29
https://forums.collectors.com/discussion/1093238/submitting-to-psa-at-the-national
code
Submitting to PSA at the National.
I am planning to submit some items at this year's National. Do I simply bring my items and do everything on site? My PC has crashed, so I am wondering the best way to get it done. I'm not in a great hurry to get the items back, I just need to submit. I'll be there every day; any suggestions on the best way/time to do it? Thanks in advance for any advice.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100286.10/warc/CC-MAIN-20231201084429-20231201114429-00440.warc.gz
CC-MAIN-2023-50
379
6
https://repology.org/project/yafaray-exporter/history
code
Please note that this history is still an experimental feature and may be reset at any time. Also note that in addition to actual activity of software authors and repository maintainers, this history may contain artifacts produced by Repology. For example, if two projects are merged it will look like one project has appeared in more repositories and another one removed from all repositories.
- Devel version updated to 0.1.2+really0.1.2~beta5 by Raspbian Oldstable, Trisquel 7.0, Ubuntu 14.04
- Project removed from Raspbian Stable
- Project removed from Raspbian Testing
- History start, latest version is 0.1.2beta5, up to date in Raspbian Oldstable, Raspbian Stable, Raspbian Testing, Trisquel 7.0, Ubuntu 14.04
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362969.51/warc/CC-MAIN-20211204094103-20211204124103-00132.warc.gz
CC-MAIN-2021-49
717
6
https://play.google.com/store/apps/details?id=com.sevstar.playstory
code
Online only. Why? It downloads the books but you can't use them when you aren't online. This makes the app completely useless for our purposes.
Useless app.
Can be used to download few books.
Very good application for kids.
Great! Very helpful...
- smooth scroll
- fix misclick
- add new features to books interactive player
Dive into hundreds of adventures with all of your favorite Disney characters! Kind and instructive books for children with smooth graphics and animation.
s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320595.24/warc/CC-MAIN-20170625235624-20170626015624-00445.warc.gz
CC-MAIN-2017-26
474
9
http://nslug.ns.ca/pipermail/nslug/2015-July/026727.html
code
[nSLUG] Re: httpd busy on idle machine?
mspencer at tallships.ca
Sun Jul 5 17:43:14 ADT 2015

Stephen Yorke <syorke at gmail.com> wrote:
> Apache? I switched to nginx a year ago and never looked back.
> Is there a particular reason for using Apache?

It's what comes with the Slackware distro. This isn't a production/enterprise machine. This is my personal laptop. No outsiders are using the web server. I use it, in conjunction with my own cgi-bin scripts, to retrieve pages from the web:

Some page in ~/ on localhost contains a
-> link, possibly the ACTION attribute of a <FORM...>, which points to
-> script reads ENV, composes a query, opens socket on remote host
-> script sends query
-> script reads and unpacks reply
-> script edits reply to suit me
-> script composes a response and hands it back to Apache
-> Apache returns response to my browser

and, occasionally, to look at pages across my LAN. So I don't have to manage httpd for numerous or random or possibly malicious users, heavy loads or other production matters. This all works fine on my older (2.4 kernel, apache-1.3.37) desktop system, and httpd doesn't show up in top(1) as active when nothing is calling on it.

So fetching, installing, figuring out, and tweaking a completely different web server sounds like totally unnecessary bother, possibly spawning new problems.

Michael Spencer
Nova Scotia, Canada
mspencer at tallships.ca
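The round trip described above could be sketched roughly as follows. This is only an illustration in Python, not the poster's actual cgi-bin script; the remote host name, the request format, and the 'edit' step are placeholders.

```python
#!/usr/bin/env python3
# Rough sketch of the flow above: read the CGI environment, query a remote
# host over a socket, tidy the reply, and hand an HTML response back to the
# local Apache. Host name, request format, and the edit step are placeholders.
import os
import socket

query = os.environ.get("QUERY_STRING", "")                 # script reads ENV
request = f"GET /?{query} HTTP/1.0\r\nHost: remote.example\r\n\r\n"

with socket.create_connection(("remote.example", 80), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))                  # script sends query
    chunks = []
    while True:                                            # script reads reply
        chunk = sock.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)

body = b"".join(chunks).split(b"\r\n\r\n", 1)[-1].decode("utf-8", "replace")
edited = body.replace("unwanted banner", "")               # script edits reply to suit me

print("Content-Type: text/html")                           # response handed back to Apache
print()
print(edited)
```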
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084892699.72/warc/CC-MAIN-20180123211127-20180123231127-00301.warc.gz
CC-MAIN-2018-05
1,418
28
https://artdiamondblog.com/archives/2007/04/post_103.html
code
Source of book image: http://ec1.images-amazon.com/images/P/0809557487.01._SS500_SCLZZZZZZZ_V38973347_.jpg There’s a new collection of science fiction stories entitled Creative Destruction (after one of the main stories in the collection that is also entitled "Creative Destruction"). I have not read the book, but used to enjoy reading science fiction, and hope to have a look before too long. I welcome comments from anyone who has read the book. Does Schumpeter get a mention? The reference to the book is: Lerner, Edward M. Creative Destruction. Rockville, MD: Wildside Press, 2006.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00702.warc.gz
CC-MAIN-2023-14
588
5
https://007software.net/tag/red-hat-enterprise-linux/
code
Executive Summary: The IKE daemons in RHEL7 (libreswan) and RHEL6 (openswan) are not vulnerable to the SLOTH attack. But the attack is still interesting to look at . The SLOTH attack released today is a new transcript collision attack against some security protocols that use weak or broken hashes such as MD5 or SHA1. While it mostly focuses on the issues found in TLS, it also mentions weaknesses in the “Internet Key Exchange” (IKE) protocol used for IPsec VPNs. While the TLS findings are very interesting and have been assigned CVE-2015-7575, the described attacks against IKE/IPsec got close but did not result in any vulnerabilities. In the paper, the authors describe a Chosen Prefix collision attack against IKEv2 using RSA-MD5 and RSA-SHA1 to perform a Man-in-the-Middle (MITM) attack and a Generic collision attack against IKEv1 HMAC-MD5. We looked at libreswan and openswan-2.6.32 compiled with NSS as that is what we ship in RHEL7 and RHEL6. Upstream openswan with its custom crypto code was not evaluated. While no vulnerability was found, there was some hardening that could be done to make this attack less dangerous that will be added in the next upstream version of libreswan. Specifically, the attack was prevented because: - The SPI’s in IKE are random and part of the hash, so it requires an online attack of 2^77 – not an offline attack as suggested in the paper. - MD5 is not enabled per default for IKEv2. - Weak Diffie-Hellman groups DH22, DH23 and DH24 are not enabled per default. - Libreswan as a server does not re-use nonces for multiple clients. - Libreswan destroys nonces when an IKE exchange times out (default 60s). - Bogus ID payloads in IKEv1 cause the connection to fail authentication. The rest of this article explains the IKEv2 protocol and the SLOTH attack. The IKEv2 protocol The IKE exchange starts with an IKE_INIT packet exchange to perform the Diffie-Hellman Key Exchange. In this exchange, the initiator and responder exchange their nonces. The result of the DH exchange is that both parties now have a shared secret called SKEYSEED. This is fed into a mutually agreed PRF algorithm (which could be MD5, SHA1 or SHA2) to generate as much pseudo-random key material as needed. The first key(s) are for the IKE exchange itself (called the IKE SA or Parent SA), followed by keys for one or more IPsec SAs (also called Child SAs). But before the SKEYSEED can be used, both ends need to perform an authentication step. This is the second packet exchange, called IKE_AUTH. This will bind the Diffie-Hellman channel to an identity to prevent the MITM attack. Usually these are digital signatures over the session data to prove ownership of the identity’s private key. Technically, it signs a hash of the session data. In TLS that signature is over the hash of the session data which made TLS more vulnerable to the SLOTH attack. The attack is to trick both parties to sign a hash which the attacker can replay to the other party to fake the authentication of both entities. They call this a “transcript collision”. To facilitate the creation of the same hash, the attacker needs to be able to insert its own data in the session to the first party so that the hash of that data will be identical to the hash of the session to the second party. It can then just pass on the signatures without needing to have private keys for the identities of the parties involved. It then needs to remain in the middle to decrypt and re-encrypt and pass on the data, while keeping a copy of the decrypted data. 
The IKEv2 COOKIE

The initial IKE_INIT exchange does not have many payloads that can be used to manipulate the outcome of the hashing of the session data. The only candidate is the NOTIFY payload of type COOKIE. Performing a Diffie-Hellman exchange is relatively expensive. An attacker could send a lot of IKE_INIT requests, forcing the VPN server to use up its resources. These could all come from spoofed source IP addresses, so blacklisting such an attack is impossible. To defend against this, IKEv2 introduced the COOKIE mechanism. When the server gets too busy, instead of performing the Diffie-Hellman exchange, it calculates a cookie based on the client's IP address, the client's nonce and its own server secret. It hashes these and sends the result as a COOKIE payload in an IKE_INIT reply to the client. It then deletes all the state for this client. If this IKE_INIT exchange was a spoofed request, nothing more will happen. If the request came from a legitimate client, this client will receive the IKE_INIT reply, see the COOKIE payload and re-send the original IKE_INIT request, but this time it will include the COOKIE payload it received from the server. Once the server receives this IKE_INIT request with the COOKIE, it will calculate the cookie data (again) and if it matches, the client has proven that it contacted the server before. To avoid COOKIE replays and thwart attacks attempting to brute-force the server secret used for creating the cookies, the server is expected to regularly change its secret.

Abusing the COOKIE

The SLOTH attacker is the MITM between the VPN client and VPN server. It prepares an IKE_INIT request to the VPN server but waits for the VPN client to connect. Once the VPN client connects, it does some work with the received data, which includes the proposals and nonce, to calculate a malicious COOKIE payload, and sends this COOKIE to the VPN client. The VPN client will re-send the IKE_INIT request with the COOKIE to the MITM. The MITM now sends this data to the real VPN server to perform an IKE_INIT there. It includes the COOKIE payload even though the VPN server did not ask for a COOKIE. Why does the VPN server not reject this connection? Well, the IKEv2 RFC-7296 states: "When one party receives an IKE_SA_INIT request containing a cookie whose contents do not match the value expected, that party MUST ignore the cookie and process the message as if no cookie had been included." The intention here was likely meant for a recovering server. If the server is no longer busy, it will stop sending cookies and stop requiring cookies. But a few clients that were just about to reconnect will send back the cookie they received when the server was still busy. The server shouldn't reject these clients now, so the advice was to ignore the cookie in that case. Alternatively, the server could just remember the last used secret for a while and, if it receives a cookie when it is not busy, just do the cookie validation. But that costs some resources too, which can be abused by an attacker to send IKE_INIT requests with bogus cookies. Limiting the time of cookie validation from the time when the server became unbusy would mitigate this. The paper contains an error when it talks about this COOKIE size: "To implement the attack, we must first find a collision between m1 and m'1. We observe that in IKEv2 the length of the cookie is supposed to be at most 64 octets but we found that many implementations allow cookies of up to 2^16 bytes. We can use this flexibility in computing long collisions."
It is not clear where the authors got the value of 64. The RFC does not mention anything about the maximum cookie size. The COOKIE value is sent as a NOTIFY payload. These payloads have a two-byte Payload Length value, so NOTIFY data is legitimately up to 2^16 (65535) bytes. Adding more bytes should not be possible. Any IKE implementation that reads more bytes than specified in the Payload Length value would be very broken. Assuming the COOKIE NOTIFY is the last payload in the packet, the attacker could increase the length specified in the IKE header and stuff additional bytes after this payload, but proper implementations would not read this data. In fact, libreswan encountered some interoperability problems when it did this by mistake while padding its IKE packets to a multiple of 8 bytes (as per IKEv1 but not IKEv2) and got its IKE packets rejected by various implementations. Still, the authors claim 65535 bytes is enough for their attack.

Attacking the AUTH hash

Assuming the above works, the attacker needs to find a collision between m1 and m'1. The only scenario they claim could be feasible is when MD5 is used for the authentication step in IKE_AUTH. An offline attack would then cost between 2^16 and 2^39 operations, which they say would take about 5 hours. As the paper states, IKEv2 implementations either don't support MD5, or if they do, it is not part of the default proposal set. It makes a case that the weak SHA1 is widely supported in IKEv2, but admits that using SHA1 will need more computing power (they listed 2^61 to 2^67, or 20 years). Note that libreswan (and openswan in RHEL) requires manual configuration to enable MD5 in IKEv2, but SHA1 is still allowed for compatibility.

The final step of the attack – Diffie-Hellman

Assuming the above succeeds, the attacker needs to ensure that g^xy' = g^x'y. To facilitate that, they use a subgroup confinement attack, and illustrate this with an example of picking x' = y' = 0. Then the two shared secrets would have the value 1. In practice this does not work, according to the authors, because most IKEv2 implementations validate the received Diffie-Hellman public value to ensure that it is larger than 1 and smaller than p – 1. They did find that Diffie-Hellman groups 22 to 24 are known to have many small subgroups, and implementations tend to not validate these. This led to an interesting discussion on one of the cypherpunks mailing lists about the mysterious nature of the DH groups in RFC-5114. These groups are not enabled in libreswan (or openswan in RHEL) by default, and require manual configuration precisely because the origin of these groups is a mystery.

The IKEv1 attack

The paper briefly brainstorms about a variant of this attack using IKEv1. It would be interesting because MD5 is very common with IKEv1, but the article is not really clear on how that attack should work. It mentions filling the ID payload with malicious data to trigger the collision, but such an ID would never pass validation.

Work has already started on updating the cryptographic algorithms deemed mandatory to implement for IKE. Note that this does not state which algorithms are valid to use, or which to use per default. This work is happening in the IPsec working group at the IETF and can be found at draft-ietf-ipsecme-rfc4307bis. It is expected to go through a few more rounds of discussion, and one of the topics that will be raised is the weak DH groups specified in RFC-5114.
Upstream Libreswan has hardened its cookie handling code, preventing the attacker from sending an uninvited cookie to the server without having their connection dropped.
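As a side note for readers unfamiliar with the stateless cookie check discussed above, the general idea can be sketched as follows. This is an illustrative Python sketch loosely following the construction suggested in RFC 7296, not libreswan's actual code; the hash choice and field layout are assumptions.

```python
# Sketch of a stateless IKEv2-style cookie: the server keeps only a secret
# (rotated regularly) and recomputes the cookie from the client's own data,
# so no per-client state is held until the client proves it can receive
# packets at its claimed source address.
import hashlib
import os

VERSION_ID = b"\x01"            # identifies which generation of the secret was used
SERVER_SECRET = os.urandom(32)  # regenerated periodically to limit replay and brute force

def make_cookie(client_nonce: bytes, client_ip: bytes, client_spi: bytes) -> bytes:
    digest = hashlib.sha256(client_nonce + client_ip + client_spi + SERVER_SECRET).digest()
    return VERSION_ID + digest

def cookie_is_valid(cookie: bytes, client_nonce: bytes, client_ip: bytes, client_spi: bytes) -> bool:
    # Stateless check: recompute from the same inputs and compare.
    return cookie == make_cookie(client_nonce, client_ip, client_spi)
```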
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00310.warc.gz
CC-MAIN-2024-10
10,702
35
https://tecnstuff.net/how-to-remove-symbolic-links-in-linux/
code
A symbolic link, also known as a symlink or soft link, is a special type of file that serves as a reference to another file or directory. A symlink can point to a file or a directory on the same or a different filesystem or partition. This guide explains how to remove symbolic links on Linux systems.

Before going ahead with removing a symbolic link, make sure you have write permission on the parent directory of the symlink. Otherwise, you will get an "Operation not permitted" error.

To inspect a symlink and find the destination directory or file it points to, use the ls -l command:

ls -l /home/file.php
lrwxrwxrwx 1 tecnstuff tnsgrp 4 May 2 14:03 /home/file.php -> file_link.php

In the above output, the first character l shows that the file is a symlink, and the arrow -> symbol indicates where the symlink points to.

Remove Symbolic Links with rm

To remove a symbolic link, use the rm command followed by the symbolic link name. With the rm command you can remove the given files or directories. For example, to delete the /home/file.php symlink, you would run the following command:

rm /home/file.php

It produces no output and exits with status zero. If you would like to delete more than one symbolic link, you can pass multiple symlink names as arguments:

rm SYMLINK_NAME_1 SYMLINK_NAME_2

If you would like to be prompted for confirmation before deleting the symlink, pass the -i option:

rm -i SYMLINK_NAME

You will get the following output:

rm: remove symbolic link 'SYMLINK_NAME'?

Type y and press the Enter key to confirm. Ensure that you never use the -r option with the rm command while removing a symlink. Otherwise it will remove all the contents of the destination directory.

Remove Symbolic Links with unlink

The unlink command removes the given symlink. It is possible to delete only a single file at a time using unlink. To remove a symlink using unlink, run the command followed by the symlink name. For instance, to remove the /home/file.php symlink, you would run the following:

unlink /home/file.php

When removing a symbolic link that points to a directory, do not append a trailing slash to the symlink name.

This tutorial has shown you how to remove symbolic links (symlinks) using the rm and unlink commands. If you have any question or feedback, please leave a comment below.
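If you prefer to script the same operation, a small Python equivalent is sketched below. It is not part of this tutorial's commands; it simply mirrors the advice above: only unlink paths that really are symbolic links, and avoid trailing slashes.

```python
#!/usr/bin/env python3
# Remove the given paths only if they are symbolic links, mirroring the
# precautions described in the tutorial above.
import os
import sys

def remove_symlink(path: str) -> None:
    # Drop a trailing slash so a link to a directory is unlinked,
    # not treated as the directory it points to.
    trimmed = path.rstrip("/") or path
    if not os.path.islink(trimmed):
        raise ValueError(f"{trimmed} is not a symbolic link")
    os.unlink(trimmed)  # the same unlink() system call that rm and unlink use

if __name__ == "__main__":
    for name in sys.argv[1:]:
        remove_symlink(name)
        print(f"removed symlink {name}")
```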
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510179.22/warc/CC-MAIN-20230926075508-20230926105508-00600.warc.gz
CC-MAIN-2023-40
2,179
35
http://cerebralmeltdown.com/forum/index.php?topic=412.0
code
I'm not sure what your first question is asking: "Does the east and west degrees have to be equal?" If you are referring to the maximum and minimum software limits, the answer is no, the angle values don't have to be equal.

If your machine is moving to the correct altitude and azimuth values when you manually input them, then I think we can narrow down the problem to just a few possibilities. The first would be, like you said, that the latitude and longitude coordinates might be wrong. If the getlatlon site isn't working, you could try using a GPS if you have one. I oftentimes use Google Earth to find latitude and longitude coordinates. If you go to Tools >> Options, you will even find an option to display the latitude and longitude values in decimal degrees under "Show Lat/Long". That way you don't have to convert them yourself. Then you can just travel to your house and read the displayed coordinates. If you send me your address, I can double check it for you. For a location in Germany, you should have a latitude of about +50 degrees and a longitude of about +10 degrees. A common error is to accidentally use a negative value where you should have put a positive one.

Another possibility is that you have the time set incorrectly. Maybe go through the instructions again. http://www.cerebralmeltdown.com/setting-the-time-on-the-real-time-clock/

Double checking the calculated angles in another program can actually make things a lot more confusing, because you can easily make a mistake when inputting your settings in that program as well. I do have this PC-based program here that you might be able to use. http://cerebralmeltdown.com/forum/index.php?topic=361.0 If you have a smart phone, you should be able to find applications for it that will display the altitude and azimuth of the sun. Since most smart phones have a built-in GPS and should automatically have the correct time, it is less likely that you will accidentally use the incorrect values.

A few more random thoughts... You might try getting the sun tracking to work first. If that works, then heliostat mode should also work. Have you tried resetting the Arduino after the beam has drifted to see if it ends up where it is supposed to be after the reset has finished? Are you sure you have the machine aligned correctly? If you are reasonably certain you have everything else set up correctly, you can put the machine in sun tracking mode and then adjust the altitude and azimuth until it points at the sun. Which Arduino are you using, the Uno or the Mega?

Those are my thoughts on the issue. Let us know how it goes.
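If you do end up converting coordinates by hand instead of switching Google Earth to decimal degrees, a tiny helper like the one below avoids the usual sign mistake. It is only an illustration in Python (not part of the Arduino code), and the example values are made-up coordinates for somewhere around +50 latitude in Germany.

```python
def dms_to_decimal(degrees: float, minutes: float, seconds: float, hemisphere: str) -> float:
    """Convert degrees/minutes/seconds to signed decimal degrees.

    South latitudes and West longitudes come out negative, matching the
    convention above (Germany is roughly +50 latitude, +10 longitude).
    """
    value = abs(degrees) + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere.upper() in ("S", "W") else value

# Example: 50 deg 6' 36" N, 8 deg 41' 24" E  ->  approximately (50.11, 8.69)
print(dms_to_decimal(50, 6, 36, "N"), dms_to_decimal(8, 41, 24, "E"))
```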
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203515.32/warc/CC-MAIN-20170322213003-00511-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
2,598
15
https://math.stackexchange.com/questions/1218354/why-are-vdash-and-vdash-symbols-from-metalanguage
code
I can explain it like this. Given a set $X$, write $X^*$ for the collection of all finite sequences in $X$, including the empty sequence. Now let $L$ denote an arbitrary set, which we think of as a "language"; so you should be thinking of the elements of $L$ as "formulae". Then:

Definition. An inference relation over $L$ is a subset $\vdash$ of $L^* \times L$ subject to certain axioms, like:

1. For all $\Gamma \in L^*$ and all $\varphi \in L$, if $\varphi$ occurs somewhere in $\Gamma$, then $\Gamma \vdash \varphi$.

Note that we write $\Gamma \vdash \varphi$ as a more readable alternative to the more correct $(\Gamma,\varphi) \in \;\vdash$. This notation can be seen in axiom 1, for example.

Now here's where the ambiguity creeps in. Suppose $L$ is the set of all strings featuring the symbols $0,1$, the comma symbol, and the symbol $\vdash$. So a generic element of $L$ looks like: $$01,1 \vdash 1$$

You can see the issue, right? If we suppose furthermore that $\vdash$ is an inference relation on $L$, then we cannot tell what "$01,1 \vdash 1$" means. It could be an element of $L$. Or, it could be the writer attempting to claim that $(\langle 01,1\rangle,1) \in \;\vdash$. Without further information, we cannot know.

To avoid this kind of ambiguity, we would choose a different symbol for the inference relation, as in:

Suppose furthermore that $\vdash'$ is an inference relation on $L$.

It now becomes clear that $01,1 \vdash 1$ is intended to denote an element of $L$, whereas $01,1 \vdash' 1$ is expressing the proposition that $(\langle 01,1\rangle,1) \in \;\vdash'$.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257243.19/warc/CC-MAIN-20190523103802-20190523125802-00050.warc.gz
CC-MAIN-2019-22
1,561
10
https://www.edureka.co/community/185342/jquery-ui-datepicker-change-date-format
code
I am using the UI DatePicker from jQuery UI as the stand alone picker. I have this code: And the following JS: When I try to return the value with this code: var date = $('#datepicker').datepicker('getDate'); I am returned this: Tue Aug 25 2009 00:00:00 GMT+0100 (BST) Which is totally the wrong format. Is there a way I can get it returned in the format DD-MM-YYYY?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100674.56/warc/CC-MAIN-20231207121942-20231207151942-00639.warc.gz
CC-MAIN-2023-50
366
7
https://www.smartdcc.co.uk/customer-hub/consultations/ichis-consultation/
code
DCC is consulting on amendments to the Intimate Communications Hub Interface Specification (ICHIS). Amendments are focused on:
- Removing the Bit Error Rate (BER) test requirement;
- Moving the Radio Frequency Noise limits from the Communications Hub (Comms Hub) data sheets into a new Appendix B of ICHIS;
- Replacing the current test content in ICHIS with a new ICHIS test specification;
- Adding requirements to test multiple meters;
- Adding a new Appendix A for CHAS information;
- Making other minor changes to the specification, such as amending references to standards and adding definitions to the glossary.
DCC Wider Changes to ICHIS v1.0
Intimate Communications Hubs Interface Specification v2.0 Draft v1.0
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540491871.35/warc/CC-MAIN-20191207005439-20191207033439-00406.warc.gz
CC-MAIN-2019-51
714
10
https://biodatamining.biomedcentral.com/articles/10.1186/1756-0381-7-8
code
A classification and characterization of two-locus, pure, strict, epistatic models for simulation and detection BioData Mining volume 7, Article number: 8 (2014) The statistical genetics phenomenon of epistasis is widely acknowledged to confound disease etiology. In order to evaluate strategies for detecting these complex multi-locus disease associations, simulation studies are required. The development of the GAMETES software for the generation of complex genetic models, has provided the means to randomly generate an architecturally diverse population of epistatic models that are both pure and strict, i.e. all n loci, but no fewer, are predictive of phenotype. Previous theoretical work characterizing complex genetic models has yet to examine pure, strict, epistasis which should be the most challenging to detect. This study addresses three goals: (1) Classify and characterize pure, strict, two-locus epistatic models, (2) Investigate the effect of model ‘architecture’ on detection difficulty, and (3) Explore how adjusting GAMETES constraints influences diversity in the generated models. In this study we utilized a geometric approach to classify pure, strict, two-locus epistatic models by “shape”. In total, 33 unique shape symmetry classes were identified. Using a detection difficulty metric, we found that model shape was consistently a significant predictor of model detection difficulty. Additionally, after categorizing shape classes by the number of edges in their shape projections, we found that this edge number was also significantly predictive of detection difficulty. Analysis of constraints within GAMETES indicated that increasing model population size can expand model class coverage but does little to change the range of observed difficulty metric scores. A variable population prevalence significantly increased the range of observed difficulty metric scores and, for certain constraints, also improved model class coverage. These analyses further our theoretical understanding of epistatic relationships and uncover guidelines for the effective generation of complex models using GAMETES. Specifically, (1) we have characterized 33 shape classes by edge number, detection difficulty, and observed frequency (2) our results support the claim that model architecture directly influences detection difficulty, and (3) we found that GAMETES will generate a maximally diverse set of models with a variable population prevalence and a larger model population size. However, a model population size as small as 1,000 is likely to be sufficient. The phenomenon of epistasis, or gene-gene interaction, confounds the statistical search for main effects, i.e. single locus associations with phenotype . The term epistasis was coined to describe a genetic ‘masking’ effect viewed as a multi-locus extension of the dominance phenomenon, where a variant at one locus prevents the variant at another locus from manifesting its effect . In the context of statistical genetics, epistasis is traditionally defined as a deviation from additivity in a mathematical model summarizing the relationship between multi-locus genotypes and phenotypic variation in a population . Alternate definitions and further discussion of epistasis is given in [1, 4–9]. Limited by time and technology, and drawn by the appeal of “low hanging fruit”, it has been typical for genetic studies to focus on single locus associations (i.e. main effects). Unfortunately, for those common diseases typically regarded as complex (i.e. 
involving more than a single loci in the determination of phenotype) this approach has yielded limited success [10, 11]. The last decade has seen a gradual acknowledgment of disease complexity and greater focus on strategies for the detection of complex disease associations within clinical data [1, 12–14]. Beyond the detection of complex multilocus genetic models, theoretical investigations have also pursued their enumeration, generation, and classification. These theoretical works seek to lay the foundation for the identification and interpretation of multilocus associations as they may appear in genetic studies. A natural stepping stone towards understanding complex multilocus effects is the examination of two-locus models. Early on, Neuman and Rice considered epistatic two-locus disease models for the explanation of complex illness inheritance, highlighting the importance of looking beyond a single locus. Li and Reich classified all 512 fully penetrant two-locus models, in which genotype disease probabilities (i.e. penetrances) were restricted to zero and one. This work emphasized diversity of complex models beyond the typical two-locus models previously considered by linkage studies. Of these models, only a couple exhibit what was later referred to as “purely” epistatic interactions. Pure refers to epistasis between n loci that do not display any main effects [13, 17–20]. Alternatively, impure epistasis implies that one or more of the interacting loci have a main effect contributing to disease status [19, 20]. Hallgrimsdottir and Yuster later expanded this two-locus characterization to include models with continuous penetrance values. Within a population of randomly generated two-locus models, they characterized 69 “shape-based” classes of impure epistatic models. In addition, they observed that the “shape” of a model (1) reveals information about the type of gene interaction present, and (2) impacts the power (i.e. frequency of success) in detecting the underlying epistasis. Taking aim at pure epistasis, Culverhouse et. al. described the generation of two to four-locus purely epistatic models and explored the limits of their detection. Working with a precisely defined class of models such as pure epistasis offered a more mathematically tractable set for generation and investigation. The value of their work was not to suggest that purely epistatic models necessarily reflect real genetic interactions, but rather the ability extrapolate their findings to more likely epistatic models possessing small main effects. Similar to these earlier works, the present study focus on statistical epistasis, which is the phenomenon as it would be observed in case-control association studies, quantitative trait loci (QTL) mapping, or linkage analysis. Exclusively, we focus on a precise subclass of epistasis which we refer to as pure and strict. Strict, conceptually alluded to in , refers to epistasis where n loci are predictive of phenotype but no proper multi-locus subset of them are [19, 20]. Of note, all two-locus purely epistatic models are strict by default since no other subsets are possible with only two-loci. The loci in pure, strict models could be viewed as “fully masked” in that no predictive information is gained until all n loci are considered in concert. Therefore these models may be considered “worst case” in terms of detection difficulty. 
While this exact, extreme class of models is unlikely to be pervasive within real biological associations, they offer a gold standard for evaluating and comparing strategies for the detection and modeling of multiple predictive loci. A handful of studies have introduced methods for generating epistatic models [18, 22–24], including our own Genetic Architecture Model Emulator for Testing and Evaluating Software (GAMETES), designed to randomly generate an architecturally diverse population of pure, strict, epistatic models. Architecture references the unique composition of a model (e.g. the particular penetrance values and the arrangement of those values across genotypes). Additionally, in an Ease of Detection Measure (EDM) was introduced and incorporated into GAMETES, offering a predictor of model detection difficulty calculated directly from the penetrance values and genotype frequencies of a given genetic model. Previously we demonstrated that a 2-locus model's EDM was more strongly and significantly correlated with detection power than heritability or any other metric considered. Detection power was determined separately using three very different, cutting-edge data search algorithms in order to establish EDM calculation as a simple alternative to completing model detection power analyses. In the present study we refine the characterization of two-locus models described in and to a more specific subset of models defined as having pure, strict epistasis. We generate these models using GAMETES [19, 20] and apply the geometric approach used in to similarly identify shape model classes. Next, we examine whether model EDM scores (a surrogate measure of detection difficulty) differ between these shape groups as well as between groups with the same number of edges in their projected shapes. Then, we evaluate the impact of GAMETES model population size, as well as the effect of fixing population prevalence (K) or allowing it to vary randomly, on observed model shape coverage and EDM score range. This study expands our theoretical understanding of a particularly challenging class of multi-locus models and suggests novel insight into the effective generation of complex models with GAMETES.

In this section, we describe (1) the modeling of epistasis with GAMETES, (2) the triangulation of model shape, and (3) our experimental evaluation.

Modeling 2-Locus pure strict epistasis

Single nucleotide polymorphisms (SNPs) are loci in the DNA sequence which can serve as markers of phenotypic variation. The term genotype has been used to refer both to the allele states of a single SNP and to the combined allele states of multiple SNPs. Herein, we will refer to the latter as a multi-locus genotype (MLG) whenever necessary. Penetrance functions represent one approach to modeling the relationship between genetic variation and a dichotomous trait. Penetrance is the probability of disease given a particular genotype or MLG. Our models assume Hardy-Weinberg equilibrium, such that the allele frequencies for a SNP may be used to calculate its genotype frequencies as follows: freq(AA) = p^2, freq(Aa) = 2pq, and freq(aa) = q^2, where p is the frequency of the major (more common) allele 'A', q is the minor allele frequency (MAF) where 'a' is the minor allele, and p + q = 1. Penetrance functions may be constructed to describe n-locus interactions between n predictive loci using a penetrance function comprised of 3^n penetrance values corresponding to each of the 3^n MLGs.
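To make the definitions above concrete, the following is an illustrative check written for this text (it is not GAMETES source code): it builds the nine two-locus MLG frequencies under Hardy-Weinberg equilibrium and tests whether a penetrance table is purely epistatic, i.e. whether each single SNP's marginal penetrance equals the population prevalence K, so that neither locus carries a main effect.

```python
import numpy as np

def genotype_freqs(maf: float) -> np.ndarray:
    # Hardy-Weinberg frequencies for genotypes (AA, Aa, aa).
    p, q = 1.0 - maf, maf
    return np.array([p * p, 2.0 * p * q, q * q])

def is_pure(penetrance: np.ndarray, maf1: float, maf2: float, tol: float = 1e-9) -> bool:
    """True if neither SNP shows a main effect on its own (pure epistasis)."""
    f1, f2 = genotype_freqs(maf1), genotype_freqs(maf2)
    joint = np.outer(f1, f2)                       # 3x3 table of MLG frequencies
    k = float((joint * penetrance).sum())          # population prevalence K
    marg1 = (joint * penetrance).sum(axis=1) / f1  # P(disease | genotype at SNP 1)
    marg2 = (joint * penetrance).sum(axis=0) / f2  # P(disease | genotype at SNP 2)
    return np.allclose(marg1, k, atol=tol) and np.allclose(marg2, k, atol=tol)

# A fully penetrant "checkerboard" (XOR-style) model is purely epistatic when
# both minor allele frequencies are 0.5, but not otherwise.
xor_like = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]], dtype=float)
print(is_pure(xor_like, 0.5, 0.5))  # True
print(is_pure(xor_like, 0.2, 0.2))  # False
```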
Table 1 gives an example of an epistatic model that is both pure and strict. For convenience all values in the table have been rounded to three decimals places. While fully penetrant models, like the ones characterized in are easy to interpret, they are rarely representative of real world relationships between genotype and disease. A common example of a fully penetrant, purely epistatic 2-locus model based on the XOR function is given in Table 2. More realistic models, like the one in Table 1 and the ones typically generated by GAMETES, possess penetrance values between 0 and 1. Each of the nine entries in Table 1 corresponds to one of the nine possible MLGs combining SNPs 1 and 2. For instance, subjects that have the MLG aa-bb have a 14.7% chance of having disease. What makes these penetrance functions purely epistatic is that while the genotypes of SNPs 1 and 2 are together predictive of disease status, neither is individually. Further discussion of what makes models purely and strictly epistatic is given in . The GAMETES strategy for generating random, n-locus, pure, strict epistatic models is briefly reviewed here. Each n-locus model is generated deterministically, based on a set of pseudo random parameters, a randomly selected direction, and specified values of heritability, MAFs, and population prevalence (K). The GAMETES algorithm first (1) generates 2n random parameters and a random unit vector in , then (2) generates a random pre-penetrance function by seeding these parameters using the unit vector, and then (3) uses a scaling function to scale the entries of this random pre-penetrance function to generate a random penetrance function. To obtain a random penetrance function having a specified heritability, or heritability and K, it further (4) scales the entries of this penetrance function to achieve, if possible, these values. If steps (1) or (4) are not successful the algorithm starts over, attempting to generate models until either the desired model population size or the iteration limit is reached. For a detailed explanation of this strategy see . EDM is utilized by GAMETES to select model architectures that span the range of predicted difficulties . This allows for the design of a simulation study which diversifies model architecture based on detection difficulty. First we generated a population of pure, strict, epistatic models of random architecture sharing commonly specified genetic constraints (i.e. number of loci, heritability, MAFs, and K). GAMETES allows the user to specify a population size of models from which some will be selected to generate simulated genetic datasets. Certain constraint combinations may yield few or no viable models . Therefore, GAMETES runs until either the desired population size or a maximum attempt limit is reached. Once one of the aforementioned stopping criteria is met, all models (each with the same constraints) were ordered by their EDM. At this point, GAMETES select some number of models to represent the range of observed EDMs. By default, GAMETES selects two models from this distribution, representing the highest and lowest EDM scores. A higher EDM indicates that a given model will be easier to detect than a model with a lower EDM. For the purposes of this study, we directed GAMETES to instead report the entire population of models generated by GAMETES. Shapes of two-locus models The triangulation, or shape, of a model is used here to generalize it’s architecture and offer a classification of the type of interaction present. 
This geometric classification of epistasis was first applied to haploid models in , and extended to diploid two-locus QTL models in . Overall, our approach was similar to , except that we used Qhull as opposed to TOPCOM to compute triangulations of the models. Consider the example model given in Figure 1A. First, we place points in space where the x and y coordinates represent the 9 MLGs of this two-locus model and the z coordinates (or heights) of these points are the penetrance values at these MLGs (see Figure 1B). Four additional points are placed at the outside corners of the x-y coordinates. Each additional point has an equal, negative height (not shown in Figure 1B). This was done so that Qhull could correctly discern the convex hull formed by these MLG heights. A model's shape is defined by the upper faces of the convex hull of these heights. As explained in , this surface would intuitively be formed by draping a piece of stiff cloth over these points. The point coordinates are passed to Qhull, which determines the convex hull and projects the upper faces (i.e., the creases of the surface) onto an xy-plane. This projection results in a set of polygons. Irrelevant polygons which include any of the four reference points of negative height are discarded. A unique set of polygons determines the classification of model shape (see Figure 1C). A mathematical definition of triangulation is given in . As in and we take symmetry into account when defining shape classes. Symmetry is determined by (1) interchanging locus 1 and locus 2, or (2) interchanging two alleles at one or both loci. In , shape classes were further characterized by circuits (i.e. linear combinations of penetrance values) which decompose the main and epistatic effects of a model. In the present study, all models being classified are purely epistatic, having no main effects to decompose. Circuits could be used to decompose the types of interaction effects characterizing a model; however this is beyond the scope of the present study. We use GAMETES to generate differently sized populations of pure, strict, two-locus epistatic models possessing different constraint combinations. Specifically, populations were generated for heritabilities of 0.005, 0.01, 0.025, 0.05, 0.1, or 0.2, MAFs of 0.2 or 0.4, and with population prevalence (K) either fixed to 0.3 or allowed to vary to any value between 0 and 1. Thus, a total of 24 constraint combinations were considered (6 heritabilities ∗ 2 MAFs ∗ 2 prevalence settings). Heritability and MAF constraints were selected to be consistent with previous work using GAMETES [19, 20], and the K value of 0.3 was selected based on the limits described in to ensure that the specified combinations of heritability and MAF would yield models. We explore a variable K since a specific population prevalence is rarely of interest in simulation studies, and previous findings in indicated that a variable K facilitated viable model discovery in GAMETES. For each constraint combination above, GAMETES was used to generate a population of models of sizes 1,000, 10,000, and 100,000, yielding a total of 72 different populations of models. Altogether, 2,664,000 models were generated, which is similar in magnitude to the 1,000,000 random models examined in . Within each of these 72 populations we characterize all model shapes as previously described. Additionally, we further generalize model shape by categorizing models by the number of edges as well as the number of polygons (triangles) existing within its shape class. 
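As a rough sketch of the projection step described above (our own illustration rather than the exact code used in this study; it relies on SciPy's ConvexHull, which wraps the Qhull library, and omits the subsequent grouping of triangles into polygons and symmetry classes):

```python
import numpy as np
from scipy.spatial import ConvexHull  # SciPy's ConvexHull wraps the Qhull library

def shape_triangulation(penetrance):
    """Return the upper-face triangles of a 3x3 penetrance function, projected
    onto the x-y plane, as index triples into the 9 MLG points."""
    # The 9 MLG points: (SNP 1 genotype, SNP 2 genotype, penetrance value).
    pts = [(i, j, penetrance[i][j]) for i in range(3) for j in range(3)]
    # Four reference corners of equal negative height outside the 3x3 grid,
    # so the upper surface of the hull is well defined.
    corners = [(-1, -1, -1.0), (-1, 3, -1.0), (3, -1, -1.0), (3, 3, -1.0)]
    points = np.array(pts + corners, dtype=float)

    # 'QJ' joggles the input slightly, which helps with degenerate (coplanar) models.
    hull = ConvexHull(points, qhull_options="QJ")
    triangles = []
    for simplex, eq in zip(hull.simplices, hull.equations):
        if eq[2] <= 0:                     # keep only upper faces (outward normal points up)
            continue
        if any(v >= 9 for v in simplex):   # drop faces touching a reference corner
            continue
        triangles.append(tuple(sorted(int(v) for v in simplex)))
    return sorted(triangles)
```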
Observations in suggested that the power to detect randomly generated, impure epistatic models was correlated with model shape. Extending these findings, we examine how model detection difficulty differs between shape classes observed in populations of pure, strict, epistatic models. We utilize EDM as a surrogate for detection difficulty or power, where power is used to describe the frequency of successful detection of a model. EDM is calculated directly from the penetrance function, circumventing the need to generate simulated datasets and perform a secondary evaluation of power. The non-parametric Kruskal-Wallis test was used to evaluate whether model EDM significantly differed within separate shape classes, as well as between groups defined by the number of edges in the model projection. Mann-Whitney pairwise comparisons were subsequently utilized to look for EDM differences between models with a specific number of projection edges. Results and discussion Across all 72 populations of GAMETES-generated pure, strict, two-locus epistatic models, we identified 33 unique shape classes. This is in contrast to the 69 symmetry shape classes identified when not restricting models to pure, strict, epistasis . It is important to note that pure, strict 2-locus epistatic models are not limited to these 33 shape classes, but rather that these are the only shape classes we observed when generating over two million genetic models with GAMETES. Case in point, we did not observe the shape class for the classic XOR penetrance function given in Table 2 (which would look like a baseball diamond including 4 edges). Strictly speaking, the XOR model diamond 'shape' is not a triangulation (since it is not comprised entirely of triangles), but rather it is a subdivision. Subdivisions which are not triangulations are unlikely to be generated randomly. Also note that shape class 24 in Figure 2 is a refinement of that subdivision. Models such as this, and potentially other unique shape classes, have an extremely low probability of being generated. Figure 2 illustrates the projections which depict the 33 observed shape symmetry classes. The ID numbers for shape symmetry classes were arbitrarily assigned according to the order in which each unique class was identified within the model populations. Notice that different shape triangulations possess different numbers of edges. For example, the only triangulation with a single edge is symmetry class 4, and the only triangulations with two edges are symmetry classes 6 and 9. We observe a maximum of 6 edges in our symmetry class projections. Shape classes are organized by number of edges in Table 3. Towards the characterization of our now shape-classified models, we first explore all 36 model populations with a fixed K. For each of the three model population sizes (1,000, 10,000 and 100,000) we examine the distribution of EDM scores obtained across all 12 constraint combinations of heritability and MAF. Similarly, we examine the number of models that have been randomly generated by GAMETES for each shape class. Figure 3 illustrates these findings for EDM distribution and frequency of shape class occurrence for population sizes of 100,000. Notice that the distribution of EDM scores as well as respective median values can be dramatically different from one shape class to another. Kruskal-Wallis testing confirmed that the EDMs of models found in different shape groups were significantly different (P <<0.001). 
This significance held for every population size examined, and whether K was fixed or not. Figure 3 also illustrates the number of models identified within each of the 12 combination populations that belong to a respective shape symmetry class. Examining a column of boxes indicates the coverage of shape classes within one of the 12 populations. If a column has no *'s, then the respective model population included at least one model of each of the 33 observed shape classes. If a row of boxes has no *'s, then at least one model was found for the respective shape class no matter which of the 12 constraint combinations was used. This figure not only illustrates the variability of EDM between shape classes, but the relative likelihood that these shape classes will be randomly generated for different model constraint combinations. It may be interesting in follow-up research to compare the frequency with which we observe specific real-world epistatic model shape classes relative to what has been observed through random generation. For reference, it may also be useful to note that in , we observed that for all models with an EDM greater than 0.01 the multifactor dimensionality reduction (MDR) software had significant power (>80%) to detect them within a dataset including 20 attributes and 800 samples. Figure 4 offers an illustration identical to that found in Figure 3 except that only 1,000 models were generated for each of the 12 constraint combinations of heritability and MAF. The most obvious difference in comparing Figures 3 and 4 is that when a smaller model population size was used, the shape class coverage within each of the 12 populations decreased. In other words, as might be expected, the diversity of model shapes observed for different combinations of heritability and MAF became limited within a smaller population. Results for a population size of 10,000 fit this trend (see Figure S1 of the Additional file 1). Interestingly, Figure 4 also indicates that while some shapes were clearly less likely to be generated by GAMETES, all 33 shape classes were still represented within at least 1 of the 12 populations. Notably, in both the 1,000 and 100,000 population examples, specifying a heritability of 0.2 along with a MAF of 0.2 tended to particularly limit the diversity of model shapes that could be generated. Models belonging to shape classes 21 and 23 were instead dramatically more prevalent given these constraints. Similar figures for populations of all three sizes and a variable K (instead of a fixed K) are given in the Additional file 1 (Figures S2, S3, and S4). As for the fixed K populations, within the variable K populations Kruskal-Wallis testing confirmed that the EDMs of models found in different shape groups were significantly different (P <<0.001). However, we observed one key difference when allowing K to vary. Specifically, the models generated were more evenly distributed across different shape classes, which simultaneously improved class coverage such that more shape classes were represented within each of the 12 constraint combination populations with variable K than with fixed K. This finding is intuitive since allowing K to vary is less mathematically restrictive for penetrance function generation, and thus we would expect greater diversity. Figure 5 summarizes trends in both EDM scores and shape class coverage across all 72 populations. Specifically, Figure 5A summarizes the EDM range (i.e. the maximum EDM minus the minimum EDM observed within a respective population). 
Since GAMETES selects models representative of this model difficulty range, maximizing this range encourages the selection of models representing the easiest and most challenging cases based on model architecture alone. This is valuable for developing the most thorough simulation study possible. Notice how, for a fixed K, the EDM range is largely consistent across different population sizes. However, when comparing EDM ranges between fixed and variable K, we see that variable K can, for certain constraints (such as MAF = 0.2 and larger heritabilities), increase the EDM range. Also, notice that EDM range tends to increase with heritability, suggesting that architecture may be most important to consider for higher heritability models within simulation studies. Maximum and minimum EDM values for all populations are summarized in the Additional file 1: Figure S5. Figure 5B summarizes the shape class coverage for each of the 72 populations. This is the number of shape classes which were represented by models in the respective populations (where all 33 identified shape classes is the maximum). Observe that coverage tends to decrease along with population size whether K is fixed or not. However, a variable K somewhat reduces the loss in coverage. Finally, we return to the generalization of shape class projections by the number of edges or the number of triangles. While we considered both generalized classifications, here we only report the results for generalizing by edge number, as we found that it better captured underlying differences in detection difficulty (see Additional file 1 for the results of generalizing by number of triangles). Figure 6 explores trends in model EDM scores broken down by the number of edges in respective shape class projections. Keeping in mind that models with a higher EDM are generally easier to detect, models with 2 or 3 edges tend to be the most challenging to detect (lowest median EDM scores), models with 1 or 4 edges are somewhat easier to detect, and models with 5 or 6 edges are, on average, the easiest to detect. Kruskal-Wallis testing indicates that EDM scores are significantly different depending on the number of edges in the model (P <<0.001). Mann-Whitney pairwise comparisons reveal that nearly all pairwise differences between edge-number groups are significant, as suggested by the box-plot notches in Figure 6 (P <0.05) (see Additional file 1 for pairwise comparisons). The only exceptions are found for fixed K models in populations of 1,000 and 10,000, where the difference in average EDM between having 1 vs. 4 edges was not significant. Also of note, the proportion of models having a specific number of edges in their shape projections appears to be relatively stable, regardless of population size or how K is set (see Additional file 1: Figure S6). One other intuitive observation from this figure is that when K is fixed at 0.3 the range of observed EDM scores is narrower than when GAMETES is allowed to explore a variable K. In we had previously demonstrated that, while holding all other model constraints constant (i.e. heritability, minor allele frequency, and K), inherent model 'shape' could still significantly influence EDM. While beyond the scope of the present study, it might be interesting to further dissect the influence of model shape from that of K on a model's detection difficulty. 
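The statistical comparisons described above could be reproduced along the following lines (an illustrative sketch assuming SciPy, not the original analysis code; as in the text, no multiple-testing correction is applied to the pairwise comparisons):

```python
import numpy as np
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def compare_edge_groups(edm_scores, edge_counts):
    """Kruskal-Wallis test of EDM across edge-count groups, followed by
    unadjusted pairwise Mann-Whitney comparisons.

    edm_scores and edge_counts are parallel sequences: one EDM value and one
    projection edge count per generated model."""
    edm_scores = np.asarray(edm_scores)
    edge_counts = np.asarray(edge_counts)
    groups = {e: edm_scores[edge_counts == e] for e in np.unique(edge_counts)}

    h_stat, p_overall = kruskal(*groups.values())
    pairwise = {}
    for a, b in combinations(sorted(groups), 2):
        _, p = mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        pairwise[(a, b)] = p
    return h_stat, p_overall, pairwise
```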
This study pursued three goals: (1) Classify and characterize pure, strict 2-locus epistatic models, (2) explore the relationship between model architecture and detection difficulty in the generalized context of model shape, and (3) explore the maintenance of model architecture diversity in GAMETES-generated model populations to establish guidelines for effective complex model generation. Our focus on such a precise, challenging class of epistatic models lends itself to both simulation studies, in which a gold standard for algorithmic evaluation is desirable, and to real world model detection where our characterization of a more mathematically tractable class of epistasis may facilitate the characterization of interaction in an observed biological model. Our geometric classification of pure, strict, two-locus epistasis model shapes revealed 33 unique shape symmetry classes having 1 to 6 edges. This is in contrast to the 69 symmetry classes having 1 to 8 edges identified in randomly generated impure epistatic models . The shape of a two-locus model may be used to classify and offer a visual representation of the type of gene interaction present. This classification of model shape might be applied to epistatic models identified in real world analyses in order to determine associated shape classes and EDM scores. It would be interesting to explore whether the model shapes of real world interactions tended to correspond with shapes that GAMETES generates more frequently by chance. Having previously demonstrated the ability of EDM to predict detection power in , the present evaluation of model EDM transitively indicates that the shape of pure, strict epistatic models significantly influences detection difficulty. This is also in line with the claim made by that an epistatic model’s shape impacts the power to detect it. Additionally, this finding highlights the importance of taking model architecture into consideration when generating models and datasets for the evaluation of algorithms. Further characterization of shape classes, grouped by number of edges in the shape projection, reveals that the number of edges alone is also predictive of relative detection difficulty. The experimental variation of population size and K revealed the overall consistency of the GAMETES model generation strategy. GAMETES was designed to generate a random population of genetic models with diverse model architectures. We gauge this diversity on (1) the range of model EDMs (maximum and minimum observed EDM values), and (2) the number of shape classes observed in the generated population of models. Maintaining model diversity is ideal when constructing a simulation study which efficiently covers the space in which real biological disease associations could appear. Our results suggest that when generating models for a simulation study GAMETES effectively maintains diversity even at population size of only 1,000. However for maximal architectural diversity, a population size of 100,000 combined with a variable K is optimal. Combining GAMETES with geometric model classification could be used to select models for a simulation study which are both representative of each model class and representative of maximum and minimum EDM values. By developing better simulation study designs we hope to encourage the development of better complex model detection algorithms. 
As suggested by , the geometric classification scheme applied in this paper could be expanded to three or more loci, as well as to QTL studies (by scaling penetrance values outside the range 0-1, such that values at MLGs become expected phenotype magnitudes). It is crucial not only to develop techniques for the detection of complex multilocus interactions, but also to develop a theoretical understanding of epistasis. This work will promote effective simulation studies and facilitate the characterization of observed real-world interactions by better defining the characteristics of one group of complex epistatic models to which others may be compared. SNP: Single nucleotide polymorphism; GAMETES: Genetic architecture model emulator for testing and evaluating software; MAF: Minor allele frequency; EDM: Ease of detection measure. Cordell H: Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans. Hum Mol Genet. 2002, 11 (20): 2463- Bateson W, Mendel G: Mendel's principles of heredity. Putnam's. 1909, Fisher R: The correlation between relatives on the supposition of mendelian inheritance. Trans R Soc Edinburgh. 1918, 52: 399-433. Cheverud J, Routman E: Epistasis and its contribution to genetic variance components. Genetics. 1995, 139 (3): 1455- Frankel W, Schork N: Who's afraid of epistasis?. Nat Genet. 1996, 14 (4): 371-373. Phillips P: The language of gene interaction. Genetics. 1998, 149 (3): 1167- Wade M, Winther R, Agrawal A, Goodnight C: Alternative definitions of epistasis: dependence and interaction. Trends Ecol & Evol. 2001, 16 (9): 498-504. Moore J, Williams S: Traversing the conceptual divide between biological and statistical epistasis: systems biology and a more modern synthesis. Bioessays. 2005, 27 (6): 637-646. Moore J, Williams S: Epistasis and its implications for personal genetics. Am J Hum Genet. 2009, 85 (3): 309-320. Shriner D, Vaughan L, Padilla M, Tiwari H: Problems with genome-wide association studies. Science. 2007, 316 (5833): 1840c- Eichler E, Flint J, Gibson G, Kong A, Leal S, Moore J, Nadeau J: Missing heritability and strategies for finding the underlying causes of complex disease. Nat Rev Genet. 2010, 11 (6): 446-450. McKinney B, Reif D, Ritchie M, Moore J: Machine learning for detecting gene-gene interactions: a review. Appl Bioinform. 2006, 5 (2): 77-88. Cordell H: Detecting gene–gene interactions that underlie human diseases. Nat Rev Genet. 2009, 10 (6): 392-404. Moore J, Asselbergs F, Williams S: Bioinformatics challenges for genome-wide association studies. Bioinformatics. 2010, 26 (4): 445- Neuman R, Rice J: Two-locus models of disease. Genet Epidemiol. 1992, 9 (5): 347-365. Li W, Reich J: A complete enumeration and classification of two-locus disease models. Hum Hered. 2000, 50 (6): 334-349. Brodie III E: Why evolutionary genetics does not always add up. Epistasis Evol Process. 2000, New York: Oxford University Press, 3-19. Culverhouse R, Suarez B, Lin J, Reich T: A perspective on epistasis: limits of models displaying no main effect. Am J Hum Genet. 2002, 70 (2): 461-471. Urbanowicz RJ, Kiralis J, Sinnott-Armstrong NA, Heberling T, Fisher JM, Moore JH: GAMETES: a fast, direct algorithm for generating pure, strict, epistatic models with random architectures. BioData Min. 2012, 5: 1-14. Urbanowicz RJ, Kiralis J, Fisher JM, Moore JH: Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection. BioData Min. 2012, 5: 1-13. Hallgrímsdóttir IB, Yuster DS: A complete classification of epistatic two-locus models. BMC Genetics. 
2008, 9: 17- Moore J, Hahn L, Ritchie M, Thornton T, White B:Application of genetic algorithms to the discovery of complex models for simulation studies in human genetics. Proceedings of the Genetic and Evolutionary Computation Conference. 2002, NIH Public Access, 1155-1155. Moore J, Hahn L, Ritchie M, Thornton T, White B:Routine discovery of complex genetic models using genetic algorithms. Appl Soft Comput. 2004, 4: 79-86. Greene C, Himmelstein D, Moore J:A model free method to generate human genetics datasets with complex gene-disease relationships. Evol Comput Mach Learn Data Min Bioinformatics. 2010, 6023: 74-85. Beerenwinkel N, Pachter L, Sturmfels B:Epistasis and shapes of fitness landscapes. Stat Sinica. 2007, 17: 1317-1342. Barber C, Huhdanpaa H:Qhull, Softwarepackage. 1995, Rambau J:TOPCOM: Triangulations of point configurations and oriented matroids. Mathematical software: proceedings of the first International Congress of Mathematical Software: Beijing, China, 17-19 August 2002. 2002, Imperial College Pr, 330-340. Kruskal W, Wallis W:Use of ranks in one-criterion variance analysis. J Am Stat Assoc. 1952, 47 (260): 583-621. Hahn LW, Ritchie MD, Moore JH:Multifactor dimensionality reduction software for detecting gene–gene and gene–environment interactions. Bioinformatics. 2003, 19 (3): 376-382. This work was supported by NIH grants AI59694, LM009012, LM010098, EY022300, LM011360, CA134286, and GM103534. The authors declare that they have no competing interests. RU organized the analysis, carried out statistical analyses, and wrote the majority of the manuscript. DGM, coded the shape projection and classification strategy, carried out the experiments, developed key figures and co-wrote the manuscript. JK assisted with the shape projection strategy. JH co-wrote the manuscript. All authors read and approved the final manuscript. Electronic supplementary material Authors’ original submitted files for images About this article Cite this article Urbanowicz, R.J., Granizo-Mackenzie, A.L., Kiralis, J. et al. A classification and characterization of two-locus, pure, strict, epistatic models for simulation and detection. BioData Mining 7, 8 (2014). https://doi.org/10.1186/1756-0381-7-8
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00402.warc.gz
CC-MAIN-2023-50
37,192
80
https://github.com/zxhfirefox?tab=repositories
code
- Joined on Apr 21, 2011 forked from krzyzanowskim/CryptoSwift Crypto related functions and helpers for Swift implemented in Swift programming language forked from realm/realm-cocoa Realm is a mobile database: a replacement for Core Data & SQLite forked from stephencelis/SQLite.swift A type-safe, Swift-language layer over SQLite3. forked from Haneke/HanekeSwift A lightweight generic cache for iOS written in Swift with extra love for images. forked from daltoniam/SwiftHTTP Thin wrapper around NSURLSession in swift. Simplifies HTTP requests. forked from duemunk/Async Syntactic sugar in Swift for asynchronous dispatches in Grand Central Dispatch forked from yoavlt/LiquidFloatingActionButton Material Design Floating Action Button in liquid state forked from matthewpalmer/Locksmith A powerful, protocol-oriented library for working with the keychain in Swift. forked from Ramotion/animated-tab-bar RAMAnimatedTabBarController is a Swift module for adding animation to tabbar items. forked from akosma/SwiftMoment A time and calendar manipulation library for iOS 9 and Xcode 7 written in Swift 2 forked from Yalantis/Side-Menu.iOS Animated side menu with customizable UI forked from ReactKit/SwiftTask Promise + progress + pause + cancel + retry for Swift. forked from pNre/ExSwift A set of Swift extensions for standard types and classes. forked from yannickl/DynamicColor Yet another extension to manipulate colors easily in Swift forked from ochococo/Design-Patterns-In-Swift Design Patterns implemented in Swift forked from netty/netty Netty project - an event-driven asynchronous network application framework forked from danielgindi/Charts An iOS port of the beautiful MPAndroidChart. - Beautiful charts for iOS apps! forked from krzysztofzablocki/KZPlayground Playgrounds for Objective-C forked from vsouza/awesome-ios A curated list of awesome iOS frameworks, libraries, tutorials, xcode plugins and components. forked from AFNetworking/AFNetworking A delightful iOS and OS X networking framework forked from BradLarson/GPUImage An open source iOS framework for GPU-based image and video processing forked from robbiehanson/XMPPFramework An XMPP Framework in Objective-C for Mac and iOS forked from CocoaLumberjack/CocoaLumberjack A fast & simple, yet powerful & flexible logging framework for Mac and iOS forked from cocos2d/cocos2d-x cocos2d for iOS, Android, Win32 and OS X. Built using C++ forked from RestKit/RestKit RestKit is a framework for consuming and modeling RESTful web resources on iOS and OS X forked from ReactiveCocoa/ReactiveCocoa A framework for composing and transforming streams of values forked from menacher/java-game-server Jetserver is a high speed nio socket based multiplayer java game server written using Netty and Mike Rettig's Jetlang.It is specifically tuned for network based multiplayer games and supports TCP and UDP network protocols. forked from MugunthKumar/MKNetworkKit ARC ready Networking Framework with built in authentication and HTTP 1.1 caching standards support for iOS 5+ devices forked from FuzzyAutocomplete/FuzzyAutocompletePlugin A Xcode 5 plugin that adds more flexible autocompletion rather than just prefix-matching. forked from onevcat/VVDocumenter-Xcode Xcode plug-in which helps you write Javadoc style documents easier. forked from git/git Git Source Code Mirror - This is a publish-only repository and all pull requests are ignored. Please follow Documentation/SubmittingPatches procedure for any of your improvements. 
forked from tomaz/appledoc Objective-c code Apple style documentation set generator. forked from MacRuby/MacRuby MacRuby is an implementation of Ruby 1.9 directly on top of Mac OS X core technologies such as the Objective-C runtime and garbage collector, the LLVM compiler infrastructure and the Foundation and ICU frameworks.
s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861735203.5/warc/CC-MAIN-20160428164215-00157-ip-10-239-7-51.ec2.internal.warc.gz
CC-MAIN-2016-18
3,818
68
https://ir.lib.hiroshima-u.ac.jp/journals/HUStudSchLett/v/82/item/53511
code
An archived book titled Genji-hitotoki-banashi (“A Digest of The Tale of Genji”) was created in a unique way. Originally, the book was published in 1837 with the title Genji-monogatari-ezukushi-taiisho (“Illustrated Tale of Genji”). The book was bound with one handwritten manuscript page inserted between each of the pages of the book. That is, when turning the pages of this book, there is one single printed page, followed by one handwritten page, and so on. Originally, this book had one chapter of The Tale of Genji assigned to one page, in the form of an illustration of one scene in that chapter, together with only one of the waka poems found in that chapter. The handwritten text inserted here presents a summary of the chapter found on the previous page on its front side, and a summary of the chapter found on the next page on its reverse (back) side. With this format, with the book open to two pages, one sees simultaneously an illustration of each respective chapter of the Tale and one representative waka poem, plus a summary of that chapter. The summary for each chapter is written at around 250 Japanese-text characters, with the storyline of the long Tale aptly summarized in compact form. One learns from a postscript at the end of the book that this was written by a person in 1841 named Umezono no Aruji. It is surmised that the additions to this book served to make the contents more suitable as an introductory text for The Tale of Genji, and was perhaps given to a young girl familiar to said person.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510810.46/warc/CC-MAIN-20231001073649-20231001103649-00564.warc.gz
CC-MAIN-2023-40
1,533
1
https://askubuntu.com/questions/1130389/ubuntu-boots-up-in-emergency-mode-and-im-too-much-of-a-potato-to-diagnose
code
My computer froze and I foolishly did a hard reset. Now when I try to boot Ubuntu, it stalls on the loading screen for about 30 seconds and then kicks me into emergency mode. I've taken a look at the journal and I'd gladly post the relevant results here but there are hundreds of lines and I'm not sure which parts to post. I'll update this if/when someone guides me to it. Here are some highlighted/red sections that seemed like they'd be relevant: secureboot: Secure boot could not be determined (mode 0) ACPI BIOS Warning (bug): Optiional FADT field Pm2ControlBlock has valid Length but zero address: (long string) xhci_hcd 0000:4:00.0: host halt failed, -110 || : can't setup: -110 || : init 0000:4:00.0 fail, -110 Timed out waiting for device on dev-disk-by\(long string).device.
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529175.83/warc/CC-MAIN-20190723085031-20190723111031-00444.warc.gz
CC-MAIN-2019-30
784
3
https://wiki.kadam.net/en/index.php?title=Banner_Generator
code
Our Banner Generator is a graphic editor that allows you to quickly create a banner in one of the most popular sizes used in our system. See the video instructions for creating a banner in our generator: You can access the editor by clicking the "Banner Generator" button in the list of banner campaign materials (image 1). In the banner generator you can work with 4 main options: - add and edit an image - add and edit buttons - add and edit text - frame and background settings Adding and editing an image To upload an image, click "Add Image" (image 2). The upload menu appears on the right. You can upload an image file from your device or specify a link to an image from any site on the Internet. You can upload either a static image or an animated one. Supported file extensions: gif, jpg, jpeg, png. The image file size must not exceed 1 MB (image 3). Once your image has been uploaded, you can resize it, set and edit the image frame, adjust its transparency and corners, and rotate it however you prefer (image 4). To add a button to the banner, click the "Add button" button (image 5). The button is added automatically, and you can resize it and reshape it however you like; change the background color, image frame, button frame, set transparency, adjust its shadow, and rotate it however you like (image 6). Adding and editing text To add text to the banner you need to click "Add text" (image 7). The text is added automatically and you can change its content, font, size, style, alignment, color, shadow, and rotation (image 8). You can switch between all banner elements using the cursor by clicking on the desired icon. You can also change the position of the different layers of your banner (image 9). Setting the frame and background By clicking on the color squares you can set the background color, frame color and frame thickness, if necessary (image 10). After editing all the elements, to add the banner to your campaign click the "Create" button (image 11).
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988882.7/warc/CC-MAIN-20210508121446-20210508151446-00346.warc.gz
CC-MAIN-2021-21
2,018
21
https://www.voltage.com/crypto/ibe-for-key-management/
code
IBE for key management A few months ago, Trust Catalyst released their 2008 Encryption and Key Management Industry Benchmark Report. Here's the data from this report that shows how challenging the 330 people who responded to Trust Catalyst's survey rated various areas of key management. What's interesting is that all of the top three challenges listed here are ones that are particularly easy to solve if you use identity-based encryption (IBE). With IBE, it's easy to calculate keys as they're needed. This makes it practical to use short-lived keys, and if you do this, there's no need at all to revoke keys. And because IBE keys are calculated, there's no need at all to back them up. Calculating an IBE key a second time is no more difficult than calculating it the first time, so instead of keeping a secure archive of keys, you don't need to archive IBE keys at all. Instead of getting a key from the secure archive server, you just recalculate it when you need it, which makes backing up and recovering IBE keys extremely easy. Similarly, because you can calculate any IBE decryption keys from just a single system-wide master secret when you need them, making IBE keys accessible to a disaster recovery site is extremely easy. Even if you can't reach an IBE key server from a disaster recovery site, it's easy to get a key server up and running in the disaster recovery site in just a few minutes. All you need to do is configure a key server with the master secret, and you're done. This takes no more than a few minutes, so this problem is also extremely easy if you're using IBE. In the light of Trust Catalyst's report, I have to wonder why more people aren't using IBE for things other than email. It seems to be the logical choice because it makes what seems to be the biggest challenges of key management extremely easy.
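To make the "keys are calculated, not stored" idea concrete, here is a deliberately simplified sketch. It is not real IBE (schemes such as Boneh-Franklin rely on pairing-based public-key cryptography); it uses a symmetric stand-in purely to illustrate how any per-identity key can be recomputed on demand from a single master secret, so there is nothing to archive or replicate beyond that secret. All names and values below are hypothetical.

```python
import hmac, hashlib

# Hypothetical master secret held only by the key server and the DR site.
MASTER_SECRET = b"replace-with-a-randomly-generated-master-secret"

def derive_key(identity: str, period: str) -> bytes:
    """Recompute the key for an identity (e.g. an email address) and a short
    validity period. Short-lived keys make revocation unnecessary, and since
    derivation is deterministic there is nothing to back up or archive."""
    label = f"{identity}|{period}".encode()
    return hmac.new(MASTER_SECRET, label, hashlib.sha256).digest()

# "Recovering" a key is just deriving it again:
k1 = derive_key("[email protected]", "2008-W32")
k2 = derive_key("[email protected]", "2008-W32")
assert k1 == k2  # recalculating is no harder than calculating the first time
```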
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593302.74/warc/CC-MAIN-20180722135607-20180722155607-00400.warc.gz
CC-MAIN-2018-30
1,838
6
http://3d2f.com/download/9-374-join-two-pst-files-free-download.shtml
code
SysInfoTools PST Split and Merge Combo Pack 1.0 This combo pack allows user to merge multiple PST files into single PST file, and in the same time you can split large size PST file into small number of PST files without damaging or harming the original PST files. PST Join 2.5 How to join PST files these situations usually arise when you work with Outlook? So you are in right place to sort out your problems as SysTools PST Merge software is most reliable solution to merge your multiple PST files into single PST file. File Security Manager - full control over file and folder access permissions (even in Windows XP Home) Being an overall excellent operating system, Windows XP does have some versions that limit users control capabilities to a bare minimum. For instance, Windows XP Home offers a great set of multimedia and general features, but lacks the degree of flexibility that many advanced users would definitely appreciate. For instance, the Home version will not allow you to set file and folder access right, which is a common method Index Your Files - lightning-fast file searches that won't cost you a penny When regular users want to give a solid boost to their computers, most of them head straight to the nearest computer store to purchase faster CPUs, more RAM, faster hard drives and even RAID arrays. Unfortunately, most of them dont even realize that the same results can be easily achieved with the help of specialized software that indexes files on their hard drives and results in much faster, almost instantaneous
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827252.87/warc/CC-MAIN-20181216025802-20181216051802-00085.warc.gz
CC-MAIN-2018-51
1,541
4
https://developer.android.com/reference/app-actions/built-in-intents/common
code
|Open app feature |Launch a feature of the app. |Create a review or leave a rating on products, locations, content, or other things. |Construct a new entity in an app. |Open a barcode or QR code scanner. |Get news article |Search and view news updates. |Search and view reviews for products, locations, content, or other things. |Get service observation |Check usage information for a service provided to the user. |Add, update, or remove a service (such as a premium subscription service) from a user's account. |Update software application |Determine the setting to be updated by name using the softwareApplication.softwareSetting.name intent parameter. |Present a summary of the user's account in an app. |Search for content or entities using the default in-app search feature in an app. |Search and view previous orders placed in an app. Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Last updated 2023-05-04 UTC.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.87/warc/CC-MAIN-20240421071342-20240421101342-00749.warc.gz
CC-MAIN-2024-18
1,164
18
https://codegolf.stackexchange.com/questions/242960/chromatic-polynomial-of-a-graph
code
Given an undirected graph \$G\$ and an integer \$k\$, how many \$k\$-colorings does the graph have? Here by a \$k\$-coloring, we mean assigning one of the \$k\$ colors to each vertex of the graph, such that no two vertices connected by an edge have the same color. For example, the following graph can be 3-colored in 12 different ways: Let \$P(G,k)\$ denote the number of \$k\$-colorings of the graph \$G\$. For example, \$P(G, 3) = 12\$ for the graph above. \$P(G,k)\$ is in fact a polynomial in \$k\$, which is known as the chromatic polynomial. For example, the chromatic polynomial of the graph above is \$k^4-4k^3+5k^2-2k\$. This can be shown using the deletion–contraction formula, which also gives a recursive definition of the chromatic polynomial. The following proof is taken from Wikipedia: For a pair of vertices \$u\$ and \$v\$, the graph \$G/uv\$ is obtained by merging the two vertices and removing any edges between them. If \$u\$ and \$v\$ are adjacent in \$G\$, let \$G-uv\$ denote the graph obtained by removing the edge \$uv\$. Then the numbers of \$k\$-colorings of these graphs satisfy: $$P(G,k)=P(G-uv, k)- P(G/uv,k)$$ Equivalently, if \$u\$ and \$v\$ are not adjacent in \$G\$ and \$G+uv\$ is the graph with the edge \$uv\$ added, then $$P(G,k)= P(G+uv, k) + P(G/uv,k)$$ This follows from the observation that every \$k\$-coloring of \$G\$ either gives different colors to \$u\$ and \$v\$, or the same colors. In the first case this gives a (proper) \$k\$-coloring of \$G+uv\$, while in the second case it gives a coloring of \$G/uv\$. Conversely, every \$k\$-coloring of \$G\$ can be uniquely obtained from a \$k\$-coloring of \$G+uv\$ or \$G/uv\$ (if \$u\$ and \$v\$ are not adjacent in \$G\$). The chromatic polynomial can hence be recursively defined as - \$P(G,x)=x^n\$ for the edgeless graph on \$n\$ vertices, and - \$P(G,x)=P(G-uv, x)- P(G/uv,x)\$ for a graph \$G\$ with an edge \$uv\$ (arbitrarily chosen). Since the number of \$k\$-colorings of the edgeless graph is indeed \$k^n\$, it follows by induction on the number of edges that for all \$G\$, the polynomial \$P(G,x)\$ coincides with the number of \$k\$-colorings at every integer point \$x=k\$. Given an undirected graph \$G\$, output its chromatic polynomial \$P(G, x)\$. This is code-golf, so the shortest code in bytes wins. You can take input in any reasonable format. Here are some example formats: - an adjacency matrix, e.g., - an adjacency list, e.g., - a vertex list along with an edge list, e.g., - a built-in graph object. You may assume that the graph has no loop (an edge connecting a vertex with itself) or multi-edge (two or more edges that connect the same two vertices), and that the number of vertices is greater than zero. You can output in any reasonable format. Here are some example formats: - a list of coefficients, in descending or ascending order, e.g. - a string representation of the polynomial, with a chosen variable, e.g., - a function that takes an input \$n\$ and gives the coefficient of \$x^n\$; - a built-in polynomial object. 
Input in adjacency matrices, output in polynomial strings: [[0,0,0],[0,0,0],[0,0,0]] -> x^3 [[0,1,1],[1,0,1],[1,1,0]] -> x^3-3*x^2+2*x [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]] -> x^4-3*x^3+3*x^2-x [[0,1,1,0],[1,0,1,1],[1,1,0,0],[0,1,0,0]] -> x^4-4*x^3+5*x^2-2*x [[0,1,1,1,1],[1,0,1,1,1],[1,1,0,1,1],[1,1,1,0,1],[1,1,1,1,0]] -> x^5-10*x^4+35*x^3-50*x^2+24*x [[0,1,1,0,1,0,0,0],[1,0,0,1,0,1,0,0],[1,0,0,1,0,0,1,0],[0,1,1,0,0,0,0,1],[1,0,0,0,0,1,1,0],[0,1,0,0,1,0,0,1],[0,0,1,0,1,0,0,1],[0,0,0,1,0,1,1,0]] -> x^8-12*x^7+66*x^6-214*x^5+441*x^4-572*x^3+423*x^2-133*x [[0,0,1,1,0,1,0,0,0,0],[0,0,0,1,1,0,1,0,0,0],[1,0,0,0,1,0,0,1,0,0],[1,1,0,0,0,0,0,0,1,0],[0,1,1,0,0,0,0,0,0,1],[1,0,0,0,0,0,1,0,0,1],[0,1,0,0,0,1,0,1,0,0],[0,0,1,0,0,0,1,0,1,0],[0,0,0,1,0,0,0,1,0,1],[0,0,0,0,1,1,0,0,1,0]] -> x^10-15*x^9+105*x^8-455*x^7+1353*x^6-2861*x^5+4275*x^4-4305*x^3+2606*x^2-704*x Input in vertex lists and edge lists, output in descending order: [1,2,3], -> [1,0,0,0] [1,2,3], [(1,2),(1,3),(2,3)] -> [1,-3,2,0] [1,2,3,4], [(1,2),(2,3),(3,4)] -> [1,-3,3,-1,0] [1,2,3,4], [(1,2),(1,3),(2,3),(2,4)] -> [1,-4,5,-2,0] [1,2,3,4,5], [(1,2),(1,3),(1,4),(1,5),(2,3),(2,4),(2,5),(3,4),(3,5),(4,5)] -> [1,-10,35,-50,24,0] [1,2,3,4,5,6,7,8], [(1,2),(1,3),(1,5),(2,4),(2,6),(3,4),(3,7),(4,8),(5,6),(5,7),(6,8),(7,8)] -> [1,-12,66,-214,441,-572,423,-133,0] [1,2,3,4,5,6,7,8,9,10], [(1,3),(1,4),(1,6),(2,4),(2,5),(2,7),(3,5),(3,8),(4,9),(5,10),(6,7),(6,10),(7,8),(8,9),(9,10)] -> [1,-15,105,-455,1353,-2861,4275,-4305,2606,-704,0]
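For reference (an added, non-golfed sketch rather than a competing answer), the deletion–contraction recursion above can be implemented directly. The function below assumes input as a vertex count plus an edge list and returns coefficients in descending order, matching the second set of test cases:

```python
def chromatic_poly(n, edges):
    """Coefficients of P(G, x), highest degree first, for a graph on n
    vertices (labelled however you like) with the given edge list."""
    edges = {frozenset(e) for e in edges}
    if not edges:
        return [1] + [0] * n              # edgeless graph: P(G, x) = x^n
    u, v = tuple(next(iter(edges)))
    # Deletion: remove the edge uv.
    deletion = chromatic_poly(n, edges - {frozenset((u, v))})
    # Contraction: merge v into u, discarding loops and duplicate edges.
    contracted = set()
    for e in edges - {frozenset((u, v))}:
        a, b = (u if w == v else w for w in e)
        if a != b:
            contracted.add(frozenset((a, b)))
    contraction = chromatic_poly(n - 1, contracted)
    # P(G) = P(G - uv) - P(G / uv); pad the shorter list on the left.
    contraction = [0] * (len(deletion) - len(contraction)) + contraction
    return [d - c for d, c in zip(deletion, contraction)]

print(chromatic_poly(3, []))                                # [1, 0, 0, 0]
print(chromatic_poly(3, [(1, 2), (1, 3), (2, 3)]))          # [1, -3, 2, 0]
print(chromatic_poly(4, [(1, 2), (1, 3), (2, 3), (2, 4)]))  # [1, -4, 5, -2, 0]
```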
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100508.53/warc/CC-MAIN-20231203193127-20231203223127-00402.warc.gz
CC-MAIN-2023-50
4,557
28
http://radiofreetooting.blogspot.com/2005/11/oracle-express-edition-two-strikes.html
code
Oracle Express Edition: two strikes, the pitcher steps up to the plate... The first problem was the install spinning when it was trying to create the XE services. CPU meter up to 100% for as long as you like. This turned out to be an artefact of having NLS_LANG set to UK English. All that was required was setting it to American_America.WE8ISO8859P1. Ooops! We might have thought that offshoring so much development and support work would have alerted Oracle to globalisation, but apparently not. Having got past that, the second problem rears its ugly head. I now have an instance but no data files. Each time the scripts tried to connect to the database they failed with ORA-12557: TNS:protocol adapter not loadable. This is a rara avis; the only note on Metalink relates to Grid controllers, which doesn't seem to fit the case here, and Google likewise draws a blank. Let's hope Mike Townsend comes up with something. Before you Linuxen start smirking, this is not particularly a Windows problem: I was able to install XE on my home laptop first time. My home machine is lower spec but the same operating system, so it's something about the specific configuration of my work machine that's giving me grief. I am only persevering with this because if I ever need XE, I will need it on my work machine. Although I must admit I am starting to get very tired of the process: it requires several manual steps - editing the registry, renaming files, two reboots - to clear down XE prior to re-installing. However, I have just discovered that if I re-run the MSI against an untouched install it asks if I want to uninstall XE. I wish I'd known this earlier: the Installation guide talks about using Add/Remove Programs in the Control Panel but that option only appears once the installation has passed the Services point. Of course, the MSI uninstall option may also only be triggered if the prior install got to that point. I'm afraid I lack the strength to de- and re-install just to find out. So what have I learnt so far? Not a lot. I've spent hours, literally hours, trying to install XE on my work machine with no success. I certainly haven't had time to build an app on my home machine. I do know two things. One is that the MSDE team is not yet quaking in their boots. The other is that I don't think this blog is likely to appear in the 10XE User Experiences any time soon.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123635.74/warc/CC-MAIN-20170423031203-00475-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,368
8
http://dingogames.com/tasty-blue/instructions/gameplay.htm
code
Gameplay: Use the mouse or arrow keys to control the grey goo. Eat everything that is smaller than you. The more you eat, the bigger you get. When using the arrow keys, press the space bar to "boost" and go faster - you'll need to do this to make some jumps. When using the mouse, hold the left mouse button to increase your acceleration - this does not increase your top speed. You must fill up the progress bar to beat the level. Current Size: The current size display shows you how big your creature is. Eat more stuff to grow bigger. Progress Bar: When the bar fills with blue, the level will be complete. Just Eaten: This displays the last item that you have eaten. Helper Arrow: The arrow and cursor tell you what you can eat. The arrow will point to the largest item in close proximity that you can eat. You should eat this item and everything smaller than it. Total Time: This is how long you have spent in the current level. Beat the level as fast as possible to earn stars.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578711882.85/warc/CC-MAIN-20190425074144-20190425100144-00187.warc.gz
CC-MAIN-2019-18
969
6
http://blogbiznesu.info/guko/litecoin-client-ubuntu-705.php
code
The biggest priority as far as a Bitcoin user is concerned is security, and MultiBit bitcoin client is a perfect blend of security and functionality.Miners are currently awarded with 25 new litecoins per block, an amount which gets halved roughly every 4 years (every 840,000 blocks).You can now grab the Steam for Linux client, which is compatible with Ubuntu versions 10.04-12.10,. Official Steam Client For Ubuntu Linux Released. Litecoin.This is a little guide to help you compile the memecoin-qt client on (X)Ubuntu. Install Expanse Client Ubuntu – The New World Order, Maybe Litecoin 101: How to Install Litecoin Wallet on Windows & Ubuntu ComputerFollow this guide to run a Linux Live operating system on your computer. BAMT Wiki | FANDOM powered by Wikia How To Install Bitcoin Core Wallet 0.9.2.1 On Ubuntu 14.04 Install Clubcoin Client Ubuntu – The Future of DigitalDue to more frequent block generation, the network supports more transactions without a need to modify the software in the future. Litecoin 0.8.5.1 Release Notes – Site Title Litecoin features faster transaction confirmation times and improved storage efficiency than the leading math-based currency.Litecoin is a peer-to-peer Internet currency that enables instant, near-zero cost payments to anyone in the world.I just spent a good amount of time trying to get my headless ubuntu litecoin miner to work. Bitcoin/Litecoin mining config Ubuntu 10.04 - superkuhIf you appreciate our work please consider a small contribution to the Litecoin. issue that can cause the client to erroneously. (Ubuntu, Fedora or. INSTRUCTIONS FOR COMPILING THE LINUX BITCOIN-SCRYPT CLIENT NiceHash Miner - v18.104.22.168 Install Gamecredits Client Ubuntu – Buy A Private Island php - Pushpoold mmcFE-Litecoin server not running properly The following instructions will upgrade your system to the latest version of the client. sudo add-apt-repository ppa:bitcoin.The Litecoin client has a built in encryption which can be activated.Since they are all based on the same client, this guide should also be. Release Notes of Litecoin 0.8.3.7 – Site Title How To Install Bitcoin 0.9.1 On Ubuntu, Linux Mint And Wallet Download - Peercoin - Secure & Sustainable Debian -- Details of package litecoin-qt in sid Install Ethereum Client Ubuntu: The Omni Currency – TANYou get paid in bitcoins by pay-per-share approach, once a day or once a week.Moving the Bitcoin Core Data Directory. On Ubuntu, open a file browser by clicking on the folder icon in the launcher.Or you can use AUR helper like yaourt to automate the process for you. Ubuntu 15.10 Important:. Bitcoin wallet openSUSE ‹ Bitcoin wallet / Bitcoin tradeThis software connects your computer to the network and enables it to interact with the bitcoin clients,.I just installed bitcoin-qt wallet on Ubuntu 14.04 and have. A guide for setting up the Litecoin client and different mining software in Ubuntu Linux. How to Install / Setup μTorrent (uTorrent) in Ubuntu 16.04If you are ArchLinux user, you can find Peercoin packages in AUR.This tutorial shows you how to install and use Electrum Bitcoin wallet on Linux including Ubuntu 16. With any form of crypto-currency whether it be a bitcoin, ether, litecoin, or some of the numerous different altcoins,.Just launch the package manager and install bitcoin-qt. OR. 
1- Bitcoin manual installation how to.The goal of this major new upgrade to 0.8.x is to modernize the Litecoin reference client,.The software is released in a transparent process that allows for independent verification of binaries and their corresponding source code.Cryptocurrency is freeing individuals to transact money and do business on their terms. How to restore your Litecoins into the LTC Wallet in Linux [Ubuntu]A simple easy to use UI for minerd.exe or cgminer.exe Bitcoin,Litecoin client. Apple Mac OS X and Linux Ubuntu RedHat 38 Data Recovery Freeware,. Litecoin is a free open source peer-to-peer electronic cash system that is.
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160754.91/warc/CC-MAIN-20180924205029-20180924225429-00501.warc.gz
CC-MAIN-2018-39
3,979
25
https://mathoverflow.net/questions/120951/intersection-cohomology-and-etale-cohomology
code
Can someone explain or give a reference on the comparison between intersection cohomology and l-adic etale cohomology of a variety over a field of characteristic zero?
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825366.39/warc/CC-MAIN-20160723071025-00000-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
256
5
http://jobs.canadalearningcode.ca/job_posts/1152
code
We are looking for a developer to help improve the shopping and user experience on our existing website, including optimization for mobile and tablet users. This is a project with the potential to continue on a freelance basis to maintain and keep up the site. Please send your resume, portfolio, and rate to [email protected] Email title: Freelance Developer, Specializing in Shopify
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613603.65/warc/CC-MAIN-20190423194825-20190423220825-00478.warc.gz
CC-MAIN-2019-18
413
3
https://docs.bugcrowd.com/customers/submission-management/commenting/
code
- Adding a comment - Adding Blocker - Viewing Submission Activities - Uploading an Attachment with Your Comment - Editing a Comment - Deleting a Comment When adding comments, you can style your text using the Markdown syntax. For more information, see using markdown for formatting content. Adding a comment To add a comment or send a message: Go to the Activity section of a submission and click Send a message. In To, select one of the following based on whom you want to send the message to: - Bugcrowd: Send an internal message visible to your team and the Bugcrowd team. - Everyone: Send message to everyone involved in the submission and the general public (if you and the researcher agree to disclose the report) In the text box, type the message. You can style your text using the Markdown syntax. For more information, see using markdown for formatting content. You can upload attachments for providing detailed information. For more information, see upload an attachment with your comment. Click Send message. The message is sent and it is visible in the submission activity stream. The researcher will receive an email notification that you have commented on their submission for additional information from them. Even if the submission is not yet claimed, the email notification is sent to the researcher. Replying Directly to External Researcher: The Reply to (researcher) option is unavailable for submissions made anonymously (through the embedded form without providing an email address) or has no associated researcher (example, through Qualys). You can add a blocker for a submission. For information about blockers, see blockers. Viewing Submission Activities Each submission has an activity stream that maintains a history log of all actions, comments, and changes that have been made to a submission and a record of the person who made the changes. The activities are displayed in colors based on to whom the message was sent: - Everyone - Displayed in grey - Bugcrowd - Displayed in pink Subscribing to a Submission When you comment on a submission, you automatically subscribe to receive updates for that submission. Learn more about submissions and how to unsubscribe from them. When adding a comment, you can notify a team member directly by mentioning their name using the “@” key. This is useful when you need to alert someone who is not currently assigned or subscribed to a submission. Mention the Application Security Engineer on-staff for your submission by mentioning @Bugcrowd. Uploading an Attachment with Your Comment When replying to a researcher or sending a private message, you can click Add attachments and attach a video, image, or PDF. This helps you share sensitive information without uploading it to third party. Browse to the location of the file you want to upload. You can attach up to five files at a time. The supported file types are The size of each uploaded file cannot exceed 100 MB. The attached files are displayed as shown. To delete an attachment,. click X icon. Editing a Comment Editing prior to notifications: If you are able to edit the comment within two minutes the notifications to other users around the comment will use the updated text. Integrations will trigger immediately and will not receive the updated text. You can edit comments and/or private notes. To edit a comment, click the … icon on the right side of the comment and click Edit. Make the required changes and click Save Comment. The Comment Updated message is displayed. 
Deleting a Comment You can delete comments and/or private notes. To delete a comment, click the … icon on the right side of the comment and click Delete. A pop-up message asking for confirmation is displayed. Click OK. The comment is deleted and [DELETED] is displayed in the activity feed.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510781.66/warc/CC-MAIN-20231001041719-20231001071719-00711.warc.gz
CC-MAIN-2023-40
3,799
42
https://groups.google.com/g/moonlightde/c/PP3xwrRxjEQ
code
We need a little information to populate the wiki on how to internationalize the system. First, which technologies do we use: Qt5's built-in tooling or external ones such as gettext, etc.? Second, which method: an external collaborative platform like Transifex, or each project's own approach, like Linguist, committing translations directly via pull requests? So, open question here! I can get help from translators for Russian and Greek, and the majority of us speak Spanish... Lenz McKAY Gerardo (PICCORO)
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00090.warc.gz
CC-MAIN-2021-39
440
6
http://linuxinstall.net/events
code
Connecting you to the world of Linux!!! Keep track of what we are up to!! We are a website and podcast that tries to focus on all things OpenSource and specifically Linux in the corporate world. We use Linux in our daily corporate lives and share our thoughts and feelings about how others might do the same.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663611.15/warc/CC-MAIN-20140930004103-00271-ip-10-234-18-248.ec2.internal.warc.gz
CC-MAIN-2014-41
307
3
https://support.serverdensity.com/hc/en-us/articles/360001067743-Provisioning-and-automatically-monitoring-cloud-instances
code
Server Density allows you to manage your cloud instances across multiple providers and regions in a single system. Once you enter your cloud credentials then instances will be automatically imported and the real time status shown in the UI. Provisioning from within Server Density If you want to start monitoring automatically and be able to launch and control your instances from within Server Density, you need to use our Puppet or Chef modules to automatically install the agent. When you launch a new instance from the Server Density UI we will use your cloud platform's API to drop the agent key somewhere that Puppet or Chef can access it. For AWS we drop it in user-data and for Rackspace we inject a file into /etc/sd-agent-key on the instance. So long as you have the Puppet or Chef modules set up then everything will happen automatically.
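As an illustration of that flow (not official Server Density tooling; this sketch assumes an EC2 instance, the standard instance metadata endpoint, and that the user-data payload contains only the agent key), a bootstrap step could read the key and write it where a Puppet or Chef run can pick it up:

```python
import urllib.request

USER_DATA_URL = "http://169.254.169.254/latest/user-data"  # standard EC2 metadata endpoint
KEY_PATH = "/etc/sd-agent-key"  # the file path mentioned above for injected keys

def write_agent_key():
    """Fetch the agent key from EC2 user-data and write it to disk so that
    configuration management can install and configure the agent."""
    with urllib.request.urlopen(USER_DATA_URL, timeout=5) as resp:
        agent_key = resp.read().decode().strip()
    with open(KEY_PATH, "w") as fh:
        fh.write(agent_key + "\n")

if __name__ == "__main__":
    write_agent_key()
```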
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475422.71/warc/CC-MAIN-20240301161412-20240301191412-00748.warc.gz
CC-MAIN-2024-10
849
5
https://www.curvesandchaos.com/what-is-unsupervised-and-supervised-classification/
code
What is unsupervised and supervised classification? The main difference between supervised and unsupervised learning: labeled data. The main distinction between the two approaches is the use of labeled datasets. To put it simply, supervised learning uses labeled input and output data, while an unsupervised learning algorithm does not. Is there unsupervised classification? Unsupervised methods help you to find features which can be useful for categorization. It takes place in real time, so all the input data can be analyzed and labeled in the presence of learners. It is easier to get unlabeled data from a computer than labeled data, which needs manual intervention. What is unsupervised classification in remote sensing? The goal of unsupervised classification is to automatically segregate pixels of a remote sensing image into groups of similar spectral character. Classification is done using one of several statistical routines generally called “clustering”, where classes of pixels are created based on their shared spectral signatures. What is supervised classification? Supervised classification is based on the idea that a user can select sample pixels in an image that are representative of specific classes and then direct the image processing software to use these training sites as references for the classification of all other pixels in the image. What is unsupervised image classification? Unsupervised image classification is the process by which each image in a dataset is identified to be a member of one of the inherent categories present in the image collection without the use of labelled training samples. What is the unsupervised learning method? Unsupervised learning, also known as unsupervised machine learning, uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. What is unsupervised classification in Erdas Imagine? Unsupervised classification categorizes continuous raster data into discrete thematic groups having similar spectral-radiometric values. Supervised classification allows the analyst to define classes of interest. What is unsupervised classification used for? The goal of the unsupervised classification algorithm is to group the records into a set of classes, such that the members of a given class are similar to each other and distinct from the members of all the other classes. It is a key task of exploratory data mining, and a common technique for statistical data analysis. What do you mean by unsupervised learning? Explain with an example. The goal of unsupervised learning is to find the underlying structure of a dataset, group that data according to similarities, and represent that dataset in a compressed format. Example: Suppose the unsupervised learning algorithm is given an input dataset containing images of different types of cats and dogs. Why is supervised classification important? Supervised classification can be very effective and accurate in classifying satellite images and can be applied at the individual pixel level or to image objects (groups of adjacent, similar pixels).
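To make the labeled-versus-unlabeled distinction concrete, here is a small, self-contained sketch using scikit-learn (an illustrative example added here, not part of the original FAQ): the same feature matrix is fed once to a supervised classifier, which trains on the labels, and once to a clustering algorithm, which never sees them.

from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: 300 points in 3 groups; y holds the "true" labels.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised classification: the model is trained on labeled pairs (X, y).
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised accuracy:", clf.score(X, y))

# Unsupervised classification (clustering): only X is given, never y.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments for first 10 points:", km.labels_[:10])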
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100327.70/warc/CC-MAIN-20231202042052-20231202072052-00880.warc.gz
CC-MAIN-2023-50
3,262
22
https://seth.bertalotto.net/code-journey
code
- 1996: Discovery - 1998: Static Websites - 2000: Semantic Markup - 2004: Dynamic Websites - 2009: Server-side JS - 2013: Universal Webapps - 2016: Typed JS All developers take different paths throughout their coding careers. I like to call this their “Code Journey”. This is my story of how I learned to program and what keeps me doing it to this day. My first computer was my family's 1996 Packard Bell D160. With a 133MHz processor and only 8MB of memory pre-installed, I remember thinking this was the best computer that was ever made! However, just trying to play a video game like Madden 98 was impossible without upgrading it to 16MB of memory. At this time, the internet was in full swing. I would go to my friend's house after school and try to squeeze out as much of the internet as we could in the free 200 minutes of AOL usage. I also had access to WebTV, an early TV-based web surfing device, many years ahead of its time, but horribly slow and hard to use. Soon after getting online, I got more curious about how various websites I visited were made. I would see some interesting interaction, like menu drop-downs, and try to figure out how it was done. One of my favorite sites to peek under the hood was the Microsoft homepage, which had the aforementioned drop-down menus. Viewing source on this page presented a mess of tables and div tags that was almost incomprehensible. My tried and true tactic was to copy the source code into MS Notepad and little by little delete code and check to see that the menus still worked. Then I would repeat this until I was left with the “minimal” amount of code to make it work. Eventually, I wanted to understand the code more than just copy-pasting other people's work. So I ditched Notepad and started using Macromedia's Dreamweaver editor. I still wrote my HTML from scratch, but dabbled in their interactive libraries that added “mm_” prefixes all over the code base. Learning how to do image mouse-over effects and mouse trail animations was a good way to learn how interactive and expressive websites could be. 1998: Static Websites Fast forward a year or two later and I was building all sorts of websites. I remember having sites for my favorite music bands, a site of 100s of animated GIFs I found on the web (all loading at once and causing the browser to grind to a halt) and even a video game site of cheat codes for my favorite video games. Having all these sites on my computer wouldn't work since I couldn't share them with my friends and random internet strangers. This is when I found the website hosting service Tripod. Most other developers I knew were on Geocities, but I didn't like the community aspect of it. At the time, I thought Tripod was easier to use and just worked. My tooling at the time was still Dreamweaver, but with a simple FTP setup to automatically push my changes to the server whenever I saved locally. With no testing or CI in place to catch issues, these were the early days of just pushing to production and debugging live on site. 2000: Semantic Markup Up until this point, if you “Viewed Source” on any of my sites, you would find a mess of capitalized tag names, table layouts, spacer.gif hacks and invalid markup that would make accessibility experts faint. This is when I discovered something called semantic markup; the thought that the tags actually had meaning and could be used in the correct context was an “a-ha” moment for me. I loved the idea of decorating the website with proper tags that would help with SEO and accessibility. 
I deep-dived into A List Apart, semantic HTML books from Simplebits and other sites that evangelized a web that was accessible to everyone. Building websites with CSS 2 was a completely different thought process and really sparked a new creative direction for my websites. 2004: Dynamic Websites Static sites were fun, but I was tired of creating hundreds of .html files. Copying header and footer markup into each file was tedious. I also tried to use frames to make this easier, but they were buggy and error-prone across browsers. This is when I discovered PHP and MySQL databases. I wasn't quite sure what either was; however, I knew they would let me build more dynamic and complex websites. After more research, I wanted to build something that would leverage these technologies more than just simple static sites. At the time, mobile phones were becoming popular and with that, ringtone usage. I had been using other websites on the internet to download MIDI ringtone files, but I found them to be hard to navigate or riddled with pop-up ads and flashing banner images. I decided to build my own ringtone website that would solve all these issues. This site became MIDI Delight, still active to this day and something that helped me land a job later in my career. This site allowed me to leverage MySQL to build a database of artists, songs, user profiles, favorites, polls and uploading capabilities. It really helped me learn how to build a more complex, data-driven website from scratch. 2009: Server-side JS 2013: Universal Webapps Even though we were able to leverage the same language on the server and client, we still found ourselves rewriting the same business logic in each runtime. This led to more work and bugs, as we had to maintain two different frameworks in our applications. Around this time, Facebook released React to the world. At first, we were skeptical of it and as confused as others about mixing HTML and JS. We prototyped a few projects and started to discover how it could be used to not only make highly interactive and dynamic sites, but could also be leveraged on the server using much of the same code between the browser and server. Leveraging React for templating was a step in the right direction, but we still needed to figure out how to manage data and state. At this point in time, Facebook had released the Flux architecture, but no actual companion library. This led to a proliferation of client-based Flux libraries; however, none satisfied our business requirements within Yahoo (this was before Redux). Therefore, we decided to build our own open-sourced universal Flux framework, called Fluxible. Fluxible was a truly universal library that handled routing, state management, hydration on the client and much more. It helped solve many application requirements that we had internally. 2016: Typed JS With Fluxible and React in tow, our applications got more sophisticated. We were now able to share business logic across runtimes. This allowed us to break down the application into smaller chunks (or modules) and share responsibility for various parts of the applications across teams. All these issues led us to TypeScript. The static typing, the ability to catch errors before committing and the ease of refactoring were big wins for our projects. Selling developers on these advantages took some time, but once they got over the learning curve, the benefits outweighed the doubts. Over the past few years, all new projects we have worked on have been started with TypeScript. 
It has made our code more readable, maintainable and easier to work with than our past non-typed efforts. I'm not quite sure what the future holds for web development. The industry has really blossomed over the last 10 years, radically changing what can be done with HTML, CSS and JS. With all this progress, it does seem like, in the past few years, the industry has settled on React and its ecosystem of libraries and components. I like React, but I would prefer to see the industry focus on open standards rather than proprietary technology governed by one company. Web Components seem like the next natural phase of evolution, but they have been around for a few years and have still yet to see widespread adoption amongst developers and applications. I'm also excited about more widespread adoption of ESModules in modern browsers; removing the need for complicated bundling tools like Webpack is a win for users and developers alike. Technologies like service workers and the advent of PWAs have really pushed webapps toward a more app-like experience. I hope that Apple and Google continue to push the industry forward to give the web a fighting chance. The web has changed drastically since I started way back on my Packard Bell, but I'm excited to see what the future has in store. I look forward to adding more to this story as the years pass…
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653071.58/warc/CC-MAIN-20230606182640-20230606212640-00664.warc.gz
CC-MAIN-2023-23
8,468
41
https://community.atlassian.com/t5/Trello-questions/the-limitation-of-workspace-members/qaq-p/2652501
code
I have a question about the limit on the number of people in a workspace. After May 20, 2024, the number of people in a workspace will be limited to 10 people, but if the workspace administrator is a paid account, will there be no limit to the number of people? Or do all participants in the workspace need to be paid accounts? It depends on how many boards these additional people are on... If they are on a single board as a board member, they are free. However, the second they are added to one more board or are added as a workspace member, they become a paid user. Video on how the new rules work via various examples: https://youtu.be/rqmCHU_QxY8
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00773.warc.gz
CC-MAIN-2024-18
648
4
http://stackoverflow.com/questions/9873431/timer-calls-timer-tick-event-only-when-the-page-finishes-the-execution-of-entire
code
I have an application in which I am starting a timer for two tasks: - On a single loop having one item. - Having more than one item Now, when I have one item, the timer starts the slide show perfectly, but when I have more than one item I have to loop through them, so the timer_tick event is not able to fire during the continuous loop. Is there any option so that I can still start the timer right away?
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824395.52/warc/CC-MAIN-20160723071024-00173-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
391
5
https://flippa.com/3456337-bricks-breaker-pro
code
A very simple game in which the player moves a paddle to hit a ball. The ball hits the bricks, which then explode; once all the bricks have been hit, the player moves on to the next level.
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212768.50/warc/CC-MAIN-20180817182657-20180817202657-00263.warc.gz
CC-MAIN-2018-34
324
6
https://www.thoughtreplica.com/post/using-fluent-bit-to-replicate-nginx-logs-to-azure-storage
code
In this post I will introduce you to Fluent Bit and show how to enable the service on an Ubuntu server to forward nginx access logs to an Azure Storage blob. Why would you want to use Fluent Bit instead of the Microsoft Monitoring Agent or Azure Monitor for containers? Speed. Azure Monitor still suffers from an ingestion delay of 2-5 minutes. Exporting logs to Azure Storage or Event Hubs allows you to act on insights from your logs in near real time. Fluent Bit is a great service for replicating events or logging to Azure for your web app or containers. From their site: "Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations. It's fully compatible with Docker and Kubernetes environments." Aggregate all of your logs in Azure Storage blobs or send them to an Azure Event Hubs Kafka endpoint for exploration and analysis, alerting, or processing. Ubuntu 18.04 LTS VM on Azure SSH access to the VM with sudo You can either create the VM with a public IP or private IP depending on your VNET configuration. For this test I used a public IP and allowed SSH traffic through the NSG. This is not recommended for production. Log in to the server and test the nginx web server. curl http://<my public ip> HTML will be returned saying "Welcome to nginx!" Issuing the curl command creates an entry in the nginx access log which we'll use later. Now to install Fluent Bit on the server, I followed the walkthrough here which I will provide below with some additional context and troubleshooting. To start, add the public key for the Fluent Bit repository $ wget -qO - https://packages.fluentbit.io/fluentbit.key | sudo apt-key add - The GPG key will be added to your keyring. You must have sudo or root access to perform the next steps. Next we will add the URL of the repository to our source list. deb https://packages.fluentbit.io/ubuntu/bionic bionic main Finally, install the td-agent-bit package, start it, and check that it is running. sudo apt-get install td-agent-bit service td-agent-bit start service td-agent-bit status OK, let's stop the service and explore the configuration files. parsers.conf: Defines how to parse fields within a record plugins.conf: Defines paths to external plugins td-agent-bit.conf: Configures the service, input, and outputs. We'll be editing the td-agent-bit config file for this demo. First we need to add an INPUT; in this case we'll be using the access.log from nginx. The OUTPUT will be configured for an Azure Storage blob container. Our input looks like this: We are using the Tail input plugin which is just like the GNU tail command. In this case we are monitoring the access.log file. The Path key accepts wildcards if you have rotating logs. Additionally we are tagging the input with "nginxaccess" and the file name. We are leveraging the nginx parser, which means the service will look to see how nginx is defined in the parsers.conf file. This can be changed depending on how you want your input formatted. Next we will look at the output: shared_key <my secret key> Here we are telling the service to output using the "azure_blob" plugin. We didn't have to add this to plugins.conf as it is included. I am matching all input entries, though we could get more selective based on tags or filenames. In this case, you use the primary key for the storage account for access. Finally define the container name where you want to land log files. 
The behavior is append which means that as new records come from input, output will append them to the file in the blob. It would be best to configure rotating logs for your nginx service to break up the files based on date/time. Finally, start the td-agent-bit service. Ensure that it is running. If it is not running check the syslog. The fluent bit config files are indentation sensitive. Extra new lines will cause issues. It expects 4 spaces for indents, not tabs. Run the curl command or visit the site in a browser to generate entries in the access.log file. You will now see those entries in your Azure Storage blob. These events can be ingested using Azure Data Factory, ingested in Azure Data Explorer, or referenced as an external table in Azure Data Explorer. In the next blog I will show how to enable fluent bit for containerized nginx and ship logs to Azure Event Hubs using the Kafka endpoint.
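For a rough illustration of what the azure_blob output is doing on your behalf (appending each new log record to a blob), here is a small Python sketch using the azure-storage-blob SDK. This is my own conceptual example, not part of Fluent Bit, and the connection string, container and blob names are placeholders:

from azure.storage.blob import BlobServiceClient

# Placeholders: substitute your own storage account values.
CONN_STR = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
CONTAINER = "nginx-logs"
BLOB_NAME = "access.log"

service = BlobServiceClient.from_connection_string(CONN_STR)
blob = service.get_blob_client(container=CONTAINER, blob=BLOB_NAME)

# Create an append blob once, then append each new log line as it arrives,
# which mirrors the "append" behaviour described above.
if not blob.exists():
    blob.create_append_blob()

with open("/var/log/nginx/access.log") as f:
    for line in f:
        blob.append_block(line.encode("utf-8"))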
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474649.44/warc/CC-MAIN-20240225234904-20240226024904-00267.warc.gz
CC-MAIN-2024-10
4,422
39
http://widdjit.com/index.cfm?sub=explainlikeimfive&f=&link=https%3A%2F%2Fwww.reddit.com%2Fr%2Fexplainlikeimfive%2Fcomments%2F7q96gq%2Feli5neural_network_vs_alphabeta_pruning%2F&com=%2Fr%2Fexplainlikeimfive%2Fcomments%2F7q96gq%2Feli5neural_network_vs_alphabeta_pruning%2F&title=ELI5%3ANeural%20Network%20vs%20Alpha-beta%20pruning&ID=119303245
code
That's a hard question because you're really comparing apples and oranges. The mini-max algorithm is useful in very specific situations. You need to have a well-defined state space (game board and pieces), well-defined state transitions (game rules), adversarial gameplay (players have opposing goals), a low branching factor (few good moves in a given position), alternating turns, and a good static evaluation function (who has the most pieces). When these conditions exist, mini-max and its various heuristics (which include alpha-beta pruning) are an efficient way to conduct a brute-force search through the move tree. A neural network is an artificial intelligence technique that in some ways mimics the human brain. The network learns by being exposed to a large body of input and being "rewarded" or "punished" if its response is right or wrong. It is a very general technique applicable to a wide variety of problems but usually does not give as good results as a special-purpose solution. The two are so different that comparing them is like asking what is a better food, grain or beef Wellington? Grain feeds the world, but you are unlikely to order it in a fancy restaurant. To add to /u/kouhoutek's answer: they are very different tools that are implemented with very different structures - alpha-beta pruning uses a search tree, while an NN uses affine transformations composed with non-affine transformations --- it's really a big composition of functions. These affine transformations are stored in multi-dimensional arrays and are often called (incorrectly) 'tensors'. How this plays out in how they function: alpha-beta pruning iteratively eliminates less-than-optimal subtrees. NNs often have massive numbers of parameters (hundreds of thousands to millions), each of which gets "tuned" or adjusted iteratively.
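For readers who want to see the search-tree side concretely, here is a compact, generic minimax with alpha-beta pruning in Python. It is an illustrative sketch of the standard algorithm, not code from either answer; the game-specific pieces (a children() function for the move rules and a static evaluation function) are assumed to be supplied by the caller:

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Generic minimax with alpha-beta pruning.

    children(state) -> iterable of successor states (the game rules)
    evaluate(state) -> static score from the maximizing player's point of view
    """
    succ = list(children(state))
    if depth == 0 or not succ:
        return evaluate(state)

    if maximizing:
        best = float("-inf")
        for child in succ:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False,
                                       children, evaluate))
            alpha = max(alpha, best)
            if beta <= alpha:      # prune: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for child in succ:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True,
                                       children, evaluate))
            beta = min(beta, best)
            if beta <= alpha:      # prune
                break
        return best

# Tiny usage example on a hand-made tree: leaves are plain numbers.
tree = {"A": ["B", "C"], "B": [3, 5], "C": [2, 9]}
children = lambda s: tree.get(s, []) if isinstance(s, str) else []
evaluate = lambda s: s if isinstance(s, (int, float)) else 0
print(alphabeta("A", 2, float("-inf"), float("inf"), True, children, evaluate))  # -> 3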
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944851.23/warc/CC-MAIN-20180421012725-20180421032725-00439.warc.gz
CC-MAIN-2018-17
1,806
7
https://tannergroehler.com/Next-Music
code
Next Music is an up-and-coming online music festival app out of San Francisco. Through my contract work at Signal / Noise I had the opportunity to create this logo animation and score. Each sound and animation was designed to echo a different style of instrumentation. For the sound portion this included samples of classic songs, edited drum and vocal samples, synths, drum machines, and a few secrets I can’t share. All set in 3/4 time to give it a little bit of bounce.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643380.40/warc/CC-MAIN-20210619020602-20210619050602-00235.warc.gz
CC-MAIN-2021-25
474
7
https://www.betterleadersbetterschools.com/time-blocking-method/
code
When the timer goes off, it’s time to move on. Using a timer and having the discipline to listen to it separates the productive from the unproductive. Time-blocking is not rocket science. Consider the tasks that will create the most value for your organization. Put those on the calendar first. Then, schedule in everything that needs to get done, but isn’t inspiring or significant. Those tasks go on the calendar too but are given the least amount of time. Now set your timer and off you go. Limit your exposure to social media. 30 minutes a day (or less) should do. When you hear the buzz of the timer, move on. Ideally, I check email once a day. 30 minutes is enough. Timer is set. I hear the buzz and move on. Deep work is more important. I might build a new website and work in a 90-minute sprint. I’m writing my second book. That requires a 60-minute sprint. The timer goes off. I move on to the next task. Give the most time to the most important work, but even 10 minutes toward a task means progress.
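If you like automating the buzz, a throwaway script is enough; the sketch below is my own example, not from the article, and simply blocks for a fixed number of minutes per task before telling you to move on:

import time

# (task name, minutes): deep work gets the biggest blocks, admin the smallest.
blocks = [("Write book chapter", 60), ("Build website", 90), ("Email", 30), ("Social media", 30)]

for task, minutes in blocks:
    print(f"Starting: {task} ({minutes} min)")
    time.sleep(minutes * 60)   # the "timer"; use a small value when testing
    print(f"Time's up for {task}. Move on.")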
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707948217723.97/warc/CC-MAIN-20240305024700-20240305054700-00318.warc.gz
CC-MAIN-2024-10
1,070
11
https://wiki.xpolog.com:8443/display/XPOL/Managing+Active+Filters
code
In the Augmented Search pane, under Active Filters, all the filters that were added to your original search query, based on an event or a detected problem, are listed. You can remove a filter from the search query by removing it from the Active Filters list, and you can also restore the original search query. Removing a Filter From the Search Query Removing the filter from the Active Filters list removes it from the search query, and automatically runs the search with the resulting query. To remove a filter from the search query: - In the Augmented Search pane, under Active Filters, click the Remove Filter icon adjacent to the filter that you want to remove. The filter is removed from the Active Filters list, and the resulting search query runs. Resetting the Search Query You can restore a search query to its original state, regardless of the number of filters that have been added to it. To reset a search query: - In the Augmented Search pane, under Active Filters, click Reset. The Active Filters list closes, and the original search query is restored and automatically runs.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072082.26/warc/CC-MAIN-20210413031741-20210413061741-00474.warc.gz
CC-MAIN-2021-17
1,087
12
https://www.game-debate.com/games/index.php?g_id=21295&game=Ada%20Online
code
The world of Ada is torn apart by war. Three great nations fight against each other for dominion over all Ada. The Amaidians in the north east, past the great cliffs, have established their domain near the salty water, where they've built great castles and high towers for thousands of years. Recently they've lost their belief in the lesser gods in order to embrace the Sacred Flame's cult. On the other side of the world, the Hekthons pitched their tents on the great plains millennia ago, in the times of the first men. They are led by different warchiefs, but every tribe follows the Great Warchief. He is one of the few links that keeps all the tribes together. And recently, from the ashes of Devros, a new threat has been awakened by the Exiled One: the Armies of Devros. They devour everything that stands in their way. The Exiled One wants to rule all over Ada, and destroy his old enemies of Amaidia. Ada Online System Requirements 31 Jan 2015 - Specs reviewed OS: Win Xp 32 Processor: Intel Pentium 4 1.8GHz / AMD Athlon XP 1700+ Graphics: AMD Radeon X600 Series or NVIDIA GeForce 210 Ada Online requires at least a Radeon X1900 GT or GeForce GT 340 to meet recommended requirements running on high graphics settings at 1080p resolution. This hardware should achieve 60FPS. Recommended needs around an 11-year-old PC to run.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350717.8/warc/CC-MAIN-20210225041034-20210225071034-00458.warc.gz
CC-MAIN-2021-10
1,417
7
https://plainsight.bamboohr.com/jobs/view.php?id=59
code
This role is 100% remote. Our entire company works virtually, and often across time zones. Your home office is your work location. Plainsight streamlines vision AI for enterprises with new ways to analyze, share and benefit from valuable visual information. Solving problems where others have failed, Plainsight helps the world’s most innovative customers realize the potential of their data through smart, easy-to-use, effective solutions. Our intuitive, low-code platform gives every team across organizations the ability to build, manage, and operationalize computer vision solutions. With actionable insights and unblinking accuracy, Plainsight powers enterprise-ready applications to automate processes, mitigate risk, enhance product portfolios, and increase revenue opportunities. For more information, visit plainsight.ai. Plainsight’s Product and Engineering teams collaborate closely to build novel, proprietary machine learning tooling that enables our solutions teams to deliver enterprise machine vision applications with unique efficiency and quality. We build software and services that simplify the hardest problems in applying computer vision to challenging real world problems – we like to say our tools give our users “CV Superpowers.” We emphasize letting the humans involved do what they’re best at, so we focus on the problems that are harder for humans. Specifically, we prioritize tools that bring clarity to large sets of opaque unstructured data, that help prioritize the most impactful ML approaches/experiments first, and that reduce risk/cognitive load throughout the lifecycle. By helping solution builders scale, our team is the accelerating force of Plainsight’s future, and the Technical Project Manager is a central part of the team. As a Technical Project Manager you will facilitate the software development lifecycle of our computer vision platform and tools. You will drive implementation of machine learning-oriented services and their features, from fleshing out initial requirements with the product and engineering teams, through validation testing with quality teams and users. The TPM role is central in our agile development processes, responsible for creating and consistently driving detailed implementation plans, facilitating sprints and ceremonies, and adhering to development and documentation processes. You’ll be hands-on with everything we build, and will be accountable for improving our key agile delivery metrics over time as well as scaling and maturing processes across teams. At Plainsight, you can expect a collaborative, respectful, innovative and supportive environment where our core values are on display in How We Show Up, How We Think, How We Work With Others, and How We Deliver. We expect all team members to understand, use, and advocate for these core values. Plainsight is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law. The US base salary range for this full-time position is $124,000-$160,000 + equity + benefits. Our salary ranges are determined by role and level. 
The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for relative experience during the hiring process. Please note that the compensation details listed in US role postings reflect the base salary only, and do not include equity or benefits. Learn more about benefits at Plainsight.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00124.warc.gz
CC-MAIN-2023-14
4,079
9
https://www.scm.com/doc.2019-3/ADF/Rec_problems_questions/Electronic_Configuration.html
code
Not specifying occupation numbers in input will not automatically result in the computation of the ground state. It may even lead to non-convergence in the SCF and/or in the determination of minimum-energy geometries or transition states. Therefore: whenever possible, specify occupation numbers explicitly in input (key OCCUPATIONS)! Misunderstanding results of a calculation may easily result from a lack of awareness of how ADF treats the electronic configuration, which orbitals are occupied and which are empty. Unless you specify occupation numbers in input, they will be determined from the aufbau principle, but only during the first few SCF cycles. Thereafter the distribution of electrons over the different symmetry representations is frozen (see the key OCCUPATIONS, options AUFBAU and aufbau2). If at that point the potential has not yet sufficiently relaxed to self-consistency, the final situation may be non-aufbau. A related aspect is that the ground state does not necessarily have an aufbau occupation scheme. In principle, different competing electronic states have to be evaluated to determine which has the lowest total (strongest bonding) energy. Always check the output carefully as to which orbitals are occupied. In general, whenever possible, supply occupation numbers in input. Be aware that the automatic choice by the program may, in a Geometry Optimization, result in different configurations in successive geometries: the automatic assessment by the program will be carried out anew in each SCF procedure. If competing configurations with comparable energies have different equilibrium geometries, the geometry optimization has a high failure probability. The gradients computed from the SCF solution of a particular configuration drive the atoms in a certain direction, but in the next geometry, when the program re-determines the occupations and finds a different configuration, the resulting gradients may drive the atoms in another direction. See the keys CHARGE and OCCUPATIONS for user control of occupation numbers. Spin-unrestricted versus spin-restricted, spin states If your molecule has unpaired electrons, you should run an unrestricted calculation, in principle. However, if this exhibits convergence problems (or if you simply want to save time: an unrestricted calculation takes a factor of 2 more CPU time and data storage), you may consider doing it in two steps. First, run a spin-restricted calculation. Then perform a spin-unrestricted calculation using the restricted TAPE21 as a restart file. In the follow-up calculation you should specify the precise occupation numbers for the state you're interested in, and use the SCF input key to specify only one SCF cycle (iterations=1). This prevents convergence (so you keep the converged restricted orbitals) and gives you a fairly adequate approximation to a converged unrestricted result. See also the H2 example run for a discussion in the Examples document. An unrestricted calculation does not necessarily yield the multiplet configuration (triplet, doublet, …). This is a rather complicated matter; see the discussion on multiplet states, key SLATERDETERMINANTS.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585537.28/warc/CC-MAIN-20211023002852-20211023032852-00699.warc.gz
CC-MAIN-2021-43
3,162
8
https://earthweb.com/zend-updates-php-ide-framework-for-the-cloud/
code
PHP has long been used as one of the primary languages for the web. With the help of some new tools, commercial PHP backer Zend is now helping to position PHP for the cloud too. At the ZendCon conference in Santa Clara, California, Zend today announced the general availability of the Zend Studio 8.0 IDE and the Zend Framework 1.11 PHP application framework. Both the IDE and framework include new cloud-focused features and are part of a new PHP Cloud Application Platform ecosystem that Zend is now building. “PHP was designed from the get-go just for the web,” Andi Gutmans, CEO of Zend, said during his ZendCon keynote address. “Our goal is to solve web problems and that is what has made PHP so productive for web development.” One of the key tools used for PHP development is the Zend Studio IDE, which is based on the Zend-led Eclipse PDT (PHP Developer Tools) open source project. Eclipse PDT 2.2 was released in June as part of the Eclipse Helios project release cycle. Zend Studio 8 builds on top of Eclipse PDT with additional virtualization features for developers. With Zend Studio 8, new integration with the VMware Workstation desktop virtualization solution is being provided to help the application development lifecycle of PHP apps. Gutmans noted that most Zend Studio developers build their apps on Windows or Mac and then deploy their applications onto a Linux production server. “What the integration of VMware Workstation and Zend Studio enables you to do is to run Zend Studio on your Windows or Mac desktop and then run Zend Server in a Linux environment on your desktop that is as similar as possible to your production PHP server,” Gutmans said. Gutmans added that Zend Studio 8 knows which virtual machine it is connected to, and if a file is saved, it will show up in the virtual machine instantly. He noted that the integration will help build a seamless develop, test, and debug experience with a production-like environment. Zend Framework 1.11 Zend is also updating its Zend Framework PHP application framework to version 1.11, introducing new cloud capabilities. With Zend Framework 1.11, Gutmans noted that the SimpleCloud API will be included, which delivers a common API for cloud application deployments. “The goal of SimpleCloud is to deliver a common API that enables you to leverage cloud application services such as storage and database,” Gutmans said. “If you use the common API, your applications are actually portable across clouds.” With Zend Framework 1.11, new mobile support infrastructure to enable cross-platform device deployment is also being integrated. Gutmans noted that new mobile functionality will enable the framework to detect which mobile device is being used to access an application and what the capabilities of the mobile device are. He added that the framework now includes view helpers to ensure that PHP applications are optimized for the specific mobile device that is being used to access an application. Zend PHP Cloud Platform Multiple Zend tools will be part of the upcoming Zend PHP Cloud Platform, which is intended to help enable a full application development and production lifecycle for cloud deployments. Gutmans did not state during his keynote when the Zend PHP Cloud Platform would be generally available. A key part of the cloud platform will be an updated version of the Zend Server PHP application server. “A major feature that we’re missing in Zend Server is critical to enabling the cloud, and that’s application deployment,” Gutmans said. 
He added that in the upcoming Zend Server 6, developers will be able to integrate their development workflow with Zend Studio for cloud deployment. With Zend Studio, developers will be able to export their applications as a package and then deploy it onto a cluster with a high-performance deployment engine in Zend Server. “As your application load goes up and we launch more servers to deal with the load, those servers will know to connect to the cluster manager and get all the applications deployed to it,” Gutmans said. “All the deployment, scale-up and elasticity will work automatically with Zend Server.”
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100232.63/warc/CC-MAIN-20231130193829-20231130223829-00002.warc.gz
CC-MAIN-2023-50
4,173
26
https://support.aacesoft.com/hc/en-us/articles/204498279-How-do-I-reconcile-bank-accounts-
code
aACE provides a bank reconciliation tool for easy account balancing. Navigation: Accounting > Bank Reconciliation. - Start by clicking the New button on the aACE menu bar. - When the new reconciliation screen appears, do the following (see sample below): * Select the account you wish to reconcile. * Enter the statement date as shown on the statement (format as mm/dd/yyyy). * Enter the ending balance shown on the bank statement. * Click the link on the screen to "Build/Refresh Statement Items". - Reconcile the items displayed on the screen against the items appearing on the bank statement. Clear the items listed on the screen by clicking the check box in the Cleared column. - Click the blue Totals button (at the bottom of the list) to calculate/recalculate the totals shown on the screen. - Review the reconciliation totals shown in the box at the bottom of the screen. Your account is in balance when the Reconciliation Error shows zero. Note: You can create additional transactions such as general journal entries during the balancing process, if necessary. When you return to the bank reconciliation, click "Build/Refresh Statement Items" to refresh the screen and show the new entries. - Click Save to save your work. A dialog box appears. - Click Not Yet to save your work but leave the reconciliation in a pending state. Click Clear to complete the reconciliation process. Print reconciled statement reports Navigate to the reconciliation detail screen for the report you wish to print, then click the Print button on the aACE menu bar. Edit pending statement Navigate to the reconciliation detail screen for the report you wish to edit, then click the Edit button on the aACE menu bar. Note: Only pending reconciliations can be edited. Void a cleared statement Navigate to the reconciliation detail screen for the report you wish to void, then click the Actions menu on the aACE menu bar and select Void. Note: Only pending reconciliations can be voided. Delete a pending statement Navigate to the reconciliation detail screen for the statement you wish to delete, then click the Delete button on the aACE menu bar. Note: Only pending statements can be deleted. Automating Bank Reconciliation aACE does not support automatic bank reconciliation out-of-the-box due to the complexity and range of digitized bank feeds originating from banking institutions. If your bank does offer an API that aACE could potentially integrate with, please consult your aACE programmer. Alternatively, if your bank offers a digitized bank statement, you can use the bank statement import feature by selecting "Import Bank Statement" from the Actions menu while in edit mode.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00022.warc.gz
CC-MAIN-2023-06
2,669
28
https://www.grapecity.com/en/forums/spread-winforms/how-to-skip-printing-for-1
code
Posted 8 September 2017, 2:50 pm EST I have to print the whole spreadsheet to a pdf using fpSpread.PrintSheet(-1). But one of the sheetviews is invisible and shall not be printed. It always prints an empty page for this sheet. (The sheet contains only one cell with a user defined cell editor containing a browser). I tried to set the printinfo for this sheet to nothing, but it then tried to print to a printer. How can I skip printing of one or several sheets ? (But with use of fpSpread.PrintSheet(-1) because of page counting and other reasons).
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583509690.35/warc/CC-MAIN-20181015184452-20181015205952-00148.warc.gz
CC-MAIN-2018-43
549
4
https://cornerpirate.com/2018/07/24/grep-extractor-a-burp-extender/
code
Burp Suite's "Intruder" is one of my favourite features. It automates various parts of my job for me by repeating a baseline request with minor variations. You can then check out how a target responded. Unlike the "Repeater" you get a nice table of results and at a glance can find things with different response codes. Basically Intruder is brilliant. Intruder has a feature called Grep Extract which allows you to find content within HTTP Responses and then extract the values. You might want to do this if you are enumerating users by an ID and you want to extract the email addresses for example. I looked but could not find the same functionality via the Proxy History so I made a simple Extender to add that functionality. This blog post covers: - Basic Usage of Grep Extract – showing how to use Grep Extract within Intruder. Why not show the inspiration? - Grep Extractor – showing the code and how to use it. This extender is designed to have the code altered by you when you want to extract something. It has never been easier for you to get your hands dirty and get a new Extender that does something useful!
Basic Usage of Grep Extract
When you are inspecting the results of an intruder attack you can use the "options" tab and "Grep – Extract" down at the bottom to extract data from a response. Here is what the options look like: Click on "Add" to bring up the screen below where you can simply highlight the part you want to extract: In this case the response page has a Credit Card number so I highlighted that part. When you apply that the Intruder results table will update to include a new column with the extracted data: You can export the results to a CSV file via that "Save" menu. This is all very well and good when you are using Intruder. You have seen how Burp provides this feature within Intruder. It uses a nice GUI approach which we are not replicating at all. The following shows the source code for Grep Extractor:

# burp imports
from burp import IBurpExtender
from burp import IBurpExtenderCallbacks
from burp import IExtensionHelpers
from burp import IContextMenuFactory
from burp import IContextMenuInvocation
import re

# java imports
from javax.swing import JMenuItem
import threading

class BurpExtender(IBurpExtender, IContextMenuFactory):

    def registerExtenderCallbacks(self, callbacks):
        self.callbacks = callbacks
        self.helpers = callbacks.getHelpers()
        self.callbacks.setExtensionName("Grep Extractor")
        self.callbacks.registerContextMenuFactory(self)
        return

    def createMenuItems(self, invocation):
        # add a "Grep Extractor" entry to the right-click context menu
        menu_list = []
        menu_list.append(JMenuItem("Grep Extractor", None,
            actionPerformed=lambda x, inv=invocation: self.startThreaded(self.grep_extract, inv)))
        return menu_list

    def startThreaded(self, func, *args):
        th = threading.Thread(target=func, args=args)
        th.start()

    def grep_extract(self, invocation):
        http_traffic = invocation.getSelectedMessages()
        count = 0
        for traffic in http_traffic:
            count = count + 1
            if traffic.getResponse() != None:
                # if the string is in the request or response
                req = traffic.getRequest().tostring()
                res = traffic.getResponse().tostring()

                # start is the string immediately before the bit you want to extract
                # end is the string immediately after the bit you want to extract
                start = ""
                end = ""

                # example parsing of the response. Change res to req if the data is in the request.
                for line in res.split('\n'):
                    if start in line:
                        # extract the string between start and end
                        extracted = line[line.find(start) + len(start):]
                        extracted = extracted[0:extracted.find(end)]
                        # print extracted string, visible in Burp's Extender output
                        print extracted

Nothing too scary in there and the comments should help you out. Let's give one simple example of how to use it. Let's say the site you are targeting has the "X-Powered-By" header. Was that consistent across all responses or did it alter at any point? Perhaps some folder is redirecting to a different backend system and you didn't notice. Modify the start and end strings as shown below:

start = "X-Powered-By:"
end = "\n"

Any data between "X-Powered-By:" and the next newline character will be printed out. Save your code and then reload the Extender within Burp. At this point you can right click on one or more entries in the proxy history and send to Grep Extractor via the option shown below: Any "print" commands issued from the Extender will go to the output for the extender. This is visible on the following menu: Extender -> Select "Grep Extractor" -> Select "Output" tab. The following shows output from the proxy history with our target: It looks like the target site is consistent with its "X-Powered-By" headers. Well we struck out there but hopefully you can see the benefits of getting dirty and dipping your toes in the ocean of Burp Extenders. With relatively little coding knowledge you can get powerful results from Grep Extractor. This example shows how to mark up each request which did NOT include the HTTP header "X-CSRF-Token":

    def grep_extract(self, invocation):
        http_traffic = invocation.getSelectedMessages()
        count = 0
        for traffic in http_traffic:
            count = count + 1
            if traffic.getResponse() != None:
                # if the string is in the request or response
                req = traffic.getRequest().tostring()
                if req.find("X-CSRF-Token:") == -1:
                    traffic.setComment("Request without X-CSRF-Token header")
                    traffic.setHighlight("pink")

This uses the "setComment" and "setHighlight" methods as documented at the following URL: Instead of logging information to stdout this will update all requests within the proxy visibly with a pink background and a useful comment. This does not alter any pre-existing highlights or comments (at least when I tested it). By reviewing the proxy history I discovered the token was consistently set for everything apart from the login form. There was no impact but it helped me get to this answer quickly. This example shows how to print out every "Set-Cookie" directive in the selected responses:

    def grep_extract(self, invocation):
        http_traffic = invocation.getSelectedMessages()
        count = 0
        for traffic in http_traffic:
            count = count + 1
            if traffic.getResponse() != None:
                # if the string is in the request or response
                req = traffic.getRequest().tostring()
                res = traffic.getResponse().tostring()

                start = "Set-Cookie:"
                end = "\n"

                for line in res.split("\n"):
                    if line.find(start) != -1:
                        line = line.strip()
                        print line

I needed to do this when conducting a re-test of an application which had certain cookies set without "httpOnly" and others without "secure" flags. By printing the full "Set-Cookie" directive I even visually caught a few anomalies where rare cases resulted in "secure; secure;". Most likely the result of the framework and then the reverse proxy ensuring the flag was set. It only affected one folder.
Vulnerable Test Site
The data shown in the proxy logs all comes from browsing the vulnerable website from Acunetix available below: This was just to populate my Burp history with a few requests and responses. Hope that helps,
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00583.warc.gz
CC-MAIN-2023-14
7,001
33
https://spidersize.com/spider-man-into-the-spider-verse-movie-reaction-first-time-watching/
code
Today we are watching Spider-Man: Into the Spider-Verse! Enjoy! Subscribe for weekly reaction videos! Leave a comment for what movies or shows you want to see next. MY PATREON (polls, full length reactions, and more!): MY INSTAGRAM: addie_counts A HUGE THANK YOU to my top tier Patreon members! This would not be possible without you: Atomos, Calvin Coderre, Chris Gronau, Danny Miller, deskmerc, E R, Edmund Dantes, Gcvftw, Hold Your Fire, Jake Malone, Jake Skellington, James Flack, Jason Schuler, Jeff Beaufort, Jon Johns, Jon Rice, Justin, Keith, Krzysztof Rozycki, Mario, Michael Wilson, Nathan Swapp, Noby, Nuthel, Oscar Nyholm Westberg, Richard Ryan, Ron McGuirk, Sean Ornelas-Linter, Sonny Smith, The Inedible Mattman, thestaticshadows, Tony Sanson, and Trent Stafford! *Copyright Disclaimer Under Section 107 of the Copyright Act 1976, allowance is made for “fair use” for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Fair use is a use permitted by copyright statute that might otherwise be infringing. Non-profit, educational or personal use tips the balance in favor of fair use. NO COPYRIGHT INFRINGEMENT INTENDED. All rights belong to their respective owners. I have no intent on claiming this footage as my own. I am simply providing commentary and constructive feedback.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103617931.31/warc/CC-MAIN-20220628203615-20220628233615-00148.warc.gz
CC-MAIN-2022-27
1,334
8
https://www.amrita.edu/publication/novel-collaborative-caching-framework-peer-peer-networks
code
Caching is a well-accepted method for curtailing the Internet traffic generated by peer-to-peer (P2P) applications. In general, any form of caching suffers from cache pollution. The pollution is severe in P2P cache systems, especially when the caches act collaboratively. This paper proposes a new collaborative caching framework utilizing an intelligent cache updating scheme for controlling cache pollution without compromising performance. We validate the proposed system through simulations using real-world data, in comparison with other caching algorithms. Test results indicate that the proposed framework drastically reduces cache pollution compared to existing schemes. R. M. Chandran and Dr. Sajeev G. P., “A Novel Collaborative Caching Framework for Peer-to-Peer Networks”, 2018 International Conference on Advances in Computing, Communications and Informatics (ICACCI), IEEE, Bangalore, India, pp. 988-994, 2018.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178375274.88/warc/CC-MAIN-20210306162308-20210306192308-00254.warc.gz
CC-MAIN-2021-10
948
2
https://radicalorange.tv/project/hsbc-people-profile/
code
We partnered with The Smalls to create an employee profile promo for HSBC. In this piece, we focus on Pingping Chen, a Lead Data Scientist at PayMe. Director & Producer: Vikash Autar Cinematographer: Zell Cheung Post-Production: The Smalls Agency: The Smalls Production Company: Radical Orange
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652161.52/warc/CC-MAIN-20230605185809-20230605215809-00464.warc.gz
CC-MAIN-2023-23
293
6
https://oldvcr.blogspot.com/2021/01/
code
If you're puzzled why you can't Telnet into your A/UX machine, nfs0 needs to be set to wait and net9 needs to be set to respawn in /etc/inittab, or incoming connections like Telnet and FTP don't work (or, depending on what inetd you're using, connections may just sit there and inetd fails to spawn the daemon, sometimes for as long as half an hour). This means you need to be running /etc/portmap as well as /etc/inetd; you can't run just inetd. You should probably also upgrade to the jagubox inetd. You might be able to get around this by not using portmap services in /etc/servers but I haven't needed to try that. If you are sitting at the "Welcome to A/UX" dialogue box (i.e., you aren't logged into the machine and you have autologin disabled), you have to select Special, Restart to properly unmount the file systems. Selecting Special, Shut Down bizarrely leaves them dirty (forcing a long and unnecessary fsck on the next boot), and running shutdown from a root console doesn't consistently work right either. So now I have it rigged not to autoboot from the Mac boot partition: I select Restart from A/UX when I'm done, and then when the machine comes back up in the Mac boot partition, the A/UX filesystem is clean and I just shut down the Mac partition without continuing through the boot. The downside is I have to press Cmd-B manually to start the boot when I do want to be in A/UX. This machine runs A/UX with my custom partitioning, which I document in more detail elsewhere. I do have to say that on my clock-chipped Quadra 800 (to 36MHz), A/UX is a real pleasure. If they had ported it to PowerPC natively I bet it could have really been something spectacular even though it was sort of a dog's breakfast under the hood (but in that respect no worse than the classic MacOS).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662541747.38/warc/CC-MAIN-20220521205757-20220521235757-00468.warc.gz
CC-MAIN-2022-21
1,786
4
http://stackoverflow.com/questions/2790299/cant-get-syntax-on-to-work-in-my-gvim
code
(I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm in a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My vim installation is in My .vimrc file is in /home/user/.vimrc, as follows:

fun! MySys()
    return "linux"
endfun
set runtimepath=~/.vim,$VIMRUTNTIME
source ~/.vim/.vimrc

And then, in my

" =============== GENERAL CONFIG ==============
set nocompatible
syntax on
" =============== ENCODING AND FILE TYPES =====
set encoding=utf8
set ffs=unix,dos,mac
" =============== INDENTING ===================
set ai " Automatically set the indent of a new line (local to buffer)
set si " smartindent (local to buffer)
" =============== FONT ========================
" Set font according to system
if MySys() == "mac"
    set gfn=Bitstream\ Vera\ Sans\ Mono:h13
    set shell=/bin/bash
elseif MySys() == "windows"
    set gfn=Bitstream\ Vera\ Sans\ Mono:h10
elseif MySys() == "linux"
    set gfn=Inconsolata\ 14
    set shell=/bin/bash
endif
" =============== COLORS ======================
colorscheme molokai
" ============== PLUGINS ======================
" -------------- NERDTree ---------------------
:noremap ,n :NERDTreeToggle<CR>
" =============== DIRECTORIES =================
set backupdir=~/.backup/vim
set directory=~/.swap/vim

...fact is, the command syntax on is not working in either vim or gvim. And the strange thing is: if I try to set the syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it using the toolbar, running :syntax off works, and just after doing this, trying :syntax on doesn't work!! I have the syntax files in both /usr/share/vim/vim71/ and home folders (in the home folder there's only a python syntax module). I've run sudo aptitude install vim as well and there's nothing to download, EXCEPT vim-gtk, which I didn't install since I was afraid of some kind of incompatibility. What's going on? Am I missing something?
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131293580.17/warc/CC-MAIN-20150323172133-00228-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
1,939
17
http://dirsubmit.net/visual-studio/visual-studio-cannot-create-shadow-copy.html
code
An error you may encounter when running ASP.NET apps with the debugger is "Cannot create/shadow copy 'XXX' when that file already exists". The assembly name in the placeholder can be different each time: typically the DLL is one of the DLLs from Microsoft's Enterprise Library, but it varies, and the same error has been reported for 'Microsoft.Web.Preview' and while working with DotNetNuke. Assemblies loaded by ASP.NET are copied to a shadow copy cache directory and are used from there. The quick fix is to tell ASP.NET not to shadow copy the assemblies in the bin folder: try adding a hostingEnvironment setting to the <system.web> section of your web.config file (got this from http://www.connect.microsoft.com/VisualStudio/feedback/Workaround.aspx?FeedbackID=227522 and it seems to have fixed the problem; a sketch is given below). Also, make sure there aren't any "extra" spaces in the line after pasting it in. The main drawback of disabling shadow copy is that Visual Studio may then be unable to replace the assemblies in bin while the site is running.
You usually get this message in a debug/edit/debug cycle after successfully building a C# solution and pressing F5 to debug it, and it comes up often if you have been debugging or working with some code and then make a quick change in an App_Code directory file. The typical cycle, here on Visual Studio 2005/ASP.NET on localhost (the same thing has been reported with an MVC4 application): make a change (code or HTML); if it was code, rebuild the project (keyboard shortcut Ctrl+Shift+B); refresh the page; see the annoying error; clean the solution; rebuild the solution; refresh the page; verify. If you clean the solution and then build, the error goes away; otherwise it occurs roughly half the time. One poster asked: I am not asking what the error means, or how to fix it, but what am I doing to cause it? I make a code change, press F5 (the Run/Play button), get the error in the web browser, close the web browser, press F5 again - then I get the error again and repeat. One theory is that the worker process is holding the file: I still got the error until I ended the aspnet_wp.exe process, and killing the worker process seems to fix it. Another user found that attempts to stop it happening failed, and even introduced different problems, but whenever it occurred a Rebuild Solution made the problem immediately disappear for that build - rebuild and pause solves it 100% of the time for some, and having an SSD installed makes disk latency an unlikely explanation. Reportedly even leaders of ASP.NET have had this problem live. Another commenter's theory is that what causes the issue is generating (or not) that first file, and when and how it is copied, regardless of what your actions are; one more report says the error occurred only after enabling ASP.NET health monitoring/web events, with the site hosted in IIS Express. A closely related error is "Unable to copy file obj\Debug\xyz.dll to bin\Debug\xyz.dll. The process cannot access the file bin\Debug\xyz.dll because it is being used by another process."; one user fixed that by changing the file's copy setting to "Do Not Copy".
After a bit of research there is also a pre-build event workaround which seems to be popular: it moves the locked DLL out of the way before the build, but it does not seem to work any more (perhaps it did in previous versions of Visual Studio), and for at least one user the command "IF EXIST c:inetpubwwwrootehp2.2WebbinEHP.Web.dll.LOCKED (del c:inetpubwwwrootehp2.2WebbinEHP.Web.dll.LOCKED) ELSE (IF EXIST c:inetpubwwwrootehp2.2WebbinEHP.Web.dll (move c:inetpubwwwrootehp2.2WebbinEHP.Web.dll c:inetpubwwwrootehp2.2WebbinEHP.Web.dll.LOCKED))" simply exited with code 9009. Another workaround that may be more appealing to some, courtesy of Gary Farr: http://blogs.claritycon.com/blogs/gary_farr/archive/2007/03/09/2888.aspx. The bug is really hard to reproduce, which makes fixing it more difficult; it was reported (and later resubmitted) on Microsoft Connect, acknowledged as a Visual Studio 2012 bug the team was trying to solve, and a trial version of Visual Studio 2013 with the fix can be installed from http://go.microsoft.com/?linkid=9832436.
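A minimal sketch of the web.config workaround referred to above, assuming the standard ASP.NET hostingEnvironment element under system.web (merge it into your existing configuration rather than adding a second root element):
<configuration>
  <system.web>
    <!-- Stop ASP.NET from shadow copying the bin assemblies into its temporary cache. -->
    <!-- Trade-off noted above: the DLLs in bin may then stay locked while the site runs. -->
    <hostingEnvironment shadowCopyBinAssemblies="false" />
  </system.web>
</configuration>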
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813187.88/warc/CC-MAIN-20180221004620-20180221024620-00032.warc.gz
CC-MAIN-2018-09
7,505
23
https://kickmybeat.com/beat/1kjqXgWM9L/
code
Music is competition. Music from https://filmmusic.io - "Stealth Groover" by Kevin MacLeod (https://incompetech.com), License: CC BY (http://creativecommons.org/licenses/by/4.0/). Nobody uploaded content related to this beat yet. riddimvibration did not upload the original song related to this beat. Let's get high, sweetie! I got goosebumps. It means this is special.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00125.warc.gz
CC-MAIN-2023-40
429
11
https://www.simplemachines.org/community/index.php?action=printpage;topic=577588.0
code
Hi, I've recently set up a separate WordPress website which now sits around my forum. The forum, set up 13+ years ago (!), was established in a sub-directory ( ??? ??.com/forum) and previously ran the few webpages we had through SimplePortal. The Portal was set to 'Front Page' mode so that anyone visiting the main domain or logging in to the forum was redirected to the 'Home' page set up on SP. I didn't set up a redirect; this was managed either by SMF being set up in a sub-directory or by SimplePortal through its 'front page' setting. I honestly can't recall what was set up 13 years ago or how, but that's how it all worked. So now I have a new home page and various other pages set up in sub-directory ??? ??.com/web. All working great with one exception. Anyone new visiting us using the root domain ??? ??.com (or its variants) still gets redirected to the /forum. I've tried setting a redirect on my server but that just results in a browser error as the max number of redirects is exceeded - I think the server redirect creates a loop with the forum or SP redirect but I'm not really sure :( This may be a SimplePortal issue, but I seem to remember that prior to using SP, when we just had SMF, it used to redirect from the root to the forum. Can anyone explain what I need to do to resolve this? I have switched SimplePortal to Integrated mode, which avoids it using its old home page when users access the forum; instead they now get the Board Index as usual. But visitors to lrsoc.com are also being redirected to the Board Index - e.g. the /forum rather than the home page /web. I hope that makes sense to someone... My host techies have been unable to resolve it, referring me to the forum software. Many thanks for any help you can offer. Reply: Has to be an htaccess redirect. Your site seems to work fine for me - I went to the .com URL and ended up at .com/web, not /forum. I bet people have your forum bookmarked and that is causing the issue.
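A minimal .htaccess sketch of the kind of redirect suggested in the reply, placed in the web root - it matches only the bare document root, so requests that are already for /forum or /web are left alone and no redirect loop is created (the /web target comes from the post; the permanent-redirect flag is an assumption):
RewriteEngine On
# Send requests for the bare domain root to the WordPress pages in /web
RewriteRule ^$ /web/ [R=301,L]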
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00585.warc.gz
CC-MAIN-2021-39
1,945
10
https://geodatascience.hw.ac.uk/events/
code
- 9 May 2022: 2nd GeoDataScience and UQ group research dissemination on-line open day. The programme for the day will offer a series of presentations across various aspects of machine learning applications to reservoir uncertainty modelling workflows: seismic interpretation, geological pattern recognition from outcrops, fluvial facies modelling with GANs, dynamic data integration into fracture reservoir modelling and multi-objective optimisation of reservoir production to tackle CO2 emission targets. In the afternoon we will present a new HWU JIP initiative on Uncertainty quantification of geomechanically sensitive reservoirs and welcome interested companies to join. - Learning geological patterns from outcrops by using computer vision methods, Athos Nathanail - A GAN-based Workflow for 3D Fluvial Facies Modelling, Chao Sun - Integrating geological uncertainty and dynamic data into modelling procedures for fractured reservoirs, Bastian Steffens, PhD overview - Well Grouping and Control Optimisation for CO2 Emission Offset in Field Production, Amirsaman Rezaeyan - New JIP: GMUQ – Uncertainty quantification of geomechanically sensitive reservoirs - February 2022: “GeoScience Meets DataScience” – Researcher Links workshop to be hosted by Heriot-Watt, sponsored by the British Council. - February 2021: GeoDataScience group open on-line research dissemination day for industry and academia, with over 70 participants from dozens of companies: - Turbidite fan interpretation in 3D seismic data by point cloud segmentation using Machine Learning, by Quentin Corlay - Machine Learning for sedimentary structure classification, by Athos Nathanail - Modeling variations of complex geological concepts with Generative Adversarial Network (GAN) learning from process modelling, by Chao Sun - How Generative Networks can help improve geological history matching, by Gleb Shishaev - A workflow with dynamic screening assisted, automated fractured reservoir modelling, by Bastian Steffens - Can agents model hydrocarbon migration?, by Bastian Steffens & Quentin Corlay
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474377.60/warc/CC-MAIN-20240223085439-20240223115439-00399.warc.gz
CC-MAIN-2024-10
2,079
19
https://sosyalhesapsil.com/how-to-delete-all-social-media-accounts/
code
The logic for deleting social accounts is generally the same. Although it depends on the type of your social media account, the way to delete an account is through the settings menu. Take a facebook.com account as an example: to delete the account, you need to go to the settings page, open the account section in the settings, and you will see a button like "Delete Account" at the bottom. All accounts are usually deleted through a menu path like this: Menu / Settings, Menu / Profile, Menu / Account. Deleting an account on this site: we explain in detail how to delete accounts on more than 50 social networking sites, and we add direct account-deletion links at the bottom of the pages. You can immediately find and delete your social media accounts by searching the site. Even if some accounts can be accessed from mobile devices, account deletion cannot always be performed from mobile. It is possible for Facebook, but some sites and apps don't allow it; in fact, we can think of this as them drawing out the account deletion process. Still, sites that make it easy to delete accounts are increasing day by day. Caution when deleting an account: if you make an incorrect or incomplete transaction, your account will not be deleted. To make sure you have really deleted a social account (especially one with a paid membership), try logging in after deleting your account completely. The response will probably be something like "This user was not found!", "This account was not found", "This account was deleted", or "Undo account deletion". If you can still log in (Facebook, for example), you must wait for the account deletion period to end. In other cases, repeat the deletion process until the account deletion is finished. Also check your internet connection: a deletion will not go through over a broken internet connection. If possible, try the account deletion from another computer.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.96/warc/CC-MAIN-20230924223409-20230925013409-00543.warc.gz
CC-MAIN-2023-40
1,828
11
https://www.nvisage.co.uk/latest-news/seo-and-responsive-design-for-mobile-devices/
code
We are fans of responsive design for a lot of reasons, not least because it's the approach to mobile that Google advises. When you look at the reasons it's not hard to see why. In essence you only have one site, with the value of any link to any of your pages shared across a single site rather than split across mobile and desktop versions. Google actually recommends using responsive design as the optimal design methodology where possible. This means that the same URL serves all of the pages whether the user is on a desktop or a mobile device; it is just the CSS that changes how the content is displayed. The benefits are: Links travel well between devices - with a separate mobile site, if a user sends a link from the mobile site to a desktop user, the experience is very poor as the desktop user ends up viewing the mobile page on a desktop, whereas with a responsive site the same URL works for both. Google only needs to index once, as all URLs are the same for mobile and desktop - it indexes the site just once rather than crawling a separate mobile site with its mobile Google crawler as well as the desktop site. And if it's better for Google, it's better for your search engine optimisation. You are not weakening your "link juice": any link to a page adds value to the whole site, whereas if you had a sub-domain for mobile, links to those pages would not add value to the desktop site as they would only reference the mobile site, and vice versa. It is more cost effective to develop a responsive design because the design automatically sizes to meet the resolution of the device it is viewed on; you don't need different versions for different mobile platforms and different resolutions, which makes it better able to cope with the changing demands of new phones and reduces development costs. Responsive sites also allow the content to be indexed, which means Google can lead new visitors to your responsive site, whereas you have to promote an app - any user-generated content in apps cannot be indexed and searched by Google, so it cannot add value back to your main site.
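A minimal CSS sketch of the "one URL, the CSS decides the layout" idea described above - the class name and breakpoint are arbitrary placeholders, not taken from any particular site:
/* Default (desktop) presentation of the page */
.content { width: 960px; margin: 0 auto; }
/* The same HTML at the same URL, restyled for small screens */
@media (max-width: 600px) {
  .content { width: 100%; margin: 0; }
}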
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583083.92/warc/CC-MAIN-20211015192439-20211015222439-00408.warc.gz
CC-MAIN-2021-43
1,948
9
https://community.spiceworks.com/topic/2140898-an-easier-ip-sla-routing-table-question
code
I have posted another question about two Dialer interfaces, IP SLA, and the routing table. No one has taken a stab at answering it, so I thought I would ask an easier question that would help me understand better how things work, and then I can make some headway on my problem. I have set up some IP SLA echo probes like the ones below, and I have specified an interface for sending the probe. The probe will fail unless I have DialerX in the routing table. Why is the routing table even involved? ip sla 11 icmp-echo 188.8.131.52 source-interface Dialer1 ip sla 21 icmp-echo 184.108.40.206 source-interface Dialer2 If my default route is 0.0.0.0 0.0.0.0 Dialer1, then the Dialer1 probe will work and the Dialer2 probe will not. If I change the default route to Dialer2, then Dialer2 will work and Dialer1 won't. I am just trying to understand what is happening in the router such that routing table entries are needed for an interface-specific probe. This will help me understand what is going on. Reply: An interface is not an IP, even though it may have an IP associated with it. An interface is not a route; the route is the IP of the next hop beyond the interface. That next hop IP (or multiple next hops) could be anything (or many anythings). (For example, let's say I have a router interface at x.x.x.1 and it can reach three other routers at x.2, x.3, and x.4. Each of those routers could be the next hop for a different campus. The routing depends on the destination, but can't be determined from the source interface alone; nor does it have anything to do with the default route.) If I tell you to send a packet to 220.127.116.11 via eth3, what is your target IP? You can't answer that question, and neither can the device without more information. At a quick glance, see if this helps. (It's too early in the morning for profound thought.) Thanks for your reply Robert5205. In my case, the interface that I am using is a Dialer interface with the PPPoE encapsulation protocol - the interface IS the next hop - and I have no access to the next-hop IP address. If I execute the ip sla icmp-echo 18.104.22.168 source-interface Dialer1 request with an empty routing table, the operation will fail. If I add 0.0.0.0 0.0.0.0 Dialer1 to the routing table, the request will succeed. It doesn't seem like I am adding any more information to the situation with the routing table addition. My background is operating systems programming; I was just trying to understand how things work internally. Even though I specify a destination address and interface in the icmp-echo operation, IOS must look at the routing table for some purpose. Or maybe the routing table is used when the reply to the echo comes back. I was trying to get an idea of the mechanics of the whole process. Reply: I think that you, gregorylaird, need to take some hours of CCNA routing and switching study to fully see the environment that you are managing. The full mechanism is not something that can be explained in a few simple words on a forum; everything has a why and a how. I am trying to figure out the whole point of your question, but it looks like you are trying to modify an HA setup configured on your Cisco router. If you have time, see this topic - it may be helpful for you; it will not be enough on its own, but it will surely help. All the detailed and granular info can be found here: Thanks for taking the time to read my post. I have read both documents that you referenced, and I have read a number of Cisco CCNA documents/books, so I have a little idea of what is going on.
My question is more about what happens internally when IOS runs an IP SLA icmp-echo operation. The operation will fail if there is not an entry in the routing table, though I would think an entry would not be necessary if I specify a dialer interface on the icmp-echo command. I was just wondering why IOS finds it necessary to use the routing table since the interface is already specified. For example, I can have an empty routing table but have a PBR route-map with a set interface DialerX command, and IOS will happily route packets to the Dialer. So why does IP SLA need the routing table when the interface is specified? This is probably a question for someone who knows the IOS guts of a router.
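A sketch of how probes like these are commonly paired with the routing table on IOS - the probe targets and numbers are the ones quoted in the post, while the schedule and host-route lines are assumptions about a typical dual-dialer setup rather than the poster's actual configuration:
ip sla 11
 icmp-echo 188.8.131.52 source-interface Dialer1
ip sla schedule 11 life forever start-time now
ip sla 21
 icmp-echo 184.108.40.206 source-interface Dialer2
ip sla schedule 21 life forever start-time now
! source-interface only chooses the probe's source address; the outgoing
! interface is still picked by a normal routing-table lookup on the target,
! which is why a probe fails when no matching route exists.
! Pinning a host route per target sends each probe out its own dialer:
ip route 188.8.131.52 255.255.255.255 Dialer1
ip route 184.108.40.206 255.255.255.255 Dialer2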
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057199.49/warc/CC-MAIN-20210921070944-20210921100944-00565.warc.gz
CC-MAIN-2021-39
4,147
26
https://github-wiki-see.page/m/aces/cbrain/wiki/Execution-Servers
code
Execution Servers - aces/cbrain GitHub Wiki An execution server or Bourreau is a Rails application. In CBRAIN you can configure as many execution servers as you want, each one corresponding to a computing site where tasks are performed. Consult the Bourreau Setup documentation for information on how to set up an execution server. - Go to the "Servers" tab. - Click "Create New Server". - Fill in the form: - Name: The name of the execution server. Note: The name must also be changed accordingly in the config file Bourreau/config/initializers/config_bourreau.rb for the server to restart properly later on. - System 'From' reply address: This is optional. If this is set, then messages sent automatically by the system contain this return address. - Description: The first line is a short description which is shown in the Servers table. After that, any special note for the users can be added. - Owner: The owner of the execution server. - Project: Access to an execution server can be limited to members of a specific project. See the Projects section for more information. - Status: The execution server can be "online" or "offline" in CBRAIN. If the execution server is created with "offline" status, then it is not accessible to users. - Timeout for is alive check (seconds): Time after which an execution server is considered not alive. - Time Zone: The time zone of the execution server. SSH Remote Control Configuration: - Hostname: The UNIX hostname where the execution server is installed. - Username: The UNIX username on the host. - Port Number: This is optional and is usually 22 for SSH. If your SSHD server listens on a different port, then specify it here. - Rails Server Directory: The full path where the Bourreau Rails application code is installed, for instance, /home/user/cbrain/Bourreau. - Second-level effective host: Sometimes you will have to enter a second level of connection. - Database Server Remote Tunnel Port: The choice of port number is arbitrary and can be any number between 1024 and 65535. The Bourreau application uses this port on the remote host to connect back to the MySQL server used by BrainPortal. The tunnel is set up automatically, so it is only necessary to make sure this port number is not in use by any other application on the host where the Bourreau runs. - ActiveResource Remote Tunnel Port: Again, the choice of port number is arbitrary, but this time the port is open on the BrainPortal side and allows the BrainPortal to send commands to the Bourreau side. Cache Management Configuration: - Path to Data Provider caches: Each Bourreau needs its own directory to cache data. Create a new empty directory on the Bourreau's host and enter its full path here. As usual, make sure this directory is not shared with anything else and not even used as a cache by any other CBRAIN Rails application. If the Bourreau is on the frontend of a supercomputer, then this directory should be on a filesystem visible from all the compute nodes of that supercomputer. - Patterns for filenames to ignore: Enter any particular pattern of filenames to ignore; typically the '.DS_Store' and '._*' files are ignored. - Cache Expiration Timeout - Tool Version Configuration: A tool config can only be created for an existing execution server, so create the execution server first and then create the associated tool config afterwards. See the section about Tools for more information. Cluster Management System Configuration: - Type of cluster: The Bourreau schedules tasks on a supercomputer cluster.
Enter here the type of cluster you have access to on the machine where the Bourreau is installed. Typically, supercomputers have cluster management systems with names like SGE (Sun Grid Engine), Torque or MOAB. UNIX can also be selected, in which case no cluster management system is used and the Bourreau simply launches the tasks as standard UNIX processes. - Path to shared work directory: Just like for the cache directory, this is configured with the full path to an empty directory on the Bourreau side. And again, it should not be shared with any other resource. This directory is the location where subdirectories are created for each task launched on this Bourreau. If the Bourreau is on the frontend of a supercomputer, then this directory should be on a filesystem visible from all the compute nodes of that supercomputer. - Default queue name: Name of the queue. - Extra 'qsub' options: Extra options for qsub. Bourreau Workers Configuration: - Number of Workers: Configure a small number of worker subprocesses that are launched on the Bourreau side to handle the tasks running there. In the original platform there are usually two to four workers for each execution server. - Check interval: The interval used by the workers to check for a new task. - Log destination: The default is good for production; it can be changed in development. - Log verbosity: The default is good for production; it can be changed in development. - Task Limits: The task limits can only be defined once the execution server is created. Useful when you want to limit the number of active tasks for a specific execution server. Note: The original author of this document is Natacha Beck
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819668.74/warc/CC-MAIN-20240424143432-20240424173432-00097.warc.gz
CC-MAIN-2024-18
5,191
39
https://www.brasilliant.com/post/on-the-subject-of-resumes
code
On the Subject of Resumes Updated: May 21, 2021 Thank you again Sonia Michaels for providing some great material I could use for this post, and also @slizagna on twitter for a very helpful thread! Always keep your resume updated! I don't care if you have a cool job, in this industry you never know when you might need it again! Puh-lease keep in mind that this advice is for the US Games Industry. Resume culture changes depending on location/industry. I've had non-USA located people in a completely different industry ask me for a resume review, and while I love to help people I don't think I'm suited for every situation. I like to have a general resume which I always keep available on my website, and whenever I'm applying to positions I make copies that I modify and tailor to the role I'm applying for. (By the way, if you got any feedback on my resume I'm always open!) In terms of content - Your name (I have a long Brazilian name, so mine says "Fernanda G.R. Coelho") - Your title (Technical Artist, Software Engineer, etc.) - Contact info (email, phone, website, LinkedIn, general location, etc.) - Work experience As a standard, it's nice to have 3-5 bullet points per role explaining your responsibilities/accomplishments. Here's a guide to write effective bullet points! Include studio name and general location. List the time period you were on the role with a MONTH-YEAR standard. Some people also list the number of people in their studio (I decided not to because I've worked at well-known studios) - Education (include full degree name, institution, general location, date of completion or expected date of completion, any extra honors if you'd like) Remember to include: Game Engines, Programming Languages, Software, Content Management Software (Perforce, GitHub), actual languages (if you're multilingual). If you want to take it to the next level, include: Other work experience if transferable, awards, honors, nominations, hosted talks/panels, publications, cool volunteer opportunities, etc. Keywords! Look at the job description and look for any keywords used, and include those in your resume wherever you can. Chances are your resume will be scanned by an algorithm that checks for those. (some examples: "(...)familiar with Unreal(...)", "(...)+3 years experience with Python scripting(...)" , and so on) When listing accomplishments in your work-bullet points, include numbers whenever you can. (For example: "Wrote +10 tools in Python to optimize Maya workflow", "Rigged 20+ biped characters in Maya.", etc.) Things I have seen that are kiiiiinda weird (in the US at least!): - Your address, like street and apartment/house number! - A photo of you. - Your gender. - GPA (unless you have like, a 3.6+) - The "objective statement" is super old and not necessary these days. - Don't include "work"/ volunteer experience from when you were 13 years old, make sure all the experience on your resume is somewhat fresh. In Terms of Layout - Clarity always above anything else! I'm looking at YOU, art students! When I was in college everybody wanted their resume to stand out, but I only remember seeing a bunch of resumes that had overwhelming colors and designs; LOL Keep it simple, people! - PLEASE don't use the "HP bar" thingy for skill proficiency! You know what I'm talking about babey! Like seriously, what the hell do these mean?: Are you "kinda good at HTML but not really"??????? - Try not to include icons, those usually get in the way of algorithms and can take up space. 
- Be smart with color, spacing and font size; use italics/bold to contrast things in your resume and keep it easy/pleasant to read! - Keep your resume to one page. Unless you're 20+ years in the industry or somethin' - Save your resume as a .pdf. Students have the extra challenge of not having much work experience. How do I make my resume look more professional then? If you have student projects (like game teams or animation shorts), format them in your resume the same way you'd format regular work experience. Don't LIE. Let people know they were indeed student projects, but list your accomplishments, team size, time period and team name like you would for a real studio. I wish I had my original student resume to show you but I believe I wrote something like this: Another thing: if you have been a tutor or teacher's assistant that is also valuable and could be included as work experience! That's it for today, I hope this helps!
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510575.93/warc/CC-MAIN-20230930014147-20230930044147-00686.warc.gz
CC-MAIN-2023-40
4,454
40
https://research.fb.com/publications/a-hitchhikers-guide-to-fast-and-efficient-data-reconstruction-in-erasure-coded-data-centers/
code
Erasure codes such as Reed-Solomon (RS) codes are being extensively deployed in data centers since they offer significantly higher reliability than data replication methods at much lower storage overheads. These codes however mandate much higher resources with respect to network bandwidth and disk IO during reconstruction of data that is missing or otherwise unavailable. Existing solutions to this problem either demand additional storage space or severely limit the choice of the system parameters. In this paper, we present Hitchhiker, a new erasure-coded storage system that reduces both network traffic and disk IO by around 25% to 45% during reconstruction of missing or otherwise unavailable data, with no additional storage, the same fault tolerance, and arbitrary flexibility in the choice of parameters, as compared to RS-based systems. Hitchhiker “rides” on top of RS codes, and is based on novel encoding and decoding techniques that will be presented in this paper. We have implemented Hitchhiker in the Hadoop Distributed File System (HDFS). When evaluating various metrics on the data-warehouse cluster in production at Facebook with real-time traffic and workloads, during reconstruction, we observe a 36% reduction in the computation time and a 32% reduction in the data read time, in addition to the 35% reduction in network traffic and disk IO. Hitchhiker can thus reduce the latency of degraded reads and perform faster recovery from failed or decommissioned machines.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657907.79/warc/CC-MAIN-20190116215800-20190117001800-00378.warc.gz
CC-MAIN-2019-04
1,494
2
https://www.digitalfieldguide.com/blog/date/2019/09
code
Today I am thinking about a quote from Philip Pullman that I read in a recent interview with the author in The New Yorker Magazine: “Reason is a good servant but a bad master.” Without tools of construction—and reason—it is hard to build an image like Dawn Chorus Unbound. But with too much reasoning in advance, one loses the Beginner’s Mind advantage: a sense of play, and being open to creative serendipity. In art, as in life, I try to keep a balance between analytic rigor and flexible thinking. One needs both modes, truly one does.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224647639.37/warc/CC-MAIN-20230601074606-20230601104606-00277.warc.gz
CC-MAIN-2023-23
548
3
https://club.myce.com/t/nero-vision-express-3-and-import-disc-problem/133612
code
I am a real newbie. I have been reading Nero threads here most of the day and it appears this certainly is the place to get help. I have heard nothing from Nero so far. I am trying to make a DVD slide show with Nero Vision Express 3. My operating system is Windows XP - home ed. Because of the number of photos, my slide show will have 2 titles. I made the second title and burned it to a DVD. I then made and saved my first title. When I import the disc to add the second title to the first, the photos appear fine but the audio sounds like it is in slow-mo in the second title. All the audio files are MP3s. That DVD plays great in my DVD player but I cannot seem to import it without it getting jumbled. Also, the first MP3 in the first title plays fine in other programs but does not start at the beginning of the song all the time when playing the slide show in the edit portion of the program. Sometimes it does but most of the time, it does not. The rest of the first title plays fine even after the disc is imported. I sure hope someone can help. Very frustrating.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038916163.70/warc/CC-MAIN-20210419173508-20210419203508-00480.warc.gz
CC-MAIN-2021-17
1,075
6
https://dcp.psc.gov/OSG/dentist/scwg-recruitment.aspx
code
What We Do: The Recruitment Workgroup of the Dental Professional Advisory Committee (DePAC) focuses on utilizing innovative strategies for recruiting and retaining dentists to agencies staffed by the United States Public Health Service. Members of the working group are dentists from various federal government agencies and may be USPHS officers, civilians, or tribal employees. Dental Officer Presentation: https://dcp.psc.gov/osg/dentist/recruitmentactivities.aspx USPHS Dental Webpage: https://usphs.gov/profession/dentist/ Best Kept Secrets: https://dcp.psc.gov/OSG/dentist/documents/BKS202005_v2.pdf Page Last Modified on 10/11/2020. This page may require you to download plug-ins to view all content. Persons with disabilities having problems accessing any PDF or document on this page may call 1-888-225-3302 toll free for assistance.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880038.27/warc/CC-MAIN-20201022195658-20201022225658-00437.warc.gz
CC-MAIN-2020-45
836
5
https://suse.me/tags/cpu-benchmark
code
Applications tagged "cpu-benchmark" |Novabench||NovaBench is a popular component benchmark application for Windows and Mac OS X. It's the most convenient way to test and compare your system's hardware and graphics capabilities! …| |UserBenchMark||Free benchmarking software. Compare results with other users and see which parts you can upgrade, together with the expected performance improvements. …| |SuperPI||Super PI is a single-threaded benchmark that calculates pi to a specific number of digits. …|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141193856.40/warc/CC-MAIN-20201127161801-20201127191801-00351.warc.gz
CC-MAIN-2020-50
520
4