url: string (lengths 13–4.35k)
tag: string (1 class)
text: string (lengths 109–628k)
file_path: string (lengths 109–155)
dump: string (96 classes)
file_size_in_byte: int64 (112–630k)
line_count: int64 (1–3.76k)
https://hgvs-nomenclature.org/en/latest/recommendations/RNA/alleles/
code
Allele: a series of variants in a transcript from one chromosome. Format (one allele): "prefix"["change1";"change2"], e.g. r.[123g>a;345del] - "prefix" = reference sequence used = r. - [ = opening symbol for allele = [ - "change1" = description first variant = 123g>a - ; = separator symbol two changes = ; - "change2" = description second variant = 345del - ] = closing symbol for allele = ] Format (two alleles): "prefix"["change"];["change"], e.g. r.[123g>a];[345del] - "prefix" = reference sequence used = r. - [ = opening symbol for allele-1 = [ - "change" = description variant = 123g>a - ];[ = closing symbol for allele-1, separator symbol two alleles, opening symbol for allele-2 = ];[ - "change" = description variant = 345del - ] = closing symbol for allele-2 = ] - all variants should be described at the DNA level, descriptions at the RNA and/or protein level may be given in addition - humans are diploid organisms and have two alleles at each genetic locus, with one allele inherited from each parent - when two variants are identified in a transcript that derive from one chromosome (in cis) this should be described as "r.[variant1;variant2]". - when two variants are identified in transcripts that derive from different chromosomes (in trans) this should be described as "r.[variant1];[variant2]". - when two variants are identified in a transcript, but when it is not known whether these derive from one chromosome (in cis) or from different chromosomes (in trans), this should be described as "variant1(;)variant2", i.e. without using "[ ]". NOTE: it is recommended to determine whether the changes are in the same transcript or not. - when two variants are identified in two different transcripts that derive from one variant at the DNA level the variants are separated using a ","; p.[variant1,variant2]". For more examples see DNA alleles. - variants on one allele - LRG_199t1:r.[76a>u;103del]: one transcript contains two different changes, r.76a>u and r.103del. The variants are found in cis. - LRG_199t1:r.[(578c>u;1339a>g;1680del)]: one transcript contains three different predicted changes, r.(578c>u), r.(1339a>g) and r.(1680del). The variants are found in cis. - variants on two alleles - LRG_199t1:r.[76a>u];[103del]: the two transcript alleles each contain a different change, r.76a>u and r.103del. A heterozygous case (compound heterozygote, e.g. in a recessive disease). The variants are found in trans. - NM_004006.2:r.[76a>u];[76a>u]: both transcript alleles contain the same variant, r.76a>u. A homozygous case (e.g. in a recessive disease).: NOTE: LRG_199t1:r.76a>u(;)(76a>u) indicates analysis detects one variant (r.76a>u), suggesting both transcript alleles contain this variant, but it can not be excluded the other allele is deleted or not expressed. - LRG_199t1:r.[76a>u];[76=]: one transcript allele contains a variant, r.76a>u, the other transcript allele contains at position r.76 the reference sequence, r.76= (is wild-type).: NOTE: the description r.[76a>u];[=], containing r.76a>u and r.=, is different since it indicates the entire coding RNA reference sequence was analysed and the only variant identified was r.76a>u (on one allele). - NM_004006.2:r.[76a>u];[?]: one transcript allele contains a variant, r.76a>u, while a variant in the other transcript allele is expected but not yet identified (r.?) (e.g. in individuals affected by a recessive disease). 
- alleles not certain - NM_004006.2:r.76a>u(;)103del: two variants are found in a transcript, r.76a>u and r.103del, but it is not known whether they derive from the same or from different transcript alleles (chromosomes). NOTE: when it is not known on which allele a variant is, allele brackets should not be used - one allele, two transcripts - LRG_199t1:r.[897u>g,832_960del]: two different transcripts, r.897u>g and r.832_960del, derive from one variant (LRG_199t1:c.897T>G at the DNA level) Was the original recommendation to use the format [r.76a>c+r.83g>c]? Indeed, originally (den Dunnen and Antonarakis, 2000) the suggestion was to describe two changes in a transcript from one chromosome as [r.76a>c+r.83g>c], i.e. using a "+"-character to separate the two changes, while an earlier publication suggested using a ";" ([r.76a>c;r.83g>c]) (Antonarakis and the Nomenclature Working Group, 1998). To prevent confusion with older publications, to improve overall consistency and to keep descriptions as short as possible, the 2000 proposal was retracted. The recommended format is r.[76a>c;83g>c]. In recessive diseases, is it important that I show which variants were found in which combination? When you find more than one variant in one individual, it is essential that you clearly indicate which variant(s) were found and in which transcript alleles: - disease severity will depend on the combination of variants found, - in recessive disease, when two variants are in one transcript the individual is a carrier, or you might not have found the variant on transcripts from the second allele. I find the notation r.[76a>c] without describing the second transcript allele misleading; not enough researchers know this refers to only one of the two transcripts present. Would using r.[76a>c]; be OK? No, the recommended description is r.[76a>c];[76=], i.e. r.76= for "no change" at position r.76 on the second transcript. How should I describe the variants detected in males and females for a transcript from the X-chromosome? In females the description is straightforward, like r.[76a>c];[=]. In males there is no transcript from the second allele (X-chromosome), which can be described as r.[76a>c];[0], i.e. using "r.0" to indicate the absence of a transcript from the second X-chromosome.
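As a rough illustration of the cis/trans/unknown distinction described above, here is a small Python sketch that classifies a description string by its bracket pattern. This is not part of the HGVS recommendation itself: the helper name and the regular expressions are my own simplifications and cover only the three phasing forms discussed here, not the full HGVS grammar.

import re

def classify_phasing(desc: str) -> str:
    # Very rough classification of an RNA allele description (illustrative only):
    #   r.[var1;var2]   -> variants on one allele (in cis)
    #   r.[var1];[var2] -> variants on two alleles (in trans)
    #   var1(;)var2     -> phase unknown
    body = desc.split(":", 1)[-1]            # drop an optional reference-sequence prefix
    if "(;)" in body:
        return "phase unknown"
    if re.search(r"\];\[", body):
        return "two alleles (in trans)"
    if re.search(r"\[[^\]]*;[^\]]*\]", body):
        return "one allele (in cis)"
    return "single variant or unrecognised form"

assert classify_phasing("LRG_199t1:r.[76a>u;103del]") == "one allele (in cis)"
assert classify_phasing("LRG_199t1:r.[76a>u];[103del]") == "two alleles (in trans)"
assert classify_phasing("NM_004006.2:r.76a>u(;)103del") == "phase unknown"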
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.19/warc/CC-MAIN-20230923162848-20230923192848-00259.warc.gz
CC-MAIN-2023-40
5,688
44
https://www.worldofdb2.com/events/learn-more-about-db2-11-for-z-os-data-sharing-and-continuous-avai
code
REGISTER NOW http://ibm.biz/DB2DataSharing DB2 11 for z/OS Data Sharing and Continuous Availability enhancements Abstract: This session will introduce and discuss the valuable and much-needed major enhancements in DB2 11 related to continuous availability, including data sharing. The Data Sharing enhancements will include: group buffer pool castout, CF DELETE_NAME, Restart light, locking, indexing and DSNB355/356I messages. The Availability enhancements will include: BIND/REBIND/DDL/Online REORG break-in with persistent threads, IFCID 306 support for old compression dictionaries, online alter partition limit keys, deferred alter PIT recovery, DROP Column, Workfile instrumentation, defer define object processing, extended RBA/LRSN, and auto clean-up of pseudo-deleted index entries. John Campbell - IBM DE Florence Dubois - IBM STSM
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00178.warc.gz
CC-MAIN-2023-14
841
5
https://discourse.psychopy.org/t/carrying-participant-id-from-qualtrics-to-pavlovia-and-back-to-qualtrics/14880?page=2
code
I do use Sona (in theory – at least I help colleagues use it). You certainly should be able to use the link that you would send from Qualtrics to Sona as the redirect URL in Pavlovia instead. As far as I remember, participants have to click OK after they leave the experiment to save the data, and the redirect follows that. Since you said var=1 in your example, I assumed you had an embedded variable called var. So your URL actually contains fin=1? When they come back to Qualtrics they are going to be at the beginning again, not at the point they left (I think).
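For anyone wiring up something similar, here is a minimal Python sketch of how the return URL pasted into Pavlovia's completion-URL field might be assembled. The survey URL and the parameter names (id, fin) are purely illustrative assumptions to mirror the var=1/fin=1 flags mentioned above; they are not names prescribed by Qualtrics, Pavlovia, or Sona.

from urllib.parse import urlencode

def completion_url(base: str, participant_id: str) -> str:
    # Build a return URL that carries the participant id and a "finished" flag
    # back as query-string parameters (parameter names are hypothetical).
    params = {"id": participant_id, "fin": 1}
    return f"{base}?{urlencode(params)}"

print(completion_url("https://example.qualtrics.com/jfe/form/SV_abc123", "P042"))
# -> https://example.qualtrics.com/jfe/form/SV_abc123?id=P042&fin=1

On the Qualtrics side, the corresponding names would then have to be picked up as embedded data fields, which is where the "beginning again, not at the point they left" caveat above matters.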
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585439.59/warc/CC-MAIN-20211021164535-20211021194535-00272.warc.gz
CC-MAIN-2021-43
567
4
https://www.rajibroy.com/category/publishlater/
code
In between those constant tug of war of “Want to eat something something?” and “No, no, no”.. we managed to go thru a lot of the pictures from the past. Especially those that had my father in law in them. (Who we lost two and a half years back) Good song session by Sudeshna and Avijit. I was the only one who was the audience!!
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488525399.79/warc/CC-MAIN-20210622220817-20210623010817-00270.warc.gz
CC-MAIN-2021-25
336
2
https://forum.inductiveautomation.com/t/troubleshooting-clock-drift-error/88021
code
We are using version 8.1.28, with a server configuration of 16 cores and 48GB of memory, running on VMware with no other software running except Ignition. We have recently encountered frequent clock drift errors, and the server often runs very slowly. By using JMC to analyze the runtime status, we have identified two issues. 1. Under normal circumstances, the CPU usage is consistently around 10%, but we occasionally see CPU spikes; we suspect these spikes are caused by certain threads. How can we pinpoint the cause of abnormal CPU spikes? 2. Regarding garbage collection, as shown in the screenshot, a garbage collection occurred at 13:41:50, causing a 2.361 second pause in the program. We allocated a maximum and minimum of 24GB to the JVM, and the JVM's memory usage has never exceeded 8GB. Why were multiple garbage collections triggered at this time, and how can we avoid this issue? We have added -XX:MaxGCPauseMillis=100 to the Ignition.conf file. Here is the automatically exported JFR file. Could you please help analyze it? Thanks a lot. By the way, could you please explain what the perspective-worker and perspective-queue threads are used for?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818337.62/warc/CC-MAIN-20240422175900-20240422205900-00810.warc.gz
CC-MAIN-2024-18
1,131
6
https://neo4j.com/developer/docker/
code
Neo4j with Docker Important - page not maintained This page is no longer being maintained and its content may be out of date. For the latest guidance, please visit the Neo4j Operations Manual . Running Neo4j with Docker docker run -p7474:7474 -p7687:7687 -e NEO4J_AUTH=neo4j/s3cr3t neo4j # then open http://localhost:7474 to connect with Neo4j Browser What is Docker Docker is a lightweight virtualization mechanism to run single applications or processes in a containerized environment on a Linux host system. It is designed to handle a small piece of functionality in each container and scale according to needs. Docker containers can be used as infrastructure layers, data containers, or configuration providers. The containers are built from images that can be vendor-provided or user-defined. To build a Docker image, you create a specification file ( Dockerfile) to define the minimum-required, dependent layers for the application or service to run. The steps in the Dockerfile describe the operations for adding the necessary filesystem content for each layer. You can run as many Docker instances on your host as your resources allow because each container is isolated from any others. The official Neo4j Docker Image Neo4j provides and maintains official Neo4j Docker images on DockerHub for both Neo4j Community and Enterprise editions. Releases for current and previous versions of the image are also provided. A list of the previous versions is available under the tags section of the DockerHub page. How to use the Neo4j Docker Image There are several ways to leverage Docker for your Neo4j development and deployment. You can create throw-away Neo4j instances of many different versions for testing and running your applications. You can also pre-seed containers with datasets, extensions, and configurations for interaction and processing. The step-by-step instructions on starting Docker containers for Neo4j are given in our how-to guide. There is also documentation in our operations manual on running Neo4j with Docker and how to configure it, run clusters, and handle security. By default, the docker image does not have certificates installed. This means that you will need to disable encryption when connecting with a driver. Evaluating Neo4j on Docker We also use Neo4j on Docker internally for some of our tools and functionality. From building solutions to live demos, deploying Neo4j with Docker is a valuable capability. Probably our best-known examples of Neo4j deployed with Docker containers are the Neo4j Sandboxes. These sandboxes are Neo4j instances in Docker containers running on a shared cloud server. Each sandbox is independent and separated from the others, allowing users to spin up contained environments for trying out and testing Neo4j! In each sandbox use case, we specify certain configurations, data sets, and extensions/plugins to include, and each user’s queries and exploration is specific to that assigned container. Once the life of a Neo4j Sandbox is complete (maximum of 10 days), the container is shut down. If you want to see how Neo4j works in a Docker container, go ahead and create a Neo4j Sandbox. Note that we do have some configuration presets to restrict certain access and limit functionality. Was this page helpful?
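To complement the docker run line above, here is a small sketch using the official Neo4j Python driver to connect to such a container. It assumes the same credentials as the example command (neo4j/s3cr3t on localhost), and turning encryption off reflects the note above that the default image ships without certificates installed.

# pip install neo4j
from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "bolt://localhost:7687",
    auth=("neo4j", "s3cr3t"),
    encrypted=False,  # default image has no certificates installed
)

with driver.session() as session:
    greeting = session.run("RETURN 'hello from Neo4j' AS msg").single()["msg"]
    print(greeting)

driver.close()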
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224649193.79/warc/CC-MAIN-20230603101032-20230603131032-00524.warc.gz
CC-MAIN-2023-23
3,282
22
https://todaysmarch.org/index.html
code
TodaysMarch.org is focused on enabling social activism, first through the creation of sophisticated software as an open-source project, then through deploying the software in the USA and in other sites around the world. Progress requires the dissatisfied do something - Social Activists make it happen! OOPS! We seem to be having email trouble at the moment. We're working on fixing it right now! We have two email lists! Sign up! Our world today is rapidly becoming a world-wide oligarchy, dominated by the ultra-rich and the corporations they control. Our governments are increasingly unresponsive to We, The People and are increasingly under the control of the ultra-rich. Our elections are being stolen right in front of our eyes and our media is remaining mute... Only by taking to the streets in large numbers are we taken seriously; so long as we are trapped in the activities of our own lives, our voices are not heard, even though we are the vast majority. But with over 90% of the non-Internet media in the USA in the control of just six ultra-wealthy, old white men, who hide our activism and prevent us from working together - or even knowing about each others' interests - how are we to even organize, know what's going on, become active ourselves? This project provides some of the answers: We need to use the internet to find each other, organize and become active. We need to learn from each other, organize, organize better, become coordinated. More of us will join in as participation becomes easier and safer, and as we gain tools that help prosecute police and provocateurs who attempt to violate civil rights or disrupt our protests, our protests will become more effective. We need to prevent the media of the ultra-rich from getting away with all their lies. To further these goals, this project will do two things. First, it will provide the world with a USA based web site called TodaysMarch.org (and also MarchToday.org) with the features listed below. Second, this software will be made available to the world via an open-source project. The project will provide these services: - People can find out what is happening in THEIR areas so they can participate. Searches may be based on location, topic / subject, and date. - People can check out what other people are doing in other areas too, so you can perhaps organize something similar in your own area - learn and be energized by other people's protests, use them to create your own, or improve tactics, etc. - March (event) participants can live-stream upload from their mobile devices to document what REALLY went down, how many people were REALLY present, when and where. Participants can record police abuse that the police can't then get rid of by damaging the phone, NOR can they determine what any one person recorded even if they have the phone! Provocateurs there to make the protesters look bad can be outed. - The live-stream both uniquely identifies the recording stream and digitally signs each frame thereby preventing undetectable editing and permitting any edit to be easily proven in very little time. This prevents anyone from distorting the truth about what happened. This can be useful for convicting police or provocateurs of their crimes by showing that any recordings are truly genuine. - User participation will be completely anonymous unless users wish to be known, such as organizers of events wanting to be public while participants may not. 
- Provide event participants with a chat-like venue (similar to typical web-page comment sections - perhaps using Disqus) for discussion of the events - likely three venues per event, before (planning), during ("live"), and after ("what really went down?!" - first-hand-account discussions). To get started, we intend to "steal from the best". For example, we are interested in obtaining the live-streaming software from the ACLU as a starting point as it makes sense to start with something that works and reduce the time it takes to achieve deployment by borrowing as much from what has already been accomplished! As stated above, the software that enables the web site and its goals will be open-source, created by the project or obtained with compatible licensing, and made available to anyone who wants it under standard open-source licensing rules. It will be both designed for being multi-lingual and designed to permit the creation of a distributed web of sites across the globe to help overcome any government intervention. This will have to be well thought out, but part of the idea is that cooperating sites share their data with other sites so no one is vulnerable to having all their data either lost or otherwise mishandled. Your ideas are all welcome! Most importantly, we need YOUR direct help in making this a reality! To join us or provide assistance, please either email us, or sign up to one of our email lists.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487620971.25/warc/CC-MAIN-20210615084235-20210615114235-00383.warc.gz
CC-MAIN-2021-25
4,880
18
http://www.linuxforums.org/forum/applications/92919-ntp-should-work-right.html
code
NTP should work, right? This is pretty easy. Here's the ntp.conf file for my timeserver: server 0.debian.pool.ntp.org iburst server 1.debian.pool.ntp.org iburst restrict 0.debian.pool.ntp.org mask 255.255.255.255 nomodify notrap noquery restrict 1.debian.pool.ntp.org mask 255.255.255.255 nomodify notrap noquery restrict 188.8.131.52 mask 255.255.255.0 nomodify notrap restrict 127.0.0.1 3 May 07:33:12 ntpdate: no server suitable for synchronization found As it turns out, this was all iptables' doing. As a last ditch effort, I asked iptables to chill for a while and all of a sudden everything worked. I thought iptables would be mad, but we shook hands and then got beer after work.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123484.45/warc/CC-MAIN-20170423031203-00073-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
811
8
https://community.intel.com/t5/Intel-Business-Client-Software/Howto-traverse-CIM-HostedAccessPoint-with-WinRM/td-p/819159
code
Out of curiosity, why are you attempting to do this with WinRM at the command line instead of building off of the examples in the SDK? They can be configured to use WinRM instead of the DotNetWSman client provided in the library, if you would like. It'd be a lot easier than trying to build the requests directly at the command line with WinRM. In my previous response to your question about how you could load certificates (here: http://software.intel.com/en-us/forums/showthread.php?t=77240 ), I listed some of the other mechanisms you could use to help build the calls. The flow you described has an SDK sample listed at the bottom (it's in the Windows\Intel_AMT\Samples\WS-Management\GeneralInfo directory in the SDK), which you can use as a starting point. I'd recommend looking at that code instead of trying to implement with WinRM at the command line.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359082.48/warc/CC-MAIN-20210227174711-20210227204711-00176.warc.gz
CC-MAIN-2021-10
842
2
http://wakacjeznami.info/air-traffic-controller/computer-programming-graduate-research-paper-example
code
Computer Programming graduate research paper example Advice on graduate studies, research, writing, and careers in computer science. How to Read a Research Paper · How to do research in Computer Science by. Free computer programmer papers, essays, and research papers. programmers write the detailed list of instructions the computer will follow in the software (Great Sample Resume). . [tags: programmers, computer science,web master ]. What are currently the hot topics in computer science research? exams and testing, essay grading, generation of multiple-choice questions. You: Computer Programming graduate research paper example |Environmental Health things to do in princeton today||Administrative Assistant review of an essay| |Business Administration various research topics||NSDI Best Paper : Passive Wi-Fi: Bringing Low Power to Wi-Fi Transmissions. Oftentimes, it is human nature to resist change no matter what the situation in which the change is taking place. Online Resources on Careers in Computer Science. Online Resources on Graduate Study in Computer Science. The best way to understand how well our writers do their work is to view sample essays written by them. Vasant HonavarDepartment of Computer ScienceIowa State University. How to Choose a PhD advisor by Michael Loui.| |Computer Programming graduate research paper example||Most banks require their programers to wear a suit and attend an office during normal work hours. Computers have moved into every nook and cranny of our daily lives. SYNT Best Student Paper : Leveraging Parallel Data Processing Frameworks with Verified Lifting. Elements of StyleWilliam Strunk. Technical Communication and the Computer Programmer. In the beginning they were mainly used for keeping financial records by banks and insurance companies, and for mathematical computations by engineers and the U. SIGN UP Powered by.|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806070.53/warc/CC-MAIN-20171120145722-20171120165722-00004.warc.gz
CC-MAIN-2017-47
1,882
8
https://lairware.com/guide/fix_album_artist_listed_twice.html
code
Fix for same album or artist showing twice in iTunes Many times I've noticed that the same artist shows up twice on my iPhone, and sometimes I've seen three or more of the same album listed, each with only some of the songs. Sometimes they have obviously wrong spelling or punctuation, but sometimes they look exactly the same! Other times, you only notice when a song is missing while playing an album, even though that song otherwise appears correctly when you look for it in your library. There are a few different possible causes, and different approaches to fixing them. #1: Simple spelling and punctuation The most obvious cause, for example Foo Fighters' album "Color & The Shape", "Colour & The Shape", and "Color And The Shape". It's really no surprise that if your songs have more than one of these, they'll be listed separately in your library. #2: Extra space in the name Like cause #1, but you can't see it. iTunes has become better over the years at avoiding this kind of problem automatically, but this can be very frustrating and mystifying when it does happen. #3: Different "album artist" Songs have separate metadata for "artist" and "album artist". Even if the "artist" is exactly right, if your song's "album artist" is different for one track it will appear as a distinct album from the others. It's not common to have something set for "album artist", so the Album Artist column isn't visible by default in iTunes song listings. As a result, this cause can be difficult to notice. #4: Compilation flag In the metadata for each of your songs, there is a "compilation" checkbox used to indicate that the track's album is actually a compilation of various artists. If this isn't set right, you're going to see tracks showing up in strange places. Solution #1: Fix manually in iTunes Tell iTunes to show "My Music", and change the list type to "Songs" in the upper right (as opposed to Artists, Albums, etc). Type part of the artist name or album name into the Search field, and click the "Filter music for blah" item in the resulting pop-up window. Ensure that what you've typed is common to everything on your album such that you can see all of its tracks in the resulting list. Select all of the tracks on the problematic album, using command-click (Mac) or control-click (Windows) to add individual tracks to the selection when they're not all contiguous. Choose iTunes' File > Get Info menu command. If you're on Windows, you may need to make the menu visible with Ctrl+B first. You do want to edit multiple items, so click "Edit Items" if it asks. In the resulting window, look for greyed-out "Mixed" values in the Artist, Album, and Album Artist fields, or a "dashed" compilation checkbox. Those are probably the source of your trouble, so type in the correct value to replace the "Mixed" or turn on/off the compilation checkbox as necessary. While typing, iTunes will suggest likely values, though it can be annoying if it suggests something that you don't want. Finally, click OK and that problem should be gone! Wash, rinse, and repeat for any other problems of the same nature. The upside: - It's completely free The downside: - Must find each problem manually - Must identify the cause of each problem manually - Must fix each problem manually Solution #2: Use Song Sergeant Unsurprisingly, there exists software that was designed to find and fix problems like this. It's free to scan your library for problems, and you might be surprised how many inconsistencies (and other issues) it finds.
I'm showing the Mac version here, but there's also a version for Windows. After it finishes scanning your library, you can peruse the list of inconsistently named artists and albums. It uses some common sense to automatically choose a preferred name for each one, showing it in [brackets] like this, and if you're going to use Song Sergeant to actually fix these problems you can double-click a name to make it the preferred one. You can also go crazy and let it use "sounds like" logic to find artist and album names that are misspelled and you never noticed. Be careful with this option though, as there are many legitimately similar sounding artist names that it will report. Luckily there's a "related songs" section at the bottom of Song Sergeant's window that you can use to have a look at album artwork for clues to what the artist/album name really should be. The upside: - Finds many kinds of inconsistencies automatically - Can fix problems en masse instead of one-by-one - Unique plural and "sounds like" matching The downside: - Not free to fix the found problems automatically - Can't automatically find "compilation"-caused issues - Fixes other kinds of problems you may not care about Here are links to download Song Sergeant for macOS and Windows. When you start it, it will automatically load your iTunes library and find problems. You may need to turn on an "XML" sharing feature in iTunes in order to let Song Sergeant read your library. When it's done scanning, just click the "Inconsistencies" icon at the top of the window. Multiple entries in your library for an artist or an album are frustrating, especially when it's not obvious why. It's even more frustrating when playing an album and it misses your favorite song and you have to play it separately! This isn't a problem you just need to "live with", it's not hard to sort out!
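For readers who prefer to script this kind of cleanup outside iTunes, here is a minimal Python sketch using the third-party mutagen library. This is not one of the article's two solutions, just an alternative approach; the folder path and the canonical album-artist value are hypothetical and only illustrate normalizing the "album artist" tag described under cause #3.

# pip install mutagen
from pathlib import Path
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

MUSIC_DIR = Path("~/Music/rips").expanduser()   # hypothetical folder
ALBUM_ARTIST = "Foo Fighters"                   # the value you want on every track

for mp3 in MUSIC_DIR.rglob("*.mp3"):
    try:
        tags = EasyID3(str(mp3))
    except ID3NoHeaderError:
        continue                                # skip files without an ID3 tag
    current = tags.get("albumartist", [""])[0]
    if current.strip() != ALBUM_ARTIST:
        tags["albumartist"] = ALBUM_ARTIST
        tags.save()
        print(f"fixed {mp3.name}: {current!r} -> {ALBUM_ARTIST!r}")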
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058450.44/warc/CC-MAIN-20210927120736-20210927150736-00505.warc.gz
CC-MAIN-2021-39
5,336
32
https://forums.sketchup.com/t/how-can-update-rendering-options-of-page/228069
code
We unfortunately have no direct API to change many of the scene properties. For now you typically have to update the current view and update the scene. As Dan says, the rendering options aren't saved directly to the Scene but to the Style. This means you can activate the style, update it and switch back to the previous style without changing the active scene, but it also means changes to that Style apply to any Scenes using the same style. Yes, I need to use different styles in different scenarios, but I cannot add a style based on the current style. The API only allows adding a style through a style file (Sketchup::Styles#add_style). I hope to add a style based on the user's current style that only turns off section display and section cutting. And I also hope to have this option when updating a page. … rename it (and perhaps change the description), … Then copy the style attributes (as needed) from the user's current style to the new style by doing what I said above. Then make your changes to the user's current style, then … Set the new style to the selected_style and call styles.update_selected_style. Then undo your changes to the current style. Lastly, use the new style by setting a scene to use it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655092.36/warc/CC-MAIN-20230608172023-20230608202023-00283.warc.gz
CC-MAIN-2023-23
1,248
11
http://cognito.co.nz/manual/moneyworks_calculations_file_getmark.html
code
Result Type: number Definition: Gets the current position (byte offset from beginning) in the file. Availability: available within MWScript handlers. File_Close: File functions for creating/reading/writing text files File_GetLength: File length in bytes File_Move: Rename/move a file File_Open: Open a file File_Path: Get the full path of an open file File_Read: Read text from current position File_ReadLine: Read to end of line from current position File_SetMark: Set Current read/write position File_Write: Write text at current position WriteToTempFile: Create a temp file containing the string
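The MWScript file functions listed above map closely onto position-based file I/O in most languages. As a rough analogy only (this is Python, not MWScript), tell() and seek() behave like File_GetMark and File_SetMark: the current position is a byte offset from the beginning of the file. The file name below is hypothetical.

with open("example.txt", "wb+") as f:           # hypothetical file
    f.write(b"first line\nsecond line\n")
    f.seek(0)                                   # like File_SetMark with offset 0
    first = f.readline()                        # like File_ReadLine
    mark = f.tell()                             # like File_GetMark: offset after the first line
    print(mark)                                 # 11, i.e. len(b"first line\n")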
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178385529.97/warc/CC-MAIN-20210308205020-20210308235020-00135.warc.gz
CC-MAIN-2021-10
598
13
http://pbw.spaceempires.net/support/kb/2010/03/what-is-a-mod
code
When looking at the details of PBW2 games, you will notice that each game lists a "mod" in its details. But just what is a mod? A mod is a custom, player-created set of game rules. Some mods seek to add new features to the stock game experience. Others seek to re-balance perceived problems. The most interesting mods, however, are those that start everything over from scratch and provide a completely new experience. Some of these mods will seek to implement an existing sci-fi universe (e.g. Babylon 5, Star Trek), while others implement the vision of their author (e.g. Carrier Battles, Proportions, Adamant). Many players enjoy using mods to provide variety to their gaming. Installing a mod varies slightly depending on which game you are playing. Space Empires IV Each mod exists as its own subfolder inside the installation directory. If you installed SE4 to "C:\Games\SEIV", you would typically install a mod by extracting it directly into this directory. Thus, after installing the "CarrierBattles" mod, you should have something like: C:\Games\SEIV C:\Games\SEIV\CarrierBattles C:\Games\SEIV\CarrierBattles\Data C:\Games\SEIV\Data To load SE4 with a mod, you should get a mod launcher. Space Empires V Each mod exists as its own subfolder inside the "GameTypes" directory. If you installed SE5 to "C:\Games\SEV", you would typically install a mod by extracting it directly into the "GameTypes" directory. Thus, after installing the "Balance Mod" mod, you should have something like: C:\Games\SEV C:\Games\SEV\Data C:\Games\SEV\GameTypes C:\Games\SEV\GameTypes\Balance Mod C:\Games\SEV\GameTypes\Balance Mod\Data C:\Games\SEV\GameTypes\Standard SE5 SE5 includes the ability to load mods internally, making a mod launcher unnecessary. You can find most mods available for download on SpaceEmpires.net: You can also find detailed information on many mods on the Space Empires Wiki:
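The extraction step described above can also be scripted. The sketch below is only an illustration under the assumptions of a zipped mod and the example SE4 install path from the text; the archive name is hypothetical and this is not an official installer or launcher.

import zipfile
from pathlib import Path

SE4_DIR = Path(r"C:\Games\SEIV")                      # example install path from the text
MOD_ZIP = Path(r"C:\Downloads\CarrierBattles.zip")    # hypothetical downloaded mod archive

# Mod archives are typically laid out as <ModName>/Data/..., so extracting into
# the game directory yields C:\Games\SEIV\<ModName>\Data alongside the stock Data folder.
with zipfile.ZipFile(MOD_ZIP) as archive:
    archive.extractall(SE4_DIR)

print("Folders now present:", [p.name for p in SE4_DIR.iterdir() if p.is_dir()])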
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506029.42/warc/CC-MAIN-20230921174008-20230921204008-00587.warc.gz
CC-MAIN-2023-40
1,889
13
http://berchman.tumblr.com/archive
code
I received an email forward today and wanted a way I could share it with more people. I searched the Internet to no avail. I could find some vague references, but no images, or documentation. I do not know who collected or curated these images. I know not the photographer, models, or the designers of any of this work. I am more than happy to give credit where credit is due. So here it is.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708142388/warc/CC-MAIN-20130516124222-00089-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
391
4
https://audiojungle.net/item/fun-positive-pack/18931437
code
Fun Positive Pack is a music pack containing three happy and joyful guitar tracks with woodwinds, piano, marimba, claps and drums. Full of inspirational atmosphere, sentimental and beautiful mood. Suitable for all happy, joyful and motivational themes. Ideal for YouTube videos, cooking videos, cat videos, commercial, business and corporate use or productions aimed at children. Includes the following 3 tracks: Includes the following MP3 & WAV zip tracks: 1. Children’s Party (Full version) 1:07, Children’s Party (Short version)) 0:30, Children’s Party (Loop version)) 0:55 208 bpm (full version from 0:00, short version from 1:07, loop version from 1:38 in the preview) 2. New Game (Full version) 1:05, New Game (Short version) 0:23, New Game (Loop version)) 0:51 185 bpm (full version from 2:35, short version from 3:40, loop version from 4:03 in the preview) 3. Kids (Full version) 1:15, Kids (Short version) 0:27, Kids (Loop version)) 1:12 158 bpm (full version from 4:57, short version from 6:12, loop version from 6:40 in the preview)
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00151.warc.gz
CC-MAIN-2018-51
1,049
7
http://andymindler.com/project/ios-personal-project
code
iOS PERSONAL PROJECT In my free time I create my own games using Unity and C#. One game that I worked on was an iPhone SHMUP. The game was put on hold once I joined Visual Concepts (2K Sports), as I didn't have enough time to properly dedicate to the project. I had originally intended upgrades to be a tree- and node-based system and had designed many of the elements out and created an in-engine mockup. After testing it out on the phone, it became apparent that the small screen (at the time it was the iPhone 5, which was fairly small by today's standards) and touch-based controls were not well suited for the intricacies of the node system. Touch screens don't have hover states and easy-to-see context windows from hover; this made the node tree fairly obtuse and cumbersome to use when trying to make decisions. After some testing and re-designing, I found that scrolling lists were a far more user-friendly approach to the upgrade menus. Players could quickly glean information about each upgrade and make decisions with only a few swipes and a tap rather than digging through multiple layers of modal boxes.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474808.39/warc/CC-MAIN-20240229103115-20240229133115-00338.warc.gz
CC-MAIN-2024-10
1,110
5
http://www.xboxachievements.com/forum/showpost.php?p=6179426&postcount=15
code
Can't really vote since I have not played the game in over a month and the online was only played for something like an hour. But if you can do whatever the microtransactions help with, without actually using them, then eh, I don't see much of a problem with it. Yes, I dislike microtransactions, but if it's not required to finish a game, then let it be. Originally Posted by BiggD People are too sensitive on this forum.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450659.10/warc/CC-MAIN-20151124205410-00098-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
422
4
http://stackoverflow.com/questions/7387757/assigning-the-hint-value-in-formtastics-range-slider-option-to-be-values-in-t?answertab=active
code
I'm currently attempting to assign the value of the :hint property of formtastic to the options that I've supplied it in the array when using the :range option. So as you drag the slider the numbers, in this case 1-5 are displayed on screen via the :hint. Is this possible? I'd appreciate any help and have included some basic sample code below to illustrate my point. I'm using rails 3.1 and formtastic 2.0.0 rc5 <%= semantic_form_for @ratings do |form| %> <%= form.inputs do %> <%= form.input :enjoyment, :as => :range, :collection => [1,2,3,4,5], :default => 3, :hint => '<%= enjoyment.value.here %>' %> <% end %> <%= form.buttons do %> <%= form.commit_button %> <% end %> <% end %>
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447558417.25/warc/CC-MAIN-20141224185918-00048-ip-10-231-17-201.ec2.internal.warc.gz
CC-MAIN-2014-52
685
3
https://quickbooks.intuit.com/learn-support/en-sg/manage-your-account/can-user-login-under-same-user-id-which-email-address-for/01/450736/highlight/true
code
You can add a user to multiple companies with the same email address, because a user can be invited to access multiple companies using only one email address. But please know that a user can only have a single user ID, since a user ID corresponds to a single email address. To do so: Here's an article you can read to learn more about how you can add and manage a user's access: Add, delete, or change user access. Keep me posted in the comment section down below if you have any other questions. I'm always around, happy to lend a helping hand.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00710.warc.gz
CC-MAIN-2022-05
540
5
https://dubai.sae.edu/training-courses/games-development/
code
This Games Development Short Course introduces students to the basics of the development of video games on Unity3D and C# Software. The focus of this short course is quick prototyping and execution of games for PCs or Smartphones. 17 years and above Anyone interested in developing video games for a variety of platforms Individuals interested in developing interactive software for the purpose of games for various types of simulation or scientific visualization Basics of the Unity interface Introduction to Game Objects and Components Mono Behaviors and Scriptable objects Basics of C# programming in Unity3D User Interface programming for Unity3D Vector-Math for Games Programming Programming features of C#, arrays Lighting, rendering techniques Physics simulation using rigid bodies Final project definition Final project scene finalization Final project character and controls finalization Final project submission Available On-Campus and LIVE Online How can we help you? NOWLearn more about our courses, ask a question or request more information. SAE boasts world-class facilities and collaborative spaces geared towards the creative media industries, as well as flexible remote learning options. Why not check SAE out for yourself and book a campus tour? When you book a campus tour, you are given an opportunity to speak to an SAE Course Advisor. They can address your needs and help answer your questions. We look forward to welcoming you!
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00313.warc.gz
CC-MAIN-2023-50
1,451
21
http://ergoemacs.org/emacs/blog.html
code
Keyboardio, Computer History Museum, SGI the Keyboardio Keyboard is going to have a demo at Computer History Museum on . That's neighbor to Google Headquater Googleplex, in Mountain View, California. If you want to touch it, register at http://www.eventbrite.com/e/tech-tinsel-holiday-product-demo-night-chm-tickets-19571449733 here's my analysis of its design. Keyboardio Keyboard Model 01 ⌨ by the way, the building that's Computer History Museum has a rich tech history. It was a brand new office building of SGI's headquarter. Do you know what SGI is? SGI was the biggest name in computer graphics back in 1990s. They make IRIX, which is their version of Unix (which today we pretty much just have Linux). Irix, HP-UX, AIX, and even Apple, had a unix OS called A/UX. SGI also makes very expensive, bleeding-edge, computer hardware for creating 3D graphics (back then, desktop unix computer is called “workstation”. There's not much of a PC yet.). SGI also was the owner of Cray for a while, which is the world's most powerful super computer. SGI was the one who created the special effects of the movie Jurassic Park (1993). At the time, it was a break-thru. SGI makes 3D modeling software, at various times called Alias, Wavefront, Maya. (now owned by AutoDesk, maker of AutoCAD) When i was student around 1992, waiting for bus, i ogle at the buildings of computer companies, where, offices are filled with very expensive workstations, and lots of geek toys. I was thinking, when could i ever be able to work in such a place. Since 2000s, PC becomes cheaper and powerful, and 3D modeling software sprung up left and right, and free Linux is spring up as unix server replacement. That killed SGI. Today, the one thing from SGI that's still around is OpenGL, and, isn't particularly healthy. The once brand-new building of SGI, changed owner a few times, and now it's Computer History Museum. O, sweet history. Sun Microsystems, the creator of Java, once also reined the computing world with their Solaris OS and server hardware. Sun is also dead, bought by Oracle, which never had any cool factor. Emacs: Hard Wrap Lines, fill-paragraph, unfill-paragraph (major update) Logitech G710+ Mechanical Keyboard now has a MX Cherry Blue version! though, also discovered, there's a problem of this keyboard under linux. It spams 666! (but works fine on Mac) Logitech G710+ Mechanical Keyboard if your main machine is linux, i don't recommend it because of this. see my review for detail Logitech G710+ Mechanical Keyboard If you are buying a keyboard or mouse, check out 〔The Latest Thinking on Computer-Related Pain By Ingfei Chen. @ http://www.nytimes.com/ref/health/healthguide/esn-repetitivestrain-expert.html?pagewanted=print〕 (local copy computer_use_hand_pain__David_M_Rempel_2008.txt) great article. Note, it's published in 2008. Linux: Mouse Hover-Click Emacs Lisp: Insert Brackets by Pair (updated) 3D Modeling, Keyboard Design, Clojure, and Emacs truely enjoyed this video, at clojure conf. You see 3D modeling software, 3D printing, keyboard design/making, using clojure to control, and finally, emacs too! see his blog at 〔3D Printing With Clojure By @Adereth. @ http://adereth.github.io/blog/2014/04/09/3d-printing-with-clojure/〕 reddit discuss at https://www.reddit.com/r/emacs/comments/3twqhh/3d_modeling_keyboard_design_clojure_and_emacs/ ergoemacs-mode on hackernews. https://news.ycombinator.com/item?id=10586791. Thanks lelf for posting. Emacs: Split Windows Basics (new page, for beginners.) 
Emacs Lisp: List vs Vector (on its own page) Emacs Lisp: Copy Rectangle Region to kill-ring (complete rewrite) Emacs: Edit Column Text, Rectangle Commands (old article. major update.) Setting Up Emacs lots minor updates and improvements. - How to Set Emacs's User Interface - Emacs: Save/Restore Opened Files, Windows Configuration: desktop-mode - Emacs: Set Default Window (frame) Size - Emacs: Set Color Theme - Emacs: Manage Split Windows - Emacs: Tabs, Space, Indentation Setup - Emacs: Save Cursor Position - Emacs: Stop Cursor Going into Minibuffer Prompt - Emacs: How to Set Font Emacs: Set Color Theme (updated. 2 more screenshots) in the movie 《Tron: Legacy》, there's not just emacs eshell, but also vi! Emacs eshell and vi in Movie TRON thanks to https://disqus.com/by/michael_lockhart/ for telling me it. Emacs: ParEdit, Smartparens, Lispy, and ErgoEmacs, xah-fly-keys (complete rewrite). Also see reddit https://www.reddit.com/r/emacs/comments/3sfmkz/could_this_be_a_pareditsmartparens_killer/ Emacs: Select Line, Block, in Quote, Extend Selection (major update) Emacs: How to Define Keys (major update.) keybinding, is the essence of emacs. In emacs, everything is a command, and the way you call them, is keys. Even when you type a letter, it calls a command A little emacs history. Old article. GNU Emacs and XEmacs Schism emacs, restore opened files, remember cursor position, save minibuffer history ;; restore opened files (desktop-save-mode 1) ;; save minibuffer history (savehist-mode 1) ;; remember cursor position in file (require 'saveplace) (setq-default save-place t) How to Set Emacs's User Interface updated. new Unicode emoticon font, Google Noto. see Download Free Unicode Fonts Keyboard Monster one thousand function keys. Gnu Emacs New Leader: John Wiegley Wiegley as maintainer was discussed in the gnu emacs dev mailing list for the past couple of months, hundreds of messages. John Wiegley is the author of eshell〔➤ Emacs: M-x eshell〕, a compiler engineer for like 20 years coding in C++, and is now a professional haskell programer. He lives in emacs. Here's a couple of video interviews of John. John Wiegley on Emacs Lisp and Haskell John is a extreme emacs enthusiast, and his primary platform is Mac with strong desire to make emacs better on Mac too out of the box, and he is a very capable programer, and also a sociable person. I think John will bring a lot good things to emacs. Thanks John. using 4 mouses at the same time is heaven. Emacs Lisp: Shrink Whitespace Command (minor update) hands-on trackball review, for those of you Repetitive Strain Injury hand-on mouse reviews. One Logitech G600 Gaming Mouse Review, and the other Logitech Trackman Marble Mouse my mouse reviews are at my Xah's Programing Blog. Subscribe there if interested. thanks. Emacs: Move Cursor to Brackets, Quotes Updated code for moving cursor to quotes. You should give them a key, such as 【Ctrl+7】【Ctrl+8】. in Xah Fly Keys, the keys are 【[】 【]】 in command mode. Also, i got several complaints about the nav bar animation on this site. Now it's gone. Instead, the nav bar is at the bottom. And, the home page ErgoEmacs is also cleaned up. Emacs: Rename Files Interactively old feature. Super useful. I use it few times a day. Does Lisp Macro Change Syntax? What's Lisp Reader?. A look at what Racket, Clojure, Common Lisp, says. Tutorial, when done well, there are still many perspectives, approach, and writing style. Which one you like depends on your preference. Have a look at mine. Thanks. 
emacs elfeed still have relative link bug in atom feed. ☹ https://github.com/skeeto/elfeed/issues/37 by the way, elfeed is a excellent RSS reader. I've modified my blog so it uses full URL instead of relative links. Emacs: Define Key Sequences (Create Prefix Key, Leader Key) (major update) Emacs: Turn Off Auto Backup; Set Backups into a Directory; How to Delete Backup Files Emacs Lisp: Make Backup of Current File (added a new function, that does backup and save together) Emacs: New Empty Buffer. Minor updated page. Extremely convenient. Recommended. for ido, you don't need extra package to make it display vertical. See Emacs: Switch Buffer, ido-mode (minor update) Emacs: List Buffers. emacs basics. Minor update. racket lisp language, and emacs racket-mode, is superb. Racket: Using Emacs racket-mode Emacs Lisp: Replace Invisible Unicode Chars it's been a month! New version of tutorial is out. Buy, recommend, Thanks! Buy Xah Emacs Tutorial. Updated version will be sent out tomorrow. search and highlight text in emacs Emacs: isearch Current Word. This is the most useful. It replaces isearch 50% of time. Emacs: Search / Highlight Words (minor update) learn racket scheme lisp in 5 minutes. Xah Racket Notes Emacs: Dired Customization (updated. dired-hide-details-mode screenshots and how to set it as default) reddit new subreddit, for emacs-fu there's a new subreddit, for emacs-fu, vim-golf, like of topics. Such as, i do such and such this way, how do you do it? if you use evil-mode, ergoemacs-mode, xah-fly-keys, god-mode, hydra, key-chord, etc, then it's for you. learn a new thing. In dired, try thx to Rene Froger see also Emacs: File Management (dired tutorial) emacs kungfu fight-to-the-death! Emacs: Move Cursor to Brackets, Quotes (updated) xah-backward-left-bracket is my 20th most frequently used command. here's my “keyfreq-show” output: 1 468031 25.26% self-insert-command 2 127493 6.88% next-line t 3 101031 5.45% mwheel-scroll 4 97262 5.25% previous-line c 5 89803 4.85% subword-forward r 6 77884 4.20% subword-backward g 7 72057 3.89% xah-beginning-of-line-or-block d 8 51249 2.77% xah-end-of-line-or-block s 9 38704 2.09% isearch-printing-char 10 33349 1.80% xfk-command-mode-activate 11 33128 1.79% yank k 12 28310 1.53% newline RET 13 27811 1.50% xfk-insert-mode-activate 14 27113 1.46% delete-backward-char e 15 25247 1.36% save-buffer b 16 24240 1.31% xah-cut-line-or-region j 17 22550 1.22% xah-close-current-buffer <f14> 18 20460 1.10% xah-backward-left-bracket m 19 20077 1.08% subword-backward-kill . 
20 15630 0.84% xah-shrink-whitespaces , 21 14205 0.77% xah-fly-command-mode-activate <home> 22 13218 0.71% handle-switch-frame <switch-frame> 23 13191 0.71% subword-kill p 24 12613 0.68% backward-char h 25 12275 0.66% open-line o 26 12232 0.66% isearch-repeat-forward 27 11897 0.64% xah-fly-insert-mode-activate SPC 28 11256 0.61% undo-tree-undo f 29 10782 0.58% forward-char n 30 10606 0.57% isearch-exit 31 10100 0.55% isearch-forward x g 32 9277 0.50% xah-forward-right-bracket v 33 9234 0.50% org-self-insert-command 34 8317 0.45% xah-extend-selection 1 35 7992 0.43% xah-copy-line-or-region q 36 7781 0.42% other-window w 37 7389 0.40% delete-other-windows 3 38 7068 0.38% ido-exit-minibuffer 39 6628 0.36% delete-char u 40 6046 0.33% set-mark-command y 41 5657 0.31% xah-select-current-line 2 42 5452 0.29% xah-next-user-buffer <f12> 43 5295 0.29% xah-html-wrap-html-tag 44 5219 0.28% xah-select-text-in-quote 9 45 5104 0.28% xah-browse-url-of-buffer 46 4945 0.27% exit-minibuffer 47 4935 0.27% xah-select-current-block 6 48 4718 0.25% xah-open-file-path-under-cursor Many emacs experts, use paredit, smartparen, ace-jump, expand-selection, and evil-mode, god-mode, hydra. i don't use any of that. Just Xah Fly Keys I openly challenge anyone to efficiency contest, in real time. email me or tweet to me on social networks. We can setup a private video chat just to compare notes. challenge, as in kungfu fight-to-the-death! Emacs Lisp: Run Current File (updated code. Now, it'll prompt for save for non-file buffers) Past Articles by Date • 2015-09 • 2015-08 • 2015-07 • 2015-06 • 2015-05 • 2015-04 • 2015-03 • 2015-02 • 2015-01 • 2014-12 • 2014-11 • 2014-10 • 2014-09 • 2014-08 • 2014-07 • 2014-06 • 2014-05 • 2014-04 • 2014-03 • 2014-02 • 2014-01 • 2013-12 • 2013-11 • 2013-10 • 2013-09 • 2013-08 • 2013-07 • 2013-06 • 2013-05 • 2013-04 • 2013-03 • 2013-02 • 2013-01 • 2012-12 • 2012-11 • 2012-10 • 2012-09 • 2012-08 • 2012-07 • 2012-06 • 2012-05 • 2012-04 • 2012-03 • 2012-01 • 2011-12 • 2011-11 • 2011-10 • 2011-07 • 2011-05 • 2011-01 • 2010-10 • 2010-06 • 2009-12
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398457799.57/warc/CC-MAIN-20151124205417-00209-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
11,796
113
https://structureresearch.net/2019/04/18/e-shelter-acquires-land-for-second-berlin-data-centre/
code
e-shelter acquires land for second Berlin data centre e-shelter acquired land in Berlin on which it will build its second data centre in the German capital.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347396163.18/warc/CC-MAIN-20200527204212-20200527234212-00509.warc.gz
CC-MAIN-2020-24
471
6
https://github.com/modera/ModeraActivityLoggerBundle
code
Bundle provides facilities that let you log different domain events that occur during your application logic execution; later you are able to query those logged events (they are called Activities in the scope of this bundle). The point here is that later those activities can be reviewed by ordinary users to see what has been happening in the system. Unless you need to query activities in your application logic, please rely on the generic Psr LoggerInterface to log your activities. Add this dependency to your composer.json: Update your AppKernel class and add this: To log your activities you will be using an implementation of the standard Psr\Log\LoggerInterface interface, which means that your application won't directly depend on this bundle but rather will rely on a generic interface that later you can switch (say that you decided to use some default Monolog log handler) if needed. Bundle declares two additional interfaces - ActivityManagerInterface and Modera\ActivityLoggerBundle\Model\ActivityInterface. The former extends Psr's LoggerInterface and adds one method - "query"; this method can be used to query activities. Activities returned by this method are implementations of ActivityInterface. By default the bundle provides one implementation of ActivityManagerInterface which stores activities using Doctrine ORM's EntityManager. This bundle is under the MIT license. See the complete license in the bundle: Resources/meta/LICENSE
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585504.90/warc/CC-MAIN-20211022084005-20211022114005-00695.warc.gz
CC-MAIN-2021-43
1,433
14
https://jdhao.github.io/2020/11/11/nifty_nvim_techniques_s8/
code
This is the 8th post of my post series on nifty Nvim/Vim techniques that will make my editing experience easier. Click here to check other posts in this series. - Series 11: https://jdhao.github.io/2021/11/22/nifty_nvim_techniques_s11/ - Series 10: https://jdhao.github.io/2021/06/17/nifty_nvim_techniques_s10/ - Series 9: https://jdhao.github.io/2021/01/07/nifty_nvim_techniques_s9/ - Series 7: https://jdhao.github.io/2020/09/22/nifty_nvim_techniques_s7/ - Series 6: https://jdhao.github.io/2019/12/21/nifty_nvim_techniques_s6/ - Series 5: https://jdhao.github.io/2019/11/11/nifty_nvim_techniques_s5/ - Series 4: https://jdhao.github.io/2019/09/17/nifty_nvim_techniques_s4/ - Series 3: https://jdhao.github.io/2019/05/14/nifty_nvim_techniques_s3/ - Series 2: https://jdhao.github.io/2019/04/17/nifty_nvim_techniques_s2/ - Series 1: https://jdhao.github.io/2019/03/28/nifty_nvim_techniques_s1/ Use Neovim as man pager The default pager used by the man command on *nix lacks syntax highlighting and is not good for reading and searching. Why not turn nvim into the man pager? Just add the following setting to your shell config file:
if [[ "$(command -v nvim)" ]]; then
    export EDITOR='nvim'
    export MANPAGER='nvim +Man!'
    export MANWIDTH=999
fi
Close other windows quickly? When we are in a certain window, we may want to close all other windows. We may go to the other windows and close them with :quit. It is a bit cumbersome. The :only command is a much nicer way. It will close all the other windows except the one we are in. There is also an equivalent shortcut: <C-W> o (that is, Ctrl-W followed by o). Execute a macro on several lines Macros are a powerful way to edit texts with similar structures. To execute a macro on several lines, we can use a line range if the lines are continuous. For example, to execute macro a on lines 10 to 15, use :10,15normal @a. Or we can visually select the lines and run :'<,'>normal @a (note that if you select these lines and then press :, Nvim will insert the '<,'> range automatically). To execute a macro only on lines matching a certain pattern, run a command of the form :g/pattern/normal @a. Copy URL under cursor into a register? We can use the expand() function to get the URL under cursor (see :h <cfile>). To copy the URL to the unnamed register, use the following command: let @" = expand('<cfile>') The above method is not perfect, since expand('<cfile>') will also give you results even if your cursor is on a normal word (non-URL). A more sophisticated method would be to use an actual URL pattern and search the current line to get a valid URL. A good URL pattern is provided by the highlighturl plugin's highlighturl#default_pattern() method. With this knowledge, here is a more error-proof approach to get the current URL: let @" = matchstr(getline('.'), highlighturl#default_pattern()) Get diff between two buffers or files If we have two different versions of the same file and we want to find the differences between them, how do we do it inside Neovim? Suppose the two files are manual-v1.md and manual-v2.md; here is how to compare them inside Neovim. If you haven't started Nvim, you can run the following command: nvim -d manual-v1.md manual-v2.md This will start nvim in diff mode. If you are already inside Neovim with manual-v1.md open, first open manual-v2.md in a vertical split window (:vsplit manual-v2.md). Finally, run :windo diffthis to start comparing. Of course, you can use a horizontal split window, but a vertical split window is better for comparing the two files, IMO. License CC BY-NC-ND 4.0
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510498.88/warc/CC-MAIN-20230929054611-20230929084611-00027.warc.gz
CC-MAIN-2023-40
3,409
61
https://forums.comodo.com/t/cfp-v3-killed-my-secondry-hard-drives-help/215502
code
Not sure what's going on here, but my secondary hard drive doesn't show up as accessible when using the new CFP v3. I uninstalled it and went back to the old version; the drive shows up fine and I can access it perfectly. Uninstalled the old one, rebooted, installed the new one, rebooted, and again the drive isn't visible. It is also blocking my network connection, and nothing I do unblocks it. Any help would be appreciated, please - I have a lot of my work stuff on that drive. I had to fully uninstall it and go back to 2.4, which works perfectly.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00882.warc.gz
CC-MAIN-2023-40
518
4
https://veterans.careerarc.com/job-listing/booz-allen-hamilton-inc-jobs-solution-architect-director-32429190
code
Job Number: R0085267
Solution Architect Director
Lead the translation of the customer's IT needs and future goals into a plan by crafting system architecture products and design specifications while overseeing and applying principles of computer and information science to further the advancement of digital systems development, including open source, open data, MOSA, FACE, AI/ML, and smart systems. Transform the way intelligence agencies use technology, including Cloud migration, integrating advanced technology, and modernizing legacy systems. Mentor and coach the next set of developers to help them grow into tomorrow's solutions architects.
-10+ years of experience with the design, development, and delivery of software systems
-4+ years of experience with leading the definition and implementation of technical design
-5+ years of experience with leading the definition and implementation of analytics solution platforms
-3+ years of experience with developing software applications using technical stacks, including Java EE, Python, NodeJS, or .NET
-3+ years of experience with relational database management systems, including PostgreSQL, Microsoft SQL Server, Oracle DB, MySQL, or MariaDB
-2+ years of experience with the fundamentals of “Big Data” architecture and usage in addition to designing solutions on Cloud platforms, including AWS and Azure
-Experience with developing technology vision and cutting-edge solutions to include creating and managing technical implementation plans and leading implementations supporting government clients
-BA or BS degree in CS, Computer Engineering, or a technical field, or 15 years of experience in direct technical work
-Experience with communicating highly complex technical information clearly and articulately at all levels and audiences
-Experience in advanced data engineering solutions
-Experience with business development, capture, proposal, pricing, and review team procedures
-Experience with applying current and emerging technology integration solutions and trends, including AI/ML, Natural Language Processing (NLP), and DoD security and regulatory requirements
-Experience working with Cross Domain Solutions (CDS) for transition of products and data across security boundaries for large data sets and containerized solutions
-Knowledge of certification and accreditation processes for enterprise government solutions
-MA or MS degree in CS, Engineering, Mathematics, or a related field
-AWS/Azure Certified Solutions Architect or Certified Solutions Developer Certification
-Security+, CASP CE, CISSP, or Certified Ethical Hacker (CEH) Certification
-CCNA, Network+, or equivalent Certification
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information; TS/SCI clearance is required. We're an EOE that empowers our people—no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, veteran status, or other protected characteristic—to fearlessly drive change.
Apply on company website
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655919952.68/warc/CC-MAIN-20200711001811-20200711031811-00451.warc.gz
CC-MAIN-2020-29
3,097
24
https://library.queensu.ca/about-us/news-events/portage-dmp-assistant-temporary-shutdown
code
Portage DMP Assistant Temporary Shutdown
The Portage Data Management Plans (DMP) Assistant is receiving a major update on March 1. In preparation for the launch of DMP Assistant 2.0, the platform will be unavailable from Monday, February 22 to Monday, March 1, 2021 to allow for the final migration of data. Portage recommends that users and administrators of the DMP Assistant save and/or export any existing DMPs/data that they may require during the shutdown period. Any work that DMP Assistant users have completed prior to the shutdown will be migrated to the new 2.0 platform, along with any other existing data management plans (DMPs), templates and user data. Once launched, users will be able to access the DMP Assistant 2.0 service as usual with their existing credentials, and new users may create accounts as normal.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00727.warc.gz
CC-MAIN-2022-27
833
5
https://community.snaplogic.com/t/logs-missing-from-files/1273
code
I am using a custom logger (basically a Script snap that I wrote) which logs to a certain file. This logger is used in 4 pipelines of a project. The problem is that when I run all the pipelines together at the same time, logs initially appear in the file, but after a certain stage they no longer show up. Can anyone throw some light on this issue? What might be the problem? PS: the logger is said to be thread-safe (I have integrated java.util.logging in the Script snap).
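For context, with java.util.logging the usual way to let several concurrently running scripts write to one file is to share a single Logger backed by one FileHandler opened in append mode, so that concurrent runs in the same JVM do not overwrite or race on the file. The sketch below is a generic illustration only; the class name, logger name, and log path are made up and this is not the actual Script snap code.

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public final class SharedFileLogger {
    private static Logger logger;

    // Lazily create one Logger backed by one FileHandler opened in append mode.
    public static synchronized Logger get() throws IOException {
        if (logger == null) {
            logger = Logger.getLogger("pipeline.shared");              // illustrative name
            FileHandler handler = new FileHandler("/tmp/pipeline.log", true); // append = true
            handler.setFormatter(new SimpleFormatter());
            logger.addHandler(handler);
            logger.setUseParentHandlers(false);
        }
        return logger;
    }

    private SharedFileLogger() {}
}
```

Usage from each script would then be something like `SharedFileLogger.get().info("snap started");`.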
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000306.84/warc/CC-MAIN-20190626114215-20190626140215-00251.warc.gz
CC-MAIN-2019-26
465
2
https://juicedtech.forumbee.com/t/x1a42l/spread-a-record-over-2-lines?r=k9a4s6
code
Hi Peter, welcome and thanks for contributing! I am assuming you are referring to subtable data in your suggestion. We will note that for future consideration. However, there are some ways to achieve that with Exact Forms Plus as it is today. You can use formula fields to combine two or more fields together and use either "\n" if using a text field or "<br />" if using a Rich Text field in order to get your data onto multiple lines. We do this a lot ourselves. Here is an example: - the yellow highlighted section is 1 line per record - the green highlighted section shows how we can combine many fields from the same record together into a single field and display it across many lines. I hid the borders and headers so you don't even know by looking at it that the data is coming from a subtable.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711336.41/warc/CC-MAIN-20221208114402-20221208144402-00438.warc.gz
CC-MAIN-2022-49
802
6
https://access.redhat.com/documentation/en-us/red_hat_cloudforms/4.7/html/assigning_a_custom_analysis_profile_to_a_virtual_machine/create-vm-control-policy
code
Chapter 3. Creating a Virtual Machine Control Policy
You can create a control policy by combining an event, a condition, and an action. The procedure below describes how to create a virtual machine control policy to assign the newly-created action to the VM Analysis Start event. Optionally, you can use a scope expression that is tested immediately when the policy is triggered by an event. If the item is out of scope, then the policy will not continue on to the conditions, and the assigned action will not run.
- Navigate to → .
- Expand the Policies accordion, and click Control Policies.
- Select Vm Control Policies.
- Click (Configuration), then (Add a New VM and Instance Control Policy).
- Enter a Description. This will be the name given to your VM control policy.
- Clear the Active box if you do not want this policy processed even when assigned to a resource.
- Optional: Enter a Scope (you can also create a scope as part of a condition, or not use one at all). If the virtual machine is not included in the scope, the assigned action will not run. You can use the drop-down list to create an expression for the Scope. Based on what you choose, different options appear. Click (Commit expression element changes) to add the scope.
- Enter Notes if required.
- Click Add. The policy is added and listed under Vm Control Policies in the Policies accordion.
- Select the newly-added VM control policy. You can now associate events, conditions, and actions with the policy.
- Click (Configuration), then (Edit this Policy’s Event assignments).
- Under VM Operation, set VM Analysis Start to Yes.
- Click Save.
- Click the VM Analysis Start event to configure actions.
- Click (Configuration), then (Edit Actions for this Policy Event).
- In Order of Actions if ALL Conditions are True, select the action created in Chapter 2, Creating an Action to Assign the Virtual Machine Analysis Profile to the Analysis Task, from the Available Actions list. This action will take place if the resources meet the conditions of the policy. Note: Each selected action can be executed synchronously or asynchronously; a synchronous action will not start until the previous synchronous action is completed, while an asynchronous action allows the next action to start whether or not the first action has completed. Also, at least one CloudForms server in the CloudForms zone must have the notifier server role enabled for the trap to be sent.
- Click ( ) which will move the action to Selected Actions. The selected action is set to (S) Synchronous by default.
- From Selected Actions, select the action, then:
  - Click A (Set selected Actions to Asynchronous) to make it asynchronous.
  - Click S (Set selected Actions to Synchronous) to make it synchronous. If creating a synchronous action, use the up and down arrows to identify in what order you want the actions to run.
- Click Save.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.61/warc/CC-MAIN-20200813014639-20200813044639-00351.warc.gz
CC-MAIN-2020-34
2,874
24
http://www.pinballarcade.com/user-agreement.php
code
PINBALL ARCADE APPLICATION THIS AGREEMENT CONTAINS A BINDING ARBITRATION CLAUSE. PLEASE READ THE AGREEMENT CAREFULLY BEFORE ACCEPTING ITS TERMS AND CONDITIONS. 1. LIMITED USE LICENSE. FarSight Studios grants you the non-exclusive, non-transferable, limited right and license to install and use one copy of this Application solely and exclusively for your personal use. All rights not specifically granted under this Agreement are reserved by FarSight Studios. This Application is licensed, not sold. Your license confers no title or ownership in this Application and should not be construed as a sale of any rights in this Application. 2. OWNERSHIP. All title, ownership rights, and intellectual property rights in and to this Application (including but not limited to any trademarks, titles, computer code, themes, objects, characters, character names, stories, dialog, catch phrases, locations, concepts, artwork, animation, sounds, musical compositions, audio-visual effects, methods of operation, moral rights, any related documentation, and “applets” incorporated into this Application) are owned by FarSight Studios or its licensors. This Application is protected by the copyright laws of the United States, international copyright treaties and conventions and other laws. This Application contains certain licensed materials, and FarSight Studios’ licensors may also protect their rights in the event of any violation of this Agreement. 3. YOU AGREE THAT YOU WILL NOT DO ANY OF THE FOLLOWING: (a) exploit this Application or any of its parts commercially; (b) use this Application, or permit use of this Application, on more than one user device (e.g. computer, handset, PDA or other device) at the same time; (c) make copies of this Application or any part thereof, or make copies of any of its accompanying material; (d) sell, rent, lease, license, distribute, loan or otherwise transfer this Application, or any copies of this Application, without the express prior written consent of FarSight Studios; (e) reverse engineer, decompile, disassemble or otherwise reduce this Application to any human-perceivable form; (f) modify, adapt, translate or otherwise create derivative works based on this Application; (g) disable, modify or otherwise tamper with any anti-piracy/anti-hacking functionality of this Application; (h) remove, disable or circumvent any proprietary notices, marks or labels contained on or within this Application or its accompany material; or (i) export or re-export this Application or any portion, process, copy or adaptation hereof in violation of any applicable laws or regulations. YOU FURTHER ACKNOWLEDGE AND AGREE THAT, if the Application was provided to you for trial use (e.g. for “beta” testing, and/or for a limited trial period or number of uses): (j) you will not use the Application following the expiration of the permitted trial period or number of uses; and (k) the Application may include code designed to prevent you from exceeding these limits, and such code may remain on your user device after deletion of the Application in order to prevent you from installing another copy and repeating the trial period or extending the number of uses. 4. DISCLAIMER OF WARRANTY. 
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW: (a) THIS APPLICATION IS PROVIDED “AS IS,” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE; (b) THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THIS APPLICATION IS WITH YOU; (c) FARSIGHT STUDIOS WILL HAVE NO LIABILITY TO YOU FOR ANY REASON BASED ON YOUR USE OF THIS APPLICATION UNLESS SUCH WARRANTIES ARE LEGALLY INCAPABLE OF EXCLUSION; AND (d) FARSIGHT STUDIOS’ ENTIRE LIABILITY AND YOUR EXCLUSIVE REMEDY WITH RESPECT TO THE USE OF ANY SOFTWARE PROVIDED BY OR ON BEHALF OF FARSIGHT STUDIOS WILL BE THE REPLACEMENT OF ANY FARSIGHT STUDIOS SOFTWARE FOUND TO BE DEFECTIVE. SOME JURISDICTIONS MAY NOT ALLOW (OR MAY LIMIT) DISCLAIMERS OF CERTAIN WARRANTIES, IN WHICH CASE THE FOREGOING DISCLAIMERS WILL BE ENFORCED TO THE MAXIMUM EXTENT PERMITTED BY LAW. 5. LIMITATION OF LIABILITY. TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW: (a) IN NO EVENT WILL FARSIGHT STUDIOS BE LIABLE FOR SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES RESULTING FROM USE, POSSESSION, MISUSE OR MALFUNCTION OF THIS APPLICATION, INCLUDING WITHOUT LIMITATION DAMAGE TO PROPERTY, LOSS OF GOODWILL, COMPUTER OR HANDHELD DEVICE FAILURE OR MALFUNCTION AND DAMAGES FOR PERSONAL INJURY, EVEN IF FARSIGHT STUDIOS HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES; AND (b) FARSIGHT STUDIOS’S LIABILITY WILL IN NO EVENT EXCEED THE ACTUAL PRICE PAID FOR THE LICENSE TO USE THIS APPLICATION. SOME JURISDICTIONS MAY NOT ALLOW CONTRACTUAL LIMITATIONS ON HOW LONG AN IMPLIED WARRANTY LASTS AND/OR EXCLUSION OR LIMITATION OF LIABILITY FOR SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES, IN WHICH CASE FARSIGHT STUDIOS’S WARRANTY PERIOD AND LIABILITY WILL BE LIMITED TO THE MAXIMUM EXTENT PERMITTED BY LAW. 6. TERMINATION. Without prejudice to any other of FarSight Studios’ rights or of your obligations hereunder, the limited license set forth in Section 1 of this Agreement will terminate automatically if you fail to comply with the terms and conditions of this Agreement. In such event, you must destroy all copies of this Application and all of its component parts and related materials. 8. INJUNCTIVE RELIEF. Because FarSight Studios would be irreparably damaged if the terms of this Agreement were not specifically enforced, you agree that FarSight Studios will be entitled, without bond, other security or proof of damages, to appropriate equitable remedies with respect to breaches of this Agreement, in addition to any and all other remedies which FarSight Studios may have under applicable laws. 9. INDEMNITY. You agree to indemnify, defend and hold FarSight Studios, its partners, affiliates, contractors, officers, directors, employees and agents harmless from all damages, losses and expenses arising directly or indirectly from your acts and omissions to act in using the Product pursuant to the terms of this Agreement BY DOWNLOADING THIS TITLE, YOU AGREE THAT YOU HAVE READ AND UNDERSTAND THIS AGREEMENT AND THAT YOU WILL BE BOUND BY AND COMPLY WITH IT.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710605589/warc/CC-MAIN-20130516132325-00038-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
6,334
11
https://rdrr.io/bioc/rhdf5/
code
This package provides an interface between HDF5 and R. HDF5's main features are the ability to store and access very large and/or complex datasets and a wide variety of metadata on mass storage (disk) through a completely portable file format. The rhdf5 package is thus suited for the exchange of large and/or complex datasets between R and other software packages, and for letting R applications work on datasets that are larger than the available RAM.
Author: Bernd Fischer [aut], Mike Smith [aut, cre] (<https://orcid.org/0000-0002-7800-3848>), Gregoire Pau [aut], Martin Morgan [ctb], Daniel van Twisk [ctb]
Bioconductor views: DataImport, Infrastructure
Maintainer: Mike Smith <[email protected]>
Package repository: View on Bioconductor
Install the latest version of this package by entering the following in R:
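The install command itself did not survive extraction; the snippet below shows the standard Bioconductor installation route via BiocManager, followed by a tiny usage sketch in which the file name and dataset name are purely illustrative.

```r
# Install rhdf5 from Bioconductor via BiocManager.
if (!requireNamespace("BiocManager", quietly = TRUE))
    install.packages("BiocManager")
BiocManager::install("rhdf5")

# Minimal usage sketch: write a matrix to an HDF5 file and read it back.
library(rhdf5)
h5createFile("example.h5")                      # illustrative file name
h5write(matrix(1:12, nrow = 3), "example.h5", "demo_matrix")
m <- h5read("example.h5", "demo_matrix")
h5ls("example.h5")                              # list the contents of the file
```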
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644309.7/warc/CC-MAIN-20230528150639-20230528180639-00681.warc.gz
CC-MAIN-2023-23
1,016
9
https://www.futurelearn.com/info/blog/my-first-month-at-futurelearn
code
Matt Hill is an experienced developer/designer who recently joined FutureLearn as a Front-end Developer. In this post he shares some thoughts about his first month working on the FutureLearn platform, compared to previous roles delivering client solutions. I’ve been lucky enough to have worked in digital for over 20 years. In that time I’ve been involved in hundreds of client projects, ranging from small week-long projects to large projects lasting many months. Joining FutureLearn is a new experience for me: it’s the first time I will be working on the long-term development of a well-funded digital platform. So what’s different about working at FutureLearn compared to working for an agency delivering client work? It turns out that there’s loads, but I’ll look at four areas that stand out the most: - A culture of collaboration - Openness and transparency - Prioritising quality - Lots to learn A culture of collaboration A big problem I’ve experienced in agency work is the lack of deep collaboration between team members. A typical example is that designers and developers often don’t work closely enough and end up working in silos, throwing their work over an imaginary wall at each other and expecting great results to be handed back. Anyone who’s experienced this will tell you that it doesn’t work. At FutureLearn, better collaboration between designers and developers is achieved through their product teams. A product team is responsible for developing a distinct part of the FutureLearn platform. Currently these product teams are Discovery, Premium and Learning Experience, and each product team contains designers, developers and a product manager. I’ve joined the Learning Experience team and found that the day-to-day collaboration far exceeds my experiences in agencies. Team members work together in a variety of ways, including: - Sprint planning - Slack chats - Group meetings The level of trust and understanding that everyone has within the team seems deeper and more honest than my experiences in other companies. There’s a refreshing lack of politics: everyone is working towards the same goals, so the petty conflicts that I’ve experienced elsewhere just don’t seem to exist here. Everyone is very happy to answer questions or take time out of their own schedule to help others. Staff are also encouraged to: - Write blogs and articles - Give talks - Attend presentations - Share whatever they think is valuable to the company These activities encourage people to put their individual strengths into the team melting pot, from which everyone benefits. One of the best experiences I had was in my first week: an internal Barcamp that had team members teaching each other everything from how to solve a Rubik’s cube to how to complete a cryptic crossword. While those sessions may not contribute to FutureLearn directly, the team atmosphere was incredible and the Barcamp received a unanimous verdict of “Awesome!” I can’t think of any company I’ve worked for that has built so much of their culture around collaboration and sharing. It really is a breath of fresh air. Openness and transparency Openness and transparency are the ubiquitous buzzwords of our time. Yet there’s often been a lack of transparency in the agencies I’ve worked in. Smaller agencies tend not to be as open with their staff, and a company’s long term goals and targets are often unclear. This can create a closed culture which can encourage a “them and us” attitude, which really misses one of the main benefits of working in a team. 
FutureLearn makes a lot of effort to be transparent. In my first few weeks, I attended all sorts of meetings and presentations, many of which looked in detail at the FutureLearn vision, targets and long-term goals. Everyone is trusted to know this information: not just because it’s interesting (and it is), but because it helps the team to really feel like we’re all working towards a joint goal. As part of my induction, I was also invited to many introductory meetings with key members of staff. These were 30-minute sessions where new staff learn about the details of a specific function of FutureLearn and are able to ask questions. I’ve attended several of these sessions in my first month and as a result, I now have a good grasp of the inner workings of FutureLearn. The experience was a revelation: I’ve never had this insight so early on working for any other company. Yes, it’s costly in terms of people’s time, but it’s extremely worthwhile and helps to embed new joiners quickly into the FutureLearn culture. When everyone understands what we’re working towards, and has the same motivations for doing so, we can work much better together. It might sound obvious, but a shared vision can only be implemented when the vision really is shared. Prioritising quality Most agencies would like to have more time to deliver client work, but often it’s simply not possible due to short deadlines and tight budgets. There’s often less time for planning and strong demands to ship things that are “good enough”. This approach, while highly productive, can have negative outcomes: - It can produce inferior work - It may disappoint clients - It contributes to unhappy staff FutureLearn recognises this and places more emphasis on doing things well than doing them quickly. The platform development is run according to agile principles and features are deployed when they’re ready. Sprints run for two weeks, and while the intention is to complete a number of stories in a sprint, it’s not a failure if those stories aren’t completed. This agile approach allows people to spend more time on the deep thinking required to make the FutureLearn platform robust, delightful and future-proof. The ability to take our time over the development of the platform should not be underestimated. It’s a powerful motivator to know that you can do your best work when the time pressures are somewhat reduced. Of course, that’s not to say people aren’t working hard. Far from it, the buzz of activity is almost tangible and it’s a great feeling to share completed work with both colleagues and learners using the platform. Lots to learn Of course, there are also frustrations when starting a new job. My first month at FutureLearn has been really good, but there have been a few bumps too. In previous jobs when I’ve started, there’s often been pressure to deliver production-quality code before my first tea-break. This can be easier to do when using a familiar workflow, such as creating front-end templates from PhotoShop files. As a result, I’ve always expected to contribute from day one. At FutureLearn, though, it’s not been possible for me to be immediately productive. Why? Lack of knowledge and lack of context. The FutureLearn platform is huge, and there’s too much that I needed to understand before I could deliver production-ready code. It didn’t help that my technical knowledge was different: I was coming from a Windows platform, and FutureLearn is all Mac based. I’d also had no experience with Ruby or HAML, and my Git experience was limited. 
This lack of knowledge created stumbling blocks to my productivity and left me feeling frustrated that I wasn’t contributing soon enough. Thankfully, I’ve been assured that this is absolutely normal. No-one is expecting me to understand every line of code within my first month. I need to learn to accept that this initial period is for me to get up to speed in understanding the platform and codebase. I guess having high expectations of myself can sometimes cause frustration, but it’s nice to know that I am being given the time to find my place and grow into the role. In my first month, I’ve been enthused and impressed by the FutureLearn culture. It’s refreshingly collaborative and I’m really excited to see how my role will unfold in such a progressive and open workplace. Want to know more about what it’s like to work here? Check out more of our “Making FutureLearn” posts.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100545.7/warc/CC-MAIN-20231205041842-20231205071842-00436.warc.gz
CC-MAIN-2023-50
8,069
45
http://www.thezorklibrary.com/phpbb/viewtopic.php?p=3004
code
hey, i am playing zork nemesis, same old problem with the panning. i've tried cpu killer, game speed adjuster, and your panning ratio program. However, i am still unable to slow it down enough to click the window from the rope in the monastery. maybe with your program i am misunderstanding how to use it? but i put the panning ratios down quite low. it does slow down, but nowhere near enough for me to click the window from the rope. all it seems to be doing is limiting how far to the right/left my mouse can go [is this the point of the program?]. i have a GeForce 9200 graphics card, and a gig and a half of ram. any ideas on how i might be able to make it past the window? maybe i am doing something wrong? apologies bout the long post, but this is my childhood
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706298270/warc/CC-MAIN-20130516121138-00008-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
764
2
https://gtacknowledge.extremenetworks.com/articles/Solution/Connect-Fusion-menu-will-not-appear-for-user-created-account
code
Connect/Fusion menu will not appear for user-created account
The Connect/Fusion menu will not appear for a user-created account, although it does appear for the default root account. Applies to NetSight Suite 7.x OneView. The issue happens because the Connect/Fusion menu is hardcoded to work only with the default NetSight Administrator authorization group. When creating a user group, instead of selecting individual options, select the default administrator group: Console --> Authorization/Device Access --> Users/Groups --> user account --> Edit --> Authorization Group --> NetSight Administrator.
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549429548.55/warc/CC-MAIN-20170727222533-20170728002533-00179.warc.gz
CC-MAIN-2017-30
555
7
https://docs.citrix.com/en-us/licensing/11-16-3/licensing-elements.html?lang-switch=true
code
The License Server comprises several licensing elements:
- Citrix vendor daemon
- Options configuration file
- Citrix Licensing Customer Experience Improvement Program (CEIP) and Call Home
- License request process
This diagram shows the Citrix product using TCP/IP to connect to the License Server.

Citrix vendor daemon
The Citrix vendor daemon (CITRIX), a process that runs on the License Server, grants licenses. The Citrix vendor daemon tracks the number of licenses that are checked out and which product has them. Citrix products communicate with the Citrix vendor daemon using TCP/IP. By default, the Citrix vendor daemon uses TCP/IP port 7279.

The options file (Citrix.opt) is a License Server configuration file. The Citrix vendor daemon reads this file every time it restarts or receives a command to reread the file. This configuration file defines licensing behavior: the number of licenses a product server can use, the location of the System Logs, logging levels, and other user-defined customizations. The Customer Experience Improvement Program (CEIP) and Call Home configurations are stored in this file. Flexera offers ways to edit the options file to control, reserve, or limit licensing usage. The Flexera methods are not compatible with Citrix licenses. Therefore, we do not support those editing options.

Citrix licensing Customer Experience Improvement Program (CEIP) and Call Home
The Citrix Licensing CEIP and Call Home usage and analytics programs are voluntary data collection programs designed to improve your product experience. After installing the License Server, you can participate in the programs anonymously or choose to be identified. Internet access is required. For information about configuring a proxy server, see Configure a proxy server for use with Citrix Licensing Manager, Customer Experience Improvement Program (CEIP), and Call Home in the Get started article.
CEIP is enabled by default during License Server installation. You can change your participation in the program at any time by using the Citrix Licensing Manager.
The Citrix Service Provider program requires CEIP and Call Home. If you have Citrix Service Provider licenses installed, you can change the settings, but you cannot disable CEIP or Call Home. When the License Server detects Citrix Service Provider licenses, it enforces daily uploads.
When installing licensing on the command line, use the optional CEIPOPTIN parameter to specify whether, or how, to opt in to CEIP or Call Home. The default is CEIP. The values are:
- Diagnostic - Call Home
- Anonymous - CEIP
- None
For more command-line installation information, see the “Use the command line to install licensing” section under Install licensing components for Windows.

Citrix Licensing Customer Experience Improvement Program (CEIP)
CEIP is voluntary. When you opt in, the CEIP services running in Citrix products gather anonymous configuration and usage data from your deployment. The services automatically send the data to Citrix once a day, based on the service start time. CEIP collects these classes of data:
- Configuration data
- Performance and reliability data
How your privacy is protected:
- Citrix does not collect any personally identifiable data.
- A random identifier is created at install time, which tracks data transfers over time.
- Citrix does not record information such as IP addresses, server names, or domain names.
- All data is sent using HTTPS directly to Citrix servers - no third-party data hosting services. 
- All data is secured on Citrix servers and is accessible only by authorized individuals.

Citrix Call Home
Call Home is voluntary. When you opt in, Call Home performs a periodic collection of system and product configuration, performance, errors, and more. The data identifies you as a customer. This information is transmitted to Citrix Insight Services once a day, based on the service start time. Citrix support and product teams use the information to resolve issues proactively.

License request process
When a product requests a license from the License Server, the Citrix vendor daemon determines whether a license is available for the request. The license request process has two phases: the product start-up phase and the user connection phase.
Product start-up phase:
- When a Citrix product starts, it retrieves the License Server location from its data store.
- The product connects to the Citrix vendor daemon.
- The product checks out a startup license.
User connection phase:
- A user connects to a computer running the Citrix product.
- The product requests a license from the License Server.
- The Citrix vendor daemon checks to see if any licenses are available and grants or denies the product’s request.
- The license module in the product grants or denies the use of the product based on the response from the Citrix vendor daemon.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817670.11/warc/CC-MAIN-20240420153103-20240420183103-00147.warc.gz
CC-MAIN-2024-18
4,841
41
https://pysdl2.readthedocs.io/en/rel_0_9_7/modules/sdl2ext_window.html
code
Window routines to manage on-screen windows

Window(title : string, size : iterable[, position=None[, flags=None]])
The Window class represents a visible on-screen object with an optional border and title text. It represents an area on the screen that can be accessed by the application for displaying graphics and receiving and processing user input. The position to show the Window at is undefined by default, letting the operating system or window manager pick the best location. The behaviour can be adjusted through the DEFAULTPOS class attribute:
Window.DEFAULTPOS = (10, 10)
The created Window is hidden by default, which can be overridden at the time of creation by providing other SDL window flags through the flags parameter. The default flags for creating Window instances can be adjusted through the DEFAULTFLAGS class attribute:
Window.DEFAULTFLAGS = sdl2.SDL_WINDOW_SHOWN

create() → None
Creates the underlying SDL2 window. This method does nothing if the window was already created.

open() → None
Creates and shows the window.

close() → None
Closes the window, implicitly destroying the underlying SDL2 window.

refresh() → None
Refreshes the entire window surface. This only needs to be called if an SDL_Surface was acquired via get_surface() and is used to display contents.
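A minimal usage sketch for the Window class described above, assuming the sdl2.ext API of PySDL2 0.9.x; the title, size, position, and fill colour are arbitrary.

```python
import sdl2
import sdl2.ext

sdl2.ext.init()

# Create a window; the flags override the hidden-by-default behaviour.
window = sdl2.ext.Window("Window demo", size=(640, 480), position=(100, 100),
                         flags=sdl2.SDL_WINDOW_SHOWN | sdl2.SDL_WINDOW_RESIZABLE)
window.show()

# Draw onto the window's SDL_Surface and push it to the screen with refresh().
surface = window.get_surface()
sdl2.ext.fill(surface, sdl2.ext.Color(30, 30, 30))
window.refresh()

# Simple event loop: keep the window open until it is closed.
running = True
while running:
    for event in sdl2.ext.get_events():
        if event.type == sdl2.SDL_QUIT:
            running = False
sdl2.ext.quit()
```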
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570879.1/warc/CC-MAIN-20220808213349-20220809003349-00646.warc.gz
CC-MAIN-2022-33
1,230
17
https://www.eecs.mit.edu/academics-admissions/academic-information/subject-updates-spring-2018/6882
code
Prerequisites: 6.867, 6.041B, or 6.436, 18.06 Instructor: Professor Tamara Broderick ([email protected]) Schedule: TR2:30-4, room 3-270 This subject counts as an Artificial Intelligence concentration subject. This course will cover Bayesian modeling and inference at an advanced graduate level. Topics include de Finetti's theorem, decision theory, approximate inference (modern approaches and analysis of Monte Carlo, variational inference, etc), hierarchical modeling, (continuous and discrete) nonparametric Bayesian approaches, sensitivity and robustness, and evaluation.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057427.71/warc/CC-MAIN-20210923165408-20210923195408-00245.warc.gz
CC-MAIN-2021-39
575
5
http://java.sys-con.com/node/46658
code
By Mike Edwards, Tim Ellison | October 6, 2004 12:00 AM EDT
The Async IO package is designed to provide fast and scalable input/output (IO) for Java applications using sockets and files. It provides an alternative to the original synchronous IO classes available in the java.io and java.net packages, where scalability is limited by the inherent "one thread per IO object" design. It also provides an alternative to the New IO package (java.nio), where performance and scalability are limited by the polling design of the select() method. As its name implies, the Async IO package provides asynchronous IO operations, where the application requests an IO operation from the system, the operation is executed by the system asynchronously from the application, and the system then informs the application when the operation is complete. The Async IO package supports a number of styles of application programming and gives the application designer considerable freedom in the management of the number of threads used to handle IO operations and also in the design of the components that handle the asynchronous notifications. Why Java Applications Need the Async IO Package The question "Why do Java applications need the Async IO package?" can be answered in two words: performance and scalability. Performance and scalability are key attributes of the IO system for IO-intensive applications. IO-intensive applications are typically, although not exclusively, server-side applications. Server-side applications are characterized by the need to handle many network connections to many clients and also by the need to access many files to serve requests from those clients. The existing standard Java facilities for handling network connections and files do not serve the needs of server-side applications adequately. The java.io and java.net packages provide synchronous IO capabilities, which require a one-thread-per-IO-connection style of design, which limits scalability since running thousands of threads on a server imposes significant overhead on the operating system. The New IO package, java.nio, addresses the scalability issue of the one-thread-per-IO-connection design, but the New IO select() mechanism limits performance. Current operating systems, such as Windows, AIX and Linux, provide facilities for fast, scalable IO based on the use of asynchronous notifications of IO operations taking place in the operating system layers. For example, Windows and AIX have IO Completion Ports, while Linux has the sys_epoll facility. The Async IO package aims to make these fast and scalable IO facilities available to Java applications through a package that provides IO capabilities linked to an asynchronous style of programming. The current version of the Async IO package, com.ibm.io.async, is designed as an extension to the Java 2 Standard Edition 1.4, which can in principle be provided on any hardware and software platform. The platforms currently supported by the package include Windows, AIX, Linux, and Solaris. Elements of the Async IO Package The major elements of the Async IO package are the classes AsyncFileChannel, AsyncSocketChannel, and AsyncServerSocketChannel. The channels represent asynchronous versions of files, sockets, and server sockets. These fundamental classes are designed to be similar in naming and in operation to the channel classes of the New IO package. Good news for Java programmers familiar with the New IO package. 
AsyncFileChannels and AsyncSocketChannels provide asynchronous read and write methods against the underlying file or socket. An asynchronous operation is a request to the system to perform the operation, where the method returns immediately to the calling application regardless of whether the operation has taken place or not. Instead of providing a return value that gives information about the operation, such as the number of bytes read/written, asynchronous read and write operations return objects that implement the IAsyncFuture interface. The IAsyncFuture interface is another important component of the Async IO package. First an IAsyncFuture represents the state of the asynchronous operation - most important, whether the operation has completed or not. Second, the IAsyncFuture provides methods that return the result of the operation once it has completed. An IAsyncFuture can throw exceptions as well as the normal outcome of the operation, if something goes wrong during the operation. The application uses one of three methods to find out whether a particular operation has completed: - Polling: Calls the isCompleted() method of the IAsyncFuture, which returns true once the operation is complete - Blocking: Uses the waitForCompletion() method of the IAsyncFuture, which can be used either to wait for a specified period or to wait indefinitely for the operation to complete - Callback: Uses the addCompletionListener() method of the IAsyncFuture, so the application can register a method that's called back by the system when the operation completes Data Formats Supported by Asynchronous Read and Write Operations The read and write operations supplied by the Async IO package use the ByteBuffer class to hold the data. This class is the same as the one used in the New IO package. One difference between the Async IO package and the New IO package is that the ByteBuffers used for the Async IO package must be Direct ByteBuffers. Direct ByteBuffers have the memory for their content allocated in native memory outside the Java Heap. This provides better performance for IO operations since the operating system code can access the data in the buffer memory directly, without the need for copying. ByteBuffers can be viewed as buffers supporting other primitive types, such as Int, Float, or Char, using methods such as bytebuffer.asIntBuffer(). ByteBuffers also have a series of methods that support the reading and writing of primitive types at arbitrary locations in the ByteBuffer using methods like bytebuffer.putLong( index, aLong). Simple Examples of Async IO Read and Write Operations Listing 1 shows the use of an AsyncSocketChannel as a client socket that involves connecting the socket to a remote server and then performing a read operation. In this example, the blocking style is used to wait for asynchronous operations to complete. Listing 2 is a program fragment that shows the use of a callback to receive the notification of the completion of an asynchronous operation. This fragment shows just some of the methods of a class that is handling socket IO. It's assumed that an AsyncSocketChannel has already been opened and connected, that a direct ByteBuffer is available, and that an object named "state" tracks the state of the IO. When the IO operation is requested (channel.read( ... )) an IAsyncFuture is returned. The next step is to give the IAsyncFuture a callback method by calling the addCompletionListener( ... ) method. The callback method gets called when the operation completes. 
The callback method is the futureCompleted( ... ) method that forms part of a class that implements the ICompletionListener interface. In this example, the class with the callback is the same as the class that makes the read request (so "this" is used as the first parameter in the addCompletionListener method). The signature of the futureCompleted ( ... ) method is fixed: its parameters are an IAsyncFuture object that represents the operation and, second, an object that holds the application state, which is associated with the IAsync-Future through the addCompletion-Listener( ... ) method where it forms the second parameter (in this example, we use the object called "state"). The futureCompleted( ... ) method is called when the operation completes. It is possible that the operation is complete before the completion listener is added to the future. If this happens, the futureCompleted( ... ) method is called directly from the addCompletionListener( ... ) method, without any delay. The futureCompleted( ... ) method receives the future object relating to the completed operation, plus the application state object. Beyond the Basics: Multi Read/Write Operations and Timeouts The previous sections described the basic functions available as part of the Java Async IO package. The package also supplies more advanced interfaces for asynchronous IO. The first advanced interface supplies the capability to perform read and write operations using multiple buffers for the data. The second advanced interface provides a time-out on the asynchronous IO operation. Both the multi read/write operations and the time-out facility are provided by the AsyncSocketChannelHelper and AsyncFileChannelHelper classes. This is done to keep the interface to the Async-FileChannel and AsyncSocketChannel classes as straightforward as possible. Create an AsyncSocketChannelHelper object by wrapping an existing AsyncSocketChannel. An AsyncFileChannelHelper is created by wrapping an existing AsyncFileChannel object. All operations on the channel helper object apply to the underlying asynchronous channel. The multi read/write operations take ByteBuffer arrays as input and return IAsyncMultiFuture objects. IAsyncMultiFuture objects differ from IAsyncFuture objects only in that they have a getBuffers() method that returns the ByteBuffer arrays involved in the operation in place of the getBuffer() method, which relates to the single buffer read/write operations. The multi read/write operations are useful for applications that need to send or receive data that's best handled by multiple buffers, perhaps where different elements of the data are handled by different application components (see Listing 3). The time-out operations provided by the AsyncSocketChannelHelper and AsyncFileChannelHelper classes are versions of the basic read and write operations that have a time-out period applied to them. The basic read and write operations of asynchronous channels can in principle take forever to complete. This is particularly a problem for an application that uses the callback technique to get notified that the operation is complete, since the callback might never get called if the operation does not complete. The use of the time-out versions of the operations guarantees that the IAsyncFuture will complete when the time-out expires, even if the underlying read/write operation does not complete. If the time-out expires, the IAsyncFuture completes with an AsyncTimeoutException. 
In addition, the underlying operation is cancelled (equivalent to invoking the IAsyncFuture cancel(future) method). Note that using the time-out versions of read and write are different from using the IAsyncFuture waitForCompletion( timeout ) method (see Listing 4). waitForCompletion provides a time-out for the wait on the completion of the IAsyncFuture. If this time-out expires, control is returned to the application, but the IAsyncFuture is not completed and the underlying read/write operation is still underway. By contrast, if the time-out expires on the AsyncChannelHelper read/write methods, the IAsyncFuture is completed (with an AsyncTimeoutException) and the underlying operation is cancelled. An important point about operations that time out is that the state of the channel is left indeterminate. Once an operation is cancelled, it's unlikely that the channel can be used again and the safe option is for the application to close the channel. Asynchronous IO Thread Management If you write an application program that uses the callback method to get notifications that asynchronous IO operations have completed, you need to understand which Java threads are used to run the callbacks. The threads used to run the callbacks will run application code. If your application code needs the threads to have any special characteristics, such as specific context information or security settings, this could cause problems for your application code unless your application carefully controls the actual threads that are used to run the callbacks. The threading design of the Async IO package is outlined in Figure 1. Applications make requests to the package for Async IO operations. The requests are passed to the operating system's IO functions. When the operations complete, notifications of their completion are passed back to the Async IO package and are initially held in an IO Completion Queue. The Async IO package has a set of one or more Java threads that it uses to process the notifications in the IO Completion Queue. Notifications are taken from the Completion Queue, and the IAsyncFuture related to the operation is marked as completed. If a Callback Listener has been registered on the IAsyncFuture, the Callback Listener method is called. Once the CallBack Listener method finishes, the thread returns to the Async IO package and is used to process other notifications from the Completion Queue. By default, the Async IO package uses its own Result Thread Manager to manage the threads that handle the callbacks. It allocates a number of threads, typically equal to the number of processors on the system. These threads are vanilla Java threads with no special characteristics. However, the application can control the threads in one of two ways. The application can override the default Result Thread Manager by calling the setResultThreadManager(IResult-ThreadManager) method of the Abstract- AsyncChannel class. The application must supply its own manager class that implements the IResultThreadManager interface, which defines the full life cycle for threads used by the Async IO package. The IResultThreadManager interface provides control over the policies applied to the result threads, including the timing of creation and destruction, the minimum and maximum numbers of threads, plus the technique used for creation and destruction of the threads. 
Alternatively, the application can use the default IResultThreadManager implementation provided by the Async IO package, but control the nature of the threads used to handle results and callbacks. This is done by supplying the default IResultThreadManager implementation with an application-defined IThreadPool object, by calling the set-ThreadPool( IThreadPool ) method on the IResultThreadManager. This allows the application to control the nature of the threads used in the Result Thread Manager. For example, application data can be attached to the thread or specific security settings applied to the thread, or the threads used in the IResultThreadManager can be cached by the IThreadPool. Performance is one of the important reasons for using the Async IO package. How does its performance stack up against the original synchronous Java IO and also against the New IO package? Performance is a complex issue, but a simple test provides some guidance. The test uses Socket IO with multiple clients communicating with a single server. Each client performs repeated operations, writing 256 bytes to the server and reading a 2,048 byte response from the server. For the test, the clients are always the same code, but three variations of the server code are used: - Synchronous Server, using the original Java IO classes - New IO Server, using the New IO classes - Asynchronous IO Server, using the Async IO package We ran the tests with a Windows 2000 single processor server system and a Windows Server 2003 four-way system running the clients, connected via a 100Mb Ethernet network, with varying numbers of client sockets each performing a connect followed by 50 read/write cycles with the server. The results are shown in Table 1, which provides the data for the average time in microseconds to complete each read/write cycle, quoted with and without the startup time included. The startup time is the time taken for the client socket to connect to the server before any data is transmitted. (If you're surprised that the four-way server system is used to drive the client side for this test, it's used to ensure that the very large number of clients can be created successfully.) The last two cases involve running with a number of inactive client sockets, which are connected to the server but are not transmitting any data during the test. This is more typical of a real Web server. These inactive sockets are a load for the server to handle alongside the active sockets. This shows the Async IO, New IO, and Sync servers are all similar in terms of average times in lightly loaded situations. The failure of the Sync server to handle the case of 7,000 total clients shows its limitations in terms of scalability. The figures for the New IO server show that the performance suffers as the number of clients rise. In particular the New IO server shows a marked rise in the overhead for starting up new connections as the number of connections rises. The Async IO server manages to achieve reasonably stable performance right through the range tested, both for startup time and for the read/write cycle time. These simple tests show that the Async IO package is able to deliver on its promise of performance and scalability and can form part of the solution for server applications intended to handle many thousands of clients. Pitfalls to Avoid As with the use of any API, there are some aspects of the Async IO API that you need to think about to avoid problems. 
You need to be careful with the use of the ByteBuffers that are used in the read and write methods of asynchronous channels. Because the IO operations occur asynchronously, there is the potential for the Async IO package to use the ByteBuffers at the same time as the application code. The rule to follow in order to avoid trouble is that the application code should not access the ByteBuffers from the time that an asynchronous read or write operation is requested until the point that the Async IO package signals that the operation is complete. Any attempt by the application to access the ByteBuffers before the operation is complete could cause unpredictable results. Asynchronous channels provide facilities for the cancellation of asynchronous IO operations. These include the explicit cancel() method available on the futures returned by operations on asynchronous channels, and also the implicit cancellation that takes place as part of the time-out of an IO operation on an AsyncSocketChannelHelper or AsyncFileChannelHelper. If an operation is cancelled, the underlying channel (file or socket) is left in an indeterminate state. Because of this, your application should not attempt to perform any more operations on the channel once cancellation has occurred. The best thing to do is to close the channel as soon as possible. The performance of read and write operations using Async IO is designed to be as close as possible to the performance of equivalent synchronous IO operations. However, there is some extra overhead involved in running an asynchronous operation compared with a synchronous operation, associated with setting up and executing the asynchronous notifications. The implication of this is that asynchronous reads and writes involving very small packets of data (i.e., a few bytes only) are going to have a significantly higher overhead than synchronous equivalents. You should take this into account when designing your application to use Async IO. The Java Async IO package provides valuable facilities for fast, scalable Socket and File IO, which are an alternative to the use of java.io and java.nio facilities in client-side and server-side applications. The package also assists the program design by providing an event-driven interface for IO operations that is simple to use. |Mike Edwards 10/26/04 10:55:01 AM EDT| Please email me directly if you would like to discuss your question about NIO in more detail - I'd prefer to keep this discussion thread dedicated to Async IO. |Paul 10/25/04 10:38:17 AM EDT| Your article is great! I used NIO for a socket server, could you help me out with a question? NIO sends messages as bytes between client and server, and I have found many samples that deliver string messages acting as an HTTP server. How can I deliver and parse a message wrapped in an object instead of only a string? Could you give me some clues or any samples? Thank you very much. |Mike Edwards 10/25/04 08:27:39 AM EDT| Your question about why NIO performs less well than the original synchronous IO is an interesting one. Fundamentally, NIO is less about performance and more about scalability. Synchronous IO demands one thread per socket and most operating systems limit the number of threads. New IO allows many sockets per thread and so allows a much greater number of sockets per application. The figures in our article show this lack of scalability of synchronous IO. In terms of performance, New IO has to do the same read and write calls to the operating system that are done by synchronous IO.
However, New IO requires the use of the Selector and the management of the key sets - this is an overhead. Synchronous IO by contrast has the overhead of thread switching between the many threads. At low numbers of sockets, the difference in the overheads is not significant, except that the setup time for putting a new channel into the Selector makes New IO slower to add a new channel (note: our code caches the threads used by synchronous IO). At high numbers of sockets, the time to insert a channel into the Selector climbs as does the time to do the Select operation, due to the data structures used to hold the select list. Thread switch time does not increase as much - so making New IO performance look worse at high numbers of sockets. We shall look to make our performance test code available on the AIO4J site, so that you can take a look at how the server code compares between Sync IO, New IO and AIO4J. |Bret Hansen 10/23/04 12:17:20 PM EDT| So your test shows that the nio package is slower than the original synchronous API. Can you explain why? I haven't looked at your code yet. |Mike Edwards 10/13/04 03:11:59 AM EDT| |Csaba 10/12/04 05:12:32 AM EDT| Nevermind, found it... |Csaba 10/12/04 05:10:31 AM EDT| Where are the tables/images for this article ? I was really interested in that comparison chart, but couldn't find the link...
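Listings 1-4 and Table 1 referenced in the article (and asked about in the comments above) are not included in this text. The sketches below illustrate the blocking and callback styles the article describes. The com.ibm.io.async type and method names follow the prose (AsyncSocketChannel, IAsyncFuture, waitForCompletion(), addCompletionListener(), ICompletionListener.futureCompleted(), getBuffer()), but the exact signatures - in particular how a channel is opened and connected, and the result accessor - are assumptions for illustration only, not the real API.

```java
// Sketch of the blocking style (in the spirit of Listing 1): connect a client
// socket and wait for an asynchronous read to complete.
// NOTE: open(), connect(), and getByteCount() are assumed names, not confirmed API.
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;

import com.ibm.io.async.AsyncSocketChannel;
import com.ibm.io.async.IAsyncFuture;

public class BlockingStyleClient {
    public static void main(String[] args) throws Exception {
        AsyncSocketChannel channel = AsyncSocketChannel.open();          // assumed factory method
        channel.connect(new InetSocketAddress("example.com", 7777));     // assumed connect signature

        ByteBuffer buffer = ByteBuffer.allocateDirect(2048);             // Async IO requires direct buffers
        IAsyncFuture future = channel.read(buffer);                      // request returns immediately

        future.waitForCompletion();                                      // block until the read finishes
        System.out.println("Read completed, bytes: " + future.getByteCount()); // assumed accessor
        channel.close();
    }
}
```

The callback style replaces the blocking wait with an ICompletionListener whose futureCompleted(...) method is invoked by the package's result threads when the operation completes, as the article explains.

```java
// Sketch of the callback style (in the spirit of Listing 2); exact signatures are assumptions.
import java.nio.ByteBuffer;

import com.ibm.io.async.AsyncSocketChannel;
import com.ibm.io.async.IAsyncFuture;
import com.ibm.io.async.ICompletionListener;

public class CallbackStyleReader implements ICompletionListener {
    private final AsyncSocketChannel channel;
    private final ByteBuffer buffer = ByteBuffer.allocateDirect(2048);

    public CallbackStyleReader(AsyncSocketChannel connectedChannel) {
        this.channel = connectedChannel;
    }

    public void startRead(Object state) {
        IAsyncFuture future = channel.read(buffer);   // request the read; returns at once
        future.addCompletionListener(this, state);    // may call back immediately if already complete
    }

    // Called on one of the Async IO package's result threads when the operation completes.
    public void futureCompleted(IAsyncFuture future, Object state) {
        try {
            ByteBuffer filled = future.getBuffer();   // buffer involved in the completed operation
            // ... process the data, update 'state', possibly issue the next read ...
        } catch (Exception e) {
            // Result accessors can surface exceptions if the operation failed.
            e.printStackTrace();
        }
    }
}
```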
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982935857.56/warc/CC-MAIN-20160823200855-00011-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
32,472
116
https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/impact/relationships-between-actions-and-feeds
code
Users increasingly rely on social media feeds for consuming daily information. The items in a feed, such as news, questions, songs, etc., usually result from the complex interplay of a user’s social contacts, her interests and her actions on the platform. The relationship of the user’s own behavior and the received feed is often puzzling, and many users would like to have a clear explanation on why certain items were shown to them. Transparency and explainability are key concerns in the modern world of cognitive overload, filter bubbles, user tracking, and privacy risks. This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds. We model the user’s local neighborhood on the platform as an interaction graph, a form of heterogeneous information network constructed solely from information that is easily accessible to the concerned user. We posit that paths in this interaction graph connecting the user and her feed items can act as pertinent explanations for the user. These paths are scored with a learning-to-rank model that captures relevance and surprisal. User studies on two social platforms demonstrate the practical viability and user benefits of the FAIRY method.
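The core idea can be sketched in a few lines: model the user's neighborhood as a graph and treat paths from the user to a feed item as candidate explanations. Everything below is an illustrative toy, not the FAIRY implementation: the node names, edge types, and length-based scoring are made up, whereas FAIRY scores paths with a learned learning-to-rank model capturing relevance and surprisal.
import networkx as nx  # assumes the networkx package is available

# toy interaction graph: users, interests, and feed items as nodes
G = nx.Graph()
G.add_edge("user", "friend", kind="follows")
G.add_edge("friend", "feed_item", kind="liked")
G.add_edge("user", "music", kind="interested_in")
G.add_edge("music", "feed_item", kind="tagged_with")

# candidate explanations: simple paths connecting the user and the feed item
candidates = list(nx.all_simple_paths(G, source="user", target="feed_item", cutoff=3))

# placeholder ranking: prefer shorter paths (a learned model would rank these instead)
for path in sorted(candidates, key=len):
    print(" -> ".join(path))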
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297290384.96/warc/CC-MAIN-20240425063334-20240425093334-00305.warc.gz
CC-MAIN-2024-18
1,299
1
https://forum.freecodecamp.org/t/tribute-page-feedback-please-dr-grace-hopper/50653
code
I tried to go with a simple design. Two things I’m not happy with are that I wanted the background color to not go all the way to the edge of the page. Could not figure it out and all of my google searches came up nada. The second thing is that the bulleted list looks like it just doesn’t go with the rest of the page. Any feedback would be greatly appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657146845.98/warc/CC-MAIN-20200713194203-20200713224203-00364.warc.gz
CC-MAIN-2020-29
366
2
http://beta.slashdot.org/~theophilus00
code
Costa Rica May Criminalize VoIP For amateur radio operators in the US, it's illegal to receive or transmit international messages for a third party unless there exists an agreement between the US and the other country specifically allowing it. This includes patching (allowing a foreign operator to connect to a local US telephone network through your station). The reason is precisely as you stated - some governments do not wish to allow any mode of international communication which would compete with the established system (which they own or have a significant interest in). Kind of sucks for VoIP, but is nice for amateur radio because you don't have a whole bunch of people with no interest in proper radio operation simply using it as a way to get around telephone toll charges. I think the US regulations are different from those of the parent poster's country in that they generally apply only to third-party messages. Licensed amateur operators are allowed to have international conversations with other licensed amateurs without formal restriction.
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115868812.73/warc/CC-MAIN-20150124161108-00085-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
1,060
4
https://www.eviltester.com/blog/javafortesters/
code
Subscribe to the full blog feed using RSS Recent Posts for Java For Testers and Test Automation Blog TLDR; Getting started with programming is the hardest part. Installing the IDE, adding dependencies, writing your first test. Pick whichever language you have someone to help you with, or you have a tutorial to work through. Switching languages when you know one is not too hard so do not worry about being stuck with a language, focus on getting started. TLDR; Downloading a file with RestAssured is as simple as taking the body of a request as a byte array and writing it to a file. When automating I often have to download files. One very common FAQ for WebDriver is “How do I download a file with WebDriver?”. TLDR; Using Visual SVN, svnserve and local SVN repositories I was able to easily convert SVN to Git on Windows. I hit every error possible when converting SVN to GIT. I eventually figured out the simplest way to avoid errors during the conversion process. TLDR; Rather than migration your assertions line by line, create an abstraction class to represent the new implementation and then perform inline refactoring. I’m experimenting with migrating my projects to JUnit 5. Many of the “how to migrate to JUnit 5” blog posts show differences, but not a lot of strategies. I used a Branch By Abstraction strategy to migrate JUnit 4 Assertions. This allowed me to experiment with using JUnit5 assertions or AssertJ assertions. TLDR; Apply MVP principles when coding. Code to the API first. The API is internal before it is external. Unit tests with classes. In code testing with classes in combination. In code API testing. External HTTP API Testing. And then if necessary -In memory and process HTTP API testing. GUI. A long time ago, in a town which I no longer live in, I wrote a tool called Compendium-TA Commercially that was a disaster: it was self funded, it took a long time to write and I made some poor technology decisions. I learned MVP and API First Thinking the hard way. I’ll try and explain within. Older Posts for Java For Testers and Test Automation Blog - Overview of Spark and HTTP Testing with JUnit (2018-04-26) - When would I choose basic HTTP libraries rather than using RestAssured? (2018-04-25) - Migrating from JAXB XML processing to XStream (2018-04-24) - How to learn Java with Exploratory Programming (2018-04-08) - How to organise resource files for tests using Maven and Java (2017-12-07) - Simple ways to add and work with a `.jar` file in your local maven setup (2017-10-13) - How to Diff Java Code in IntelliJ - 3 ways to use the Compare Tool (2017-10-12) - Java 1.9 Reflection and Package Access Changes (2017-10-05) - Why does my code throw a null pointer exception? - common reason (2017-08-29) - Implementing PATCH Verbs with Gson Jaxb and Spark Framework (2017-07-06) - Architecting a Testable Web Service in Spark Framework (2017-07-05) - An introduction to Refactoring Java in IntelliJ using RestMud as an Example (2017-06-14) - JSoup Tip How to get raw element text with newlines in Java - Parsing HTML and XML with JSoup (2017-04-13) - Mistakes using Java main and examples of coding without main (2017-03-17) - Let's Code - Binary Chopifier - Just Enough Code and Tooling to Start (2016-12-05) - How to create and release a jar to maven central (2016-10-21) - How to fix IntelliJ issues by looking in the IntelliJ log (2016-09-27) - How to convert a breakpoint into a conditional breakpoint in IntelliJ (2016-08-08) - Is JUnit only for unit testing? What about system testing or UAT? 
(2016-08-02) - How to debug Java with IntelliJ: breakpoints, evaluate expression, watches and variable view (2016-07-19) - Does dependency between test execution imply lack of abstraction layers? (2016-06-02) - An example of creating a 'tool' using @Test methods without building a Java application (2016-04-14) - What is a Java `main` method - simple example (2016-03-22) - Using travis-ci.org for checking code on github (2015-07-17) - How to learn to code Java without using a 'main' method (2015-03-06) - Switching between Java versions on a Mac (2015-01-21) - FAQ: Should I use JUnit or TestNG, which is better? (2014-09-04) - FAQ: Why do I only see test that fail in IntelliJ and not the tests that pass? (2013-09-20) - Do "Enable Auto-Import" in IntelliJ for "Maven projects need to be imported" (2013-09-18) - Chapter on Date and Time added to Java For Testers (2013-09-17) - Maven Troubleshooting FAQs and Tips (2013-08-22) - Some Handy IntelliJ Code Completion Keyboard Short Cut Tips (2013-06-20) - How do I get started installing what I need to write Java? (2013-06-12) - Which IDE should you use for Java? (2013-06-07)
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510520.98/warc/CC-MAIN-20230929154432-20230929184432-00243.warc.gz
CC-MAIN-2023-40
4,683
49
https://community.unifiedremote.com/user/renato-games
code
Hi, has anyone of you tried the LibreOffice Impress Remote? For presentations it is 10x better than any other remote in UnifiedRemote. I'd like to build it myself but I don't know where to start. Key components are:
- The screen visualizes only the preview of the current slide
- Moving your finger over the image, a bright red dot shows up and is superimposed on the presentation (cursor = big red dot)
- Volume up and down go to the next and previous slide
I'd appreciate any pointers to begin building the above.
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647299.37/warc/CC-MAIN-20180320052712-20180320072712-00352.warc.gz
CC-MAIN-2018-13
520
5
http://morenews.blogspot.com/2005/04/jmock-invocation-order.html
code
After many fruitless Google searches I finally found an example (on the JMock site no less) of how to ensure invocation order of methods:
.id("warning level set");
.after("warning level set");
.after("warning level set")
"A rule of thumb to follow when specifying the expected order of method calls is: test the ordering of only those calls you want to occur in order. The example above allows the warn and getLoggingLevel methods to occur in any order, as long as they occur after the call to setLoggingLevel. Thus we can change the order in which our tested object calls warn and getLoggingLevel without breaking our tests."
So the .id("warning level set") call sets a property that the second and third calls check with .after("warning level set"). EasyMock makes it easy by allowing you to create a strict type of control object: "If you would like a "strict" Mock Object that checks the order of method calls, use MockControl.createStrictControl() to create its MockControl."
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593010.88/warc/CC-MAIN-20180722041752-20180722061752-00173.warc.gz
CC-MAIN-2018-30
981
7
http://www.562citylife.com/profiles/blogs/thanks-long-beach
code
It broke 84 degrees today. That's Thanksgiving in Southern California, and that's just one of the reasons we're thankful for living in Long Beach. It's been just about two years since we started 562CityLife to try and provide a platform for community building. It's been an experiment and a learning process, and we're thankful for every comment, picture, video, and connection that has been made via the site. Without 562CityLife we would have never made so many of the friends that we've made! So, thank you Long Beach, and here are a few more things we're thankful for!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635944/warc/CC-MAIN-20130516121715-00019-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
572
1
https://whiteblacksheep.wordpress.com/category/it/programming/html5/
code
Another interesting post by Html5Rocks: Behind the scenes of modern web browsers. Splashnology informs about some new jQuery Plugins for Web Developers. Read more about HTML5 forms input types outlined by HTML5 Doctor. Check out the video player plugin posted by HTML5 Ninja. Check out Animate.css… jlPlayer is a totally theme-able, customizable audio player, built upon the HTML5, jQuery, & jQuery UI frameworks.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863834.46/warc/CC-MAIN-20180620182802-20180620202802-00490.warc.gz
CC-MAIN-2018-26
413
6
http://forum.linuxmce.org/index.php?topic=2364.msg11192
code
I've finally decided to bite the bullet and build myself a LinuxMCE box. As much as I loved the demo with the gyro Fiire remote, $150 is WAYYY out of my budget. I was wondering, what are some good cheap remotes that you guys use that have great functionality and ease of use? I read around the forums, and a lot of the decisions come down to whether you would like a Follow Me button. I'm only going to be using this in one room, so I don't need it. Please recommend me some remotes. Right now I'm considering the 70 dollar Gyration mouse, but 70 dollars is still a bit pricey, and I was wondering if there were cheaper alternatives. Also, I don't necessarily need gyration, but that would be a plus.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118963.4/warc/CC-MAIN-20170423031158-00227-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
687
5
https://www.wfh.io/jobs/1596-front-end-developer-traveljunction-com
code
traveljunction's engineers develop the valuable assets in the business. Our guiding principle is that by helping our customers we help our business. As an Engineer in the Front End team you'll be a key part of the idea generation process, working in small teams that take full ownership of their part of our platform. You'll work side by side with designers, back end developers, product owners and copywriters to conceive, analyze, hand code & test your ideas. You'll be given the freedom to make meaningful and measurable improvements impacting all of our customers. Our ideal new team member has an excellent eye for detail, a pragmatic approach and an absolute commitment to making sure features are well implemented and bug free. We're a delivery focused team so you'd have the same mindset. You take pride in seeing a product you worked on meet the real world and our customers for the first time but you also know that is just the start of it all! If you believe you're calm under pressure, like challenges better than problems and beat challenges with solutions in a collaborative environment then we'd love to hear from you!
- Enjoy prototyping and working with the team to create products
- Continuously look for ways to improve traveljunction.com
- Take ownership of sections of traveljunction.com's desktop, tablet and mobile web sites
- Communicate calmly and effectively within your team
- Provide solid feedback and provide help to team members
- Develop applications from the ground up using Node.js / Angular
- Work with our various APIs to create products
- Work with version control and the command line on a day to day basis
- 3+ years of experience in a relevant role, preferably in a commercial environment
- Experience with templating languages
- Ability to write high-performance, reusable code
- Experience troubleshooting cross-browser compatibility issues
- Experience with data-driven product development: analytics, A/B testing, etc
- Excellent knowledge of version control systems
- Comfortable working on a command line
- Excellent English communication skills, both written and verbal
- Experience of working within an agile process
We believe in our people and if you're happy we know you'll be doing your best work.
- Competitive Salary
- Machine of your choice (whatever you're used to, either PC or Mac)
- Flexible / Remote Working
- 20 days holiday plus your birthday
- Discounted Hotel Bookings
- The Company shall be enrolling all employees into a new workplace pension scheme from October 1 2015
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648177.88/warc/CC-MAIN-20180323024544-20180323044544-00091.warc.gz
CC-MAIN-2018-13
2,850
32
https://oppositelock.kinja.com/continued-searching-1828981208
code
I seem to have begun the year+ long process I typically engage in when thinking about possibly acquiring a new primary vehicle... To that (continuing) end a (presently) hypothetical for you oppo: You may choose 1: Mazda CX-3 Touring AWD Hyundai Kona Limited AWD 1.6t Jeep Renegade Altitude AWD/manual 1.4t Which one? Why?
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129517.82/warc/CC-MAIN-20200712015556-20200712045556-00513.warc.gz
CC-MAIN-2020-29
321
6
https://www.bath.ac.uk/guides/get-proof-you-were-a-student-at-bath/
code
Confirming your study dates or award If you require a document which confirms the dates you studied at Bath and/or your award you should email Academic Registry ([email protected]). You should include the following information in your email: - your full name - your date of birth - the name of your course - the year you graduated - the number of copies you require - a copy of photo ID (e.g. driving licence/passport) If you need more detail about your study If you need to provide more detail about your course (for example, how the course was structured) contact the department that you studied in. You can also find more detail about structures of taught courses for all years from 1997/98 to the current academic year in our Programme & Unit Catalogues. If you studied here some years ago, and you're not sure which department to contact, email Academic Registry for further help. If you're currently a Bath student If you need to prove that you're currently a student at Bath read the guide Get proof you are a student. What to do if you need a copy of your certificate
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510334.9/warc/CC-MAIN-20230927235044-20230928025044-00035.warc.gz
CC-MAIN-2023-40
1,088
15
https://fr.coursera.org/learn/model-thinking/reviews?authMode=login&page=11
code
Nov 23, 2020 very informative and insightful on how to use the various models at our disposal to take optimal decisions. Also, it was nostalgic to go back in time where these models were part of my university degree. Feb 24, 2017 Great content and lectures, that possibly provides new dimensions to look at and explain the situation in context, I guess I will come back for references to continue with this journey into 'Model Thinking' by Juan P L R • June 24, 2018 Very good course. Complete, useful and practical. by Robert L • March 25, 2022 Absolutely great course. Bought the book also. by David A • Nov 13, 2019 Really nice introduction to a variety of models. by Dennis M • Apr 7, 2021 One of the best courses I have ever undertaken. by Jonas P • Sept 26, 2020 Nice course that teaches us to think for ourselves by Norman P • Aug 22, 2020 Great course. Well done. Practical application. by Zeynep T • Oct 12, 2017 Quite interesting topic. Well-organized notes. by Elke W • March 24, 2016 Great introductory course to expand your mind. by Zach D • Apr 4, 2020 Really enjoyed the professor and the content. by Tony N • Feb 16, 2020 Probably the best online course I have taken! by Li M • Oct 30, 2019 This course is very informative and inspiring by Michal P • Sept 6, 2017 Very engaging and super fun - just amazing!!! by Nadim M • May 22, 2020 Really a great course and a great professor! by Alfa T • Nov 16, 2020 Amazing topics, very useful in daily life! by Peter V • May 6, 2020 Excellent course. Would strongly recommend. by Ning C • Nov 6, 2016 It's an awesome course! Learned a lot here. by Ryan M F • Nov 3, 2015 Haven't done it yet but I love it. Vamanos. Oct 14, 2015 Excellent teaching and material by Channa H • May 15, 2022 Learnt and enjoyed very much, thank you!! by TANYA S • March 23, 2022 it was a great learning experience here Jan 11, 2020 The teacher is so kind, patient and wise. by Hamidreza M A • Feb 23, 2016 A great job that professor Page is doing. by Parag B D • Feb 4, 2022 Excellent course to change your thinking by Angelos S • July 24, 2020 Best online course I have ever attended. by Zephyr L • Sept 4, 2017 Great coverage on how to model a system!
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662572800.59/warc/CC-MAIN-20220524110236-20220524140236-00534.warc.gz
CC-MAIN-2022-21
2,222
77
https://phpgrid.uservoice.com/knowledgebase/articles/936021-phpgrid-laravel-5-integration-class-c-datagrid
code
I have tried both methods to integrate phpGrid with Laravel, but I have problems. With the latest improved method I am getting this error, even though I followed all the instructions step by step with no success:
FatalErrorException in DashboardController.php line 9: Class 'C_DataGrid' not found
It's because since version 6.6 phpGrid supports PHP namespaces, so the root namespace "\" will need to be updated. In DashboardController.php, change the first line that calls the phpGrid constructor to:
$dg = new \phpGrid\C_DataGrid("SELECT * FROM orders", "orderNumber", "orders");
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511075.63/warc/CC-MAIN-20231003092549-20231003122549-00268.warc.gz
CC-MAIN-2023-40
572
6
http://www.taeyongpark.com/
code
Taeyong Park Welcome! I am a visiting assistant professor and associate director of Statistical Consulting Center at Carnegie Mellon University in Qatar. I received my Ph.D. in Political Science from Washington University in St. Louis in 2017. My research interests include quantitative methods (Building longitudinal Google search data to measure dynamic local-level public attention; Developing Bayesian statistical methods for time series modeling and causal inference) and American politics and policy (Local economic voting; Mass shootings and voter turnout; Electoral context and campaign strategy; Spatial policy dependence). I teach a range of applied statistics and data science courses.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066568.16/warc/CC-MAIN-20210412023359-20210412053359-00179.warc.gz
CC-MAIN-2021-17
696
1
https://softcrayons.com/python-selenium
code
Selenium can be described as a valuable collection of tools that help speed automated testing for web-based applications. It comes with various Python Selenium Training Courses in Ghaziabad that have been specifically tailored to the requirements for testing web-based applications. These functions are reliable, allowing different options for positioning elements of the user interface and analyzing expected results against the actual application behaviour. It utilizes a variety of scripting languages for automated testing. Python is a high-level and object-based scripting language developed in an easy-to-use manner. It is essential for English keywords that require simple understanding. There are not many syntax issues compared to other languages used for programming. Simple to code and simple to read It dashes while analyzing another program It has a lively typing style. Many programmers are comfortable with Python as a programming language. The API that is used in Python assists in connecting to your browser using Selenium. The binding between Python and Selenium gives you a straightforward API for writing functional tests sensitively using Selenium WebDriver. Furthermore, it also provides an easy API to connect with Selenium WebDriver such as Firefox, Remote, etc. The Python language is simple and has less verbose syntax when compared to other programming languages. This is due to the Python API, which allows users to connect to the browser using Selenium. Selenium can send the standard Python commands to different browsers, regardless of the application's layout and layout variations. Python is a scripted language, and there's no concern about running an engine to convert code from lines of code into anything that could be utilized and implemented. Python is swift and utilizes indentation to start and end blocks. It is elementary and compact in comparison to different programming languages. Python language is built by a large community with great support. Therefore, during automated testing with Selenium using Python Certification Training Ghaziabad, the community functions as an inviting wagon for people who are unfamiliar with the language. The programming language is free and accessible as open source. Anyone who needs it can download it and utilize it in any environment. The entire culture and communities based around the language are accessible to software enthusiasts. Alongside all the other reasons to use Selenium in conjunction with Python, another primary reason to use Selenium with Python is the variety of tools one can use to extend its functionality. The primary device for user interfaces with ease is WebDriver. WebDriver is a strong binding for Python. So, these are several reasons why Selenium predominantly uses Python for scripting. However, numerous other languages also have similar capabilities. So, why would you prefer to work on Selenium using Python instead of different programming languages? It has much fewer codes than other programming languages. It has a similar English syntax, which makes it readable to humans. Additionally, it is simple to master and comprehend because of its simpler syntax. Python is a free, open-source programming language with various frameworks and libraries. Selenium is an open-source software tool used to test applications on the web. Although it may appear like QTP, Selenium focuses solely on testing web-based applications instead of QTP and supports desktop-based application testing. 
Selenium can be used with multiple languages, and Python is among them. Integrating Python Selenium Course Training allows it to communicate with the browser by sending keys and getting the data. Python allows multiple browsers and will enable us to write our program based on requirements. What are the reasons to choose Python that comes with Selenium? Compared to other languages or Software Testing Course Training in Ghaziabad, Python requires less time to execute the script and finish the execution. Python uses indentation, not braces (), making it simple to follow the code flow. Selenium is more than an instrument but a software suite, each catering to a company's different testing requirements. It comprises four elements: The first step is to go to the directory where Python is installed. Use the Tool that installs the Tool to set up the Selenium WebDriver package. Install and utilize popular Python tools for development. We can find the object by incorporating Firebug and Fire path extensions. Understanding the communications between the various components of Selenium is crucial before diving into Selenium using the Python Training Course Ghaziabad. Selenium WebDriver APIs are utilized to connect the programming languages and web browsers. As mentioned, Developers can use Selenium to perform automated testing of common programming languages. Selenium Client Libraries or Selenium Language Bindings make multi-language support available in Selenium. The Selenium Python tutorial's primary focus is using Selenium together with Python. Therefore, we will require bindings to languages for Python Selenium Training Certification Ghaziabad. The language drivers that support programming languages such as Python can be downloaded through the official Selenium website to download Client libraries. It is a REST (Representational State Transfer) API that facilitates the transfer of information between the HTTP Server, the HTTP Server, and the client. Every browser on the Internet, including such as Chrome, Firefox, Internet Explorer and others. Each comes with its driver for browsers (or HTTP Servers). The browser drivers are responsible for communicating with the web browser. Each browser has its browser driver, which must be installed on the computer that will host the automated tests to be carried out. Since communication with the internet browser occurs through the browser driver and not the browser's internal mechanism, the logic behind the browser isn't revealed. The browser driver is the required degree of abstraction for interactions with browsers. Selenium is compatible with popular browsers like Chrome, Firefox, Internet Explorer, Microsoft Edge, etc. Selenium is not a suitable framework for browsers for which the browser driver isn't available. Selenium, the IDE, is a well-known instrument for recording and playback testing. It was initially available as a Firefox plugin; however, now Selenium IDE is also known as an add-on for Chrome. The most current version of Selenium IDE includes a command-line utility (SIDE Runner), which allows you to execute the side project on the Node.js platform. Selenium WebDriver is required if you intend to build tests using Selenium IDE. Selenium RC was regarded as the primary component in Selenium until Selenium WebDriver replaced it in Selenium v2. Selenium RC was widely appreciated for its ability to circumvent the policy of the exact origin that caused significant issues when testing web automation. 
One implemented the policy of the precise origin to ensure security and that the contents of a web page were not accessible to scripts from a different domain (or website). Selenium RC Server, an HTTP proxy server, was created to break the policy of using the exact origin. Therefore, Selenium RC comprises the Selenium Client and Selenium RC Server. In the area of Software Testing Course Training, Python Selenium has become very well-known for all the right reasons, and the future of the Tool appears to be secure. It is declaring this because of the benefits we gain when using Selenium, which is found in none of the other tools. Since Selenium is an open-source framework, one can utilize it at no cost. Selenium supports various programming languages like Java, Ruby, Python, C#, and PHP. When it comes to supporting the OS, Selenium can support every operating system, such as Windows, Linux, and Mac. Selenium is a powerful tool for automation, and Python is rapidly growing as a straightforward language. Selenium combined with Python is set to have the best future thanks to the easy syntax of Python and Selenium commands. Software Testing Industrial Training NoidaPrepare for the IT industry with our Software Testing Industrial Training in Noida. Gain real-world experience and kickstart your career. Implicit and Explicit Waits Fluent Wait Techniques Handling Ajax Calls
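As a concrete complement to the description above, here is a minimal Python plus Selenium sketch that opens a page, uses an explicit wait for an element, and cleans up afterwards. The URL, locators, and credentials are placeholders, and a matching browser driver (e.g. chromedriver) is assumed to be installed; it illustrates the API style rather than reproducing any course material.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
try:
    driver.get("https://example.com/login")  # placeholder URL
    # explicit wait: poll for up to 10 seconds until the element is present
    username = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.NAME, "username"))  # placeholder locator
    )
    username.send_keys("test-user")
    driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()
    # another explicit wait acting as a simple post-login check
    WebDriverWait(driver, 10).until(EC.title_contains("Dashboard"))
finally:
    driver.quit()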
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817790.98/warc/CC-MAIN-20240421163736-20240421193736-00386.warc.gz
CC-MAIN-2024-18
8,387
73
http://www.aicongallery.com/news/can-art-foretell-the-future-of-humanity
code
Written by Somak Ghoshal In artist Avishek Sen’s work, humans, animals and vegetation are often fused by a unique alchemy. A figure with a lion’s head and ripped male torso poses in front of a bathtub, out of which an oversized dissected fruit and a tiger-riding deity peek out. A panel of images depicts a cheetah clutching a larger-than-life banana in various postures. The peeled fruit seems to be attached to the animal’s body, like a stand-in for a giant phallus. When Sen paints the cross-section of a pumpkin, melon or jackfruit, he opens up a brave new world, a landscape of unseen possibilities.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103516990.28/warc/CC-MAIN-20220628111602-20220628141602-00066.warc.gz
CC-MAIN-2022-27
610
2
https://rdrr.io/github/michaelwitting/wormLipidPredictR/man/create_reactions.html
code
View source: R/create_reaction.R
create_reactions will create reactions following a given reaction order and will return a list containing one entry for each reaction type (RHEA id). Each reaction entry is again a list: the first entry will contain a list of the products of the reaction, the second entry will contain the filled template with the list containing the substrates for the reaction, e.g. the initial substrate for the first reaction.
The reactions argument is a data.frame containing the reaction order (in the column reactions); it has to contain the column RHEA. It may in addition contain additional columns such as isReversible. These columns will be ignored by create_reactions but might be of use to the user. RHEA will contain the ids that are used for matching the reaction type.
Value: a list containing the reactions.
Author: Thomas Naake, [email protected]
FA <- c("FA(14:0(12Me))", "FA(16:0(14Me))", "FA(15:1(9Z)(14Me))",
    "FA(17:0(16Me))", "FA(12:0(11Me))", "FA(13:0(12Me))", "FA(14:0(13Me))",
    "FA(15:0(14Me))", "FA(16:0(15Me))", "FA(12:0)", "FA(14:0)")
## create data.frame with reactions and reaction order
reactions <- rbind(
    c(1, "RHEA:15421", "M_ATP + M_CoA + M_FA <=> M_PPi + M_AMP + M_AcylCoA", FALSE),
    c(2, "RHEA:15325", "M_Glycerol-3-P + M_AcylCoA <=> M_CoA + M_LPA", FALSE),
    c(3, "RHEA:19709", "M_AcylCoA + M_LPA <=> M_CoA + M_PA", FALSE),
    c(4, "RHEA:27429", "M_H2O + M_PA <=> M_Pi + M_1,2-DG", FALSE)
)
reactions <- data.frame(order = reactions[, 1], RHEA = reactions[, 2],
    reactions = reactions[, 3], directed = reactions[, 4])
reactions$order <- as.numeric(reactions$order)
reactions$directed <- as.logical(reactions$directed)
## run the function
create_reactions(substrates = list(FA = FA), reactions = reactions)
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00587.warc.gz
CC-MAIN-2023-14
1,833
23
https://www.splunk.com/en_us/blog/tips-and-tricks/splunking-continuous-rest-data.html
code
One of the ways vendors expose machine data is via REST. There are a couple of ways to get REST data into Splunk today: - Use Damien Dallimore’s REST API Modular Input – you can provide a custom response handler for this input to persist state. - Use the new Splunk Add-on Builder – this method will do a “one shot” of the REST endpoint – meaning, every time the input runs, it will get all the data every time. In this post, I will show you how to implement a cursor mechanism (i.e. pick up where you left off last time) for REST endpoints that continually have new data over time using the checkpoint mechanism built into modular inputs. The Data Source For this example, we will ingest JSON data from a tumblr blog – http://ponidoodles.tumblr.com. I chose this as an example because the v1 REST endpoint in Tumblr is open and easy to use for an example (no authentication required). Plus, this one it is about ponies. The API documentation and parameters can be found here https://www.tumblr.com/docs/en/api/v1 We will use 2 of the available parameters: - start – this is the post offset to start pulling posts - num – this specifies the number of posts to pull. Getting the Data in Following is the pseudo-code we will use to get the data: - Get the starting position from a checkpoint - If there is no checkpoint, set the starting position to 0 - Pull up to 5 posts from the endpoint starting at the starting position - Count the number of posts read - Stream each post to Splunk - Add the number of posts read to the starting position - Save the new starting position (in the first case, the new starting position will be 5) To keep the code concise, we will use the Splunk Python SDK to create a modular input. In the Splunk Python SDK, all the magic happens in the stream_events method. In order to implement the checkpoint mechanism based on the pseudo code above, I stole borrowed some code from the Splunk Add-on builder to abstract the check pointing mechanics. Here is an actual code snippet: The complete code can be found on GitHub. Note: The method we used here for saving a checkpoint is very basic (i.e. counting the number of posts) and may not apply to your situation. Sometimes, the REST data may give you a continuation token and something like the following may be necessary: if "nextLink" in jsonValue: state_store.update_state("nextLink ", jsonValue[“nextLink”]) Microsoft Azure Audit does this for instance. Testing the Input A nice way to test you input prior to using it in your Splunk environment is to use the Splunk CLI. First copy the contents from the GitHub repo above to your $SPLUNK_HOME/etc/apps folder. Next, execute the following (Splunk is installed in /opt/splunk in this case): /opt/splunk/bin/splunk cmd splunkd print-modinput-config splunk_rest_example splunk_rest_example://RESTTest | /opt/splunk/bin/splunk cmd python /opt/splunk/etc/apps/TA_rest-example/bin/splunk_rest_example.py The checkpoint location is There will be a file in there called last_position that gets updated on each run. Open it up with a text editor to see for yourself. Clearing Input Data If you want to reset the checkpoint file, run the following command: /opt/splunk/bin/splunk clean inputdata splunk_rest_example Note: you can also clean eventdata to remove indexed data. For testing purposes, I usually write events to a staging index (this is done via inputs.conf) and clean that index as needed. All the code and examples above were run on a single Splunk instance. 
If you plan on using these techniques in a distributed deployment, the recommended architecture is to run the input on a heavy forwarder. For more information about where to install add-ons in a distributed deployment, check the Splunk documentation.
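A rough sketch of the checkpoint pattern described in the pseudo-code above, written against the Splunk Python SDK's modular input interface: the class name, the plain-file handling of last_position, and the simplified Tumblr response parsing are illustrative assumptions rather than the original snippet (the real v1 API wraps its JSON in a JavaScript variable, and the borrowed Add-on Builder helper is not shown).
import json
import os
import sys
import urllib.request

from splunklib.modularinput import Script, Scheme, Event

class RestExample(Script):
    def get_scheme(self):
        return Scheme("splunk_rest_example")

    def stream_events(self, inputs, ew):
        checkpoint_dir = inputs.metadata["checkpoint_dir"]
        checkpoint_file = os.path.join(checkpoint_dir, "last_position")

        # 1-2. get the starting position from the checkpoint, defaulting to 0
        start = 0
        if os.path.exists(checkpoint_file):
            with open(checkpoint_file) as f:
                start = int(f.read().strip() or 0)

        # 3. pull up to 5 posts from the endpoint starting at that position (simplified fetch)
        url = "http://ponidoodles.tumblr.com/api/read/json?start=%d&num=5" % start
        raw = urllib.request.urlopen(url).read().decode("utf-8")
        posts = json.loads(raw).get("posts", [])  # assumes a plain JSON body for brevity

        # 4-5. count the posts read and stream each one to Splunk
        for post in posts:
            ew.write_event(Event(data=json.dumps(post)))

        # 6-7. add the number of posts read and save the new starting position
        with open(checkpoint_file, "w") as f:
            f.write(str(start + len(posts)))

if __name__ == "__main__":
    sys.exit(RestExample().run(sys.argv))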
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00770.warc.gz
CC-MAIN-2023-14
3,768
37
https://community.spiceworks.com/topic/1830825-detecting-multiple-short-duration-outages
code
We have an MR32 that seems to be going up and down a lot. I happened to notice an unexpected red notification icon when looking at the Overview. When it goes down it is for under 4 minutes, and when it is up it is for less than 15 minutes. I'm wondering if there is a way to see if other APs are having similar issues without having to click on each one and drill down to the Monitor Access Point page for each of our networks. The email notification threshold is set at 5 minutes.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00064.warc.gz
CC-MAIN-2021-49
481
4
https://ivandemes.com/my-top-3-favorite-informative-uem-community-forum-discussions-august-september/
code
My Top 3 Favorite/Informative UEM Community Forum Discussions – August/September In this blog post I want to share my top 3 favorite/informative UEM Community Forum discussions of August/September 2017. #3 – Methods to Capture Registry and Profile Changes – Tools that can be used to monitor registry/file/folder changes. #2 – Windows App Dissapear after logoff (Windows 10) – Known issue with Windows 10 and “remove local profile at logoff”. #1 – Acceptable Horizon Desktop Logon Time – Reduce the amount of time for showing the Welcome Screen. 3 – Methods to Capture Registry and Profile Changes Key takeaway: Tools that can be used to monitor registry/file/folder changes and help you choosing the correct content for your INI files. Forum user mrstorey303 asked: Can anyone explain the easiest way to identify registry settings / profile changes for settings that can’t be captured using the application profiler? For example – I know I can use the app profiler to monitor an application executable and capture the config changes I make within the app, but how can I easily identify things like Windows OS changes – changes like ‘enable file name extensions’ and ‘hidden files’ in Windows Explorer? (BTW I have tried monitoring explorer.exe with app profiler, but the profiling session immediately terminates – so I suspect it’s not designed for processes like that). Perhaps I can google around for these settings, but I just wondered if there were some good tools or processes you are using to capture changes like these, so I can configure them within UEM. There are several tools you can use. I always use a combination of the following. 2 – Windows App Dissapear after logoff (Windows 10) Key takeaway: Do not use “remove local profile at logoff” (for now ;-)) when using Windows 10 Forum user ErikVb84 reported: We are currently creating a windows 10 (1703) image and are testing the workings of all the features and applications. The environment uses local profiles and VMWare UEM version (22.214.171.1241) and we use the Advance UEM GPO setting to remove the profile after logoff. Part of Windows 10 are the apps from the Store which are visible in the start menu. We are now seeing that after the user has logged on twice on the same machine all the apps disappear from the start menu and will not return. The problem persists for the user on that machine unless two things happen: - the user profile is deleted via advanced system settings in Windows; - Machine is reinstalled. To test we setup VMware UEM without any configuration (no shortcuts no Config file, no conditionsets, nothing.) and have only applied our GPO settings(see GPO settings below). When UEM is turned off (no config set) the problem does not occur. When we turned on the test config the problem would occur after two logons. The first suspect was the local profile deletion so we turned this feature off and tried again and as expected the problem did not occur anymore. Our guess of the cause? We are unable to find a solution for this issue at it seems it might be a timing issue or maybe 1703 changes the way it stores user app information and the way UEM does the deletetion creates some sort of lock on this new method. Does anyone have a similar issue or is there a solution out there for this problem. Pim_van_de_Vis his answer: This is a know issue with Windows 10 and the ‘remove local profile at logoff’ option. Windows leaves user SID information behind at this location: When you manually remove the user SID key there, the issue is gone. 
We are trying to fix this in the future. 1 – Acceptable Horizon Desktop Logon Time Key takeaway: Reduce the amount of time for showing the Welcome Screen and improve user logon time Although this is an older thread, a nice addition was done by JohnTwilley: That pesky “Welcome Screen” !! One of my favorite tweaks is DelayedDesktopSwitchTimeout This is a nice setting to play around with, to show the User Desktop is little sooner than normal. Some people set it to 0. I liked setting it to 2. See what works best for you…
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00731.warc.gz
CC-MAIN-2024-10
4,115
34
https://windowcleaner.com/products/maykker-python-sleeve-complete
code
Maykker Python Sleeve Complete Maykker Python Sleeve Complete About the Maykker Python Sleeve Complete What is the worst part about scrubber sleeves? When they constantly move around on your t-bar while you try to work. Guess what? Maykker has fixed that! This unique, revolutionary setup uses a three-part system. Now you might say that seems excessive, but hear us out. Part 1 - Maykker Swivel T-Bar Obviously, you can't completely re-invent the wheel. A t-bar is still the backbone of this system. It is durable, lightweight, and features a swivel head so you can angle it just right. It has a plastic handle and an aluminum bar. A solid choice for any window cleaner whether you work by hand or with an extension pole. Part 2 - Maykker Handy Sleeve Here's where it starts to get interesting. Even though the Maykker Handy Sleeve has "sleeve" in the name, this isn't what will be touching the glass. This is meant to fit on your t-bar super snuggly. You may even need a screwdriver to help you fit it on the first time it is that tight. Snaps keep it in place and once it's on, this sleeve won't move or wiggle. You can use this for other attachments as well like a bronze wool pad, white scrub pad, or even a towel so it is a great multi-purpose tool. Part 3 - Maykker Python Sleeve This is the game-changer. Instead of a regular sleeve that just slides over a t-bar and is expected to stay in place with a little piece of elastic or velcro, it latches on to the Maykker Handy Sleeve. Line it up and the teeth from the Handy Sleeve grip on for a tight hold. Now you can scrub without any movement, and really get some action right where you want it on the glass. It might feel odd at first, suddenly having so much control, but once you get used to it, you're gonna love it. These sleeves are also super easy to care for and store. Throw it in the machine and let it air dry. Then stack them flat, fold them, or roll them up in your toolbox to always have on hand. The durable, absorbent microfibers readily suck up your window cleaning solution so you can work on glass. Part 4 - Optional End Scrubbers Want more scrubbing power for your t-bar? Select the end scrub pads that will work best for you. The EasyScrub pads from Maykker fit between the Handy Sleeve and the Python Sleeve. When they get worn out, or you just don't want them for a particular job, it's super easy to just pull up the Python sleeve and remove them. The extra set of two lets you be stocked up for the future. Choose the bronze wool for an abrasive that won't rust or scratch. The white pads are the finest abrasive and the blue pads are the most aggressive. Without built-in scrubbers like traditional sleeves, you won't have to worry about ends blowing out. So what are you waiting for? Get your hands on this Maykker Python setup and never go back to traditional sleeves and t-bars. - 3 Part System - The sleeve won't move while you work - Multi-Purpose Tool - Optional End Scrubbers COMPLETE STRIPWASHER INCLUDES: - Maykker Swivel T-Bar x 1 - Maykker Handy Sleeve x 1 - Maykker Python Sleeve x 1 QUESTIONS & ANSWERS Have a Question? Be the first to ask a question about this.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945472.93/warc/CC-MAIN-20230326111045-20230326141045-00077.warc.gz
CC-MAIN-2023-14
3,159
24
https://astrogeo.org/malo/
code
MALO — MAss LOading computation software
The Earth as a whole responds to external forces as an elastic body. Changes in the mass of the atmosphere and hydrosphere cause crustal deformations of the order of 1 cm. Deformations caused by loading should be taken into account in the reduction of astronomical and space geodesy observations when accuracy better than 2 nrad or 1 cm is required. The MALO software contains a number of routines that establish the infrastructure for processing the output of numerical models that describe mass changes and for computing mass loading deformations.
License: GNU Public License.
The latest version of the source code is
Additional tar ball with data:
Read README file and
This web page was prepared by Leonid Petrov. Last update: 2023.11.18_09:59:37
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816024.45/warc/CC-MAIN-20240412132154-20240412162154-00241.warc.gz
CC-MAIN-2024-18
840
17
https://www.tableau.com/fr-fr/learn/webinars/building-advanced-analytics-applications-r-and-python-integration
code
On-demand webinar
Building Advanced Analytics Applications with R and Python Integration
Access the webinar resources, including the Jupyter workbook here. At Tableau we help people see and understand data. Seven words that drive everything we do. And they've never been more relevant. Tableau is all about making your analytics faster, smarter, and more powerful, so that everyone can get the answers they need. Helping people gain insight into their data to solve unexpected problems is what drives us. Tableau is a visual analytics and reporting solution that connects directly to R, Python, and more. It's designed for you, the domain expert who understands the data. Its drag-and-drop interface allows you to effortlessly connect to libraries and packages, import saved models, or write new ones directly into calculations, visualizing them in seconds. In this webinar, we will explore how various analytics partners are leveraged in Tableau, and how to take advantage of these integrations to move your analysis to the next level. Whether you work with R, Python, or other statistical or data mining environments, Tableau allows you to take advantage of your existing investments and knowledge to compose impactful data stories. Learn more about our Advanced Analytics for Hardcore Data People webinar series.
About the speaker
Video download links: MP4 (right-click to save the file)
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943555.25/warc/CC-MAIN-20230320175948-20230320205948-00717.warc.gz
CC-MAIN-2023-14
1,447
10
http://forum.opencarry.org/forums/showthread.php?120958-Open-Wireless-Movement-Open-Garden-launches-invite-to-chat-off-the-grid
code
Color me skeptical.....nothing is ever free. Anyway, I'm not an "urban" kind of guy. Mandating that I "assist" the authorities, in an emergency, if I have one of these "open networks" is anti-liberty. Who really owns the network once it (and eventually all of them) is some day considered vital and classified as a utility? Nope, I'll freeload when the opportunity arises.
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719908.93/warc/CC-MAIN-20161020183839-00429-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
371
4
https://docs.microsoft.com/en-us/archive/blogs/permanenttan/autolayout-by-examples-part-3-dialog-8
code
AutoLayout by Examples - Part 3 - Dialog 8
Dialog 8: Custom control inside a dialog
Project Name: Channel.csproj. Download source.
- Multi-channel custom control
- Overlapping containers
Dialog Designer Layout:
Dialog Document Outline:
Custom Control Designer Layout:
Custom Control Document Outline:
Custom Control Key Notes:
- Identically placed and sized channel containers are panels (panelChannel1 to panelChannel4) located inside an outer panel (channelContainer).
- The "channels" (i.e. the contents on the right) are shown or hidden by toggling the respective channel panel's Visible property. Download source.

/// <summary>
/// Switch to the specified channel
/// </summary>
/// <param name="channel">1, 2, 3, 4</param>
private void SwitchTo(int channel)
{
    // Check the selected channel's button and show only that channel's panel.
    buttonChannel1.Checked = panelChannel1.Visible = (channel == 1);
    buttonChannel2.Checked = panelChannel2.Visible = (channel == 2);
    buttonChannel3.Checked = panelChannel3.Visible = (channel == 3);
    buttonChannel4.Checked = panelChannel4.Visible = (channel == 4);
}

- The channel contents are laid out inside each channel's own TableLayoutPanel (channel1LayoutPanel to channel4LayoutPanel).
Tricks and Tips:
- Identically placed and sized overlapping panels are quite difficult to select in the designer. Additionally, a selected panel has to be brought to the foreground before its contents can be edited. Use the Document Outline to select the desired panel and bring it to the foreground. Here are the steps to select and bring a channel panel to the foreground:
- View > Other Windows > Document Outline. From the Document Outline window, highlight an item in the document outline to select the corresponding control in the designer and to show its property values in the Properties window.
- Expand the outer channel container (channelContainer).
- To select the channel 4 container, click panelChannel4 and drag it to right below the channelContainer item. The order of items/nodes in the document tree defines the corresponding control's Z-order.
- Another way to bring a hidden or occluded panel to the foreground is to first select the control using the Document Outline and then, in the designer, right-click on the displayed sizing handles (i.e. little squares) to drop down the context menu and select Bring to Front. Here are the steps:
- In the Document Outline, select panelChannel4. In the form designer, panel panelChannel4 should be selected. Notice that even though panelChannel4 is selected, it is not in the foreground.
- In the form designer, right-click on one of the resize handles and select Bring to Front. In the screenshot below, right-clicking on the top-center resize square drops down the context menu.
- New user controls like this custom control are listed at the top of the Toolbox. Select the dialog form in the designer to see this list in the Toolbox.
This posting is provided "AS IS" with no warranties, and confers no rights. Use of included script samples is subject to the terms specified at http://www.microsoft.com/info/cpyright.htm
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201601.26/warc/CC-MAIN-20200921081428-20200921111428-00024.warc.gz
CC-MAIN-2020-40
3,065
32
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94657
code
Created attachment 48305 [details]
Initially reported by Agostino Sarubbo as https://bugs.gentoo.org/718004. There Gentoo builds gcc-9.3.0 as: and observes 'ar' being called by libcpp. It should be 'x86_64-pc-linux-gnu-ar' instead. libcpp/Makefile.in hardcodes 'AR = ar' (but does not hardcode RANLIB, for example: 'RANLIB = @RANLIB@'). A few other tools do it:
$ git grep 'AR = ar' | cat
gcc/ada/gcc-interface/Makefile.in:AR = ar
intl/Makefile.in:AR = ar
libcpp/Makefile.in:AR = ar
libdecnumber/Makefile.in:AR = ar
but only libcpp refers to 'ar' and fails the build when 'ar' does not exist:
ar cru libcpp.a charset.o directives.o directives-only.o errors.o expr.o files.o identifiers.o init.o lex.o line-map.o macro.o mkdeps.o pch.o symtab.o traditional.o
/bin/bash: ar: command not found
The attached patch adds AR detection similar to the other libraries. But maybe it should be set in the top-level Makefile?
please submit the patch to the gcc-patches mailing list for review
Sent as https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544379.html
(In reply to Sergei Trofimovich from comment #2)
> Sent as https://gcc.gnu.org/pipermail/gcc-patches/2020-April/544379.html
thanks, adding the "patch" keyword
On #gcc Tobias pointed out that a similar patch was merged a few days ago: https://gcc.gnu.org/git/gitweb.cgi?p=gcc.git;h=731c4ce0e93065fb70db5faa2bd6c9c6bad56738
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00402.warc.gz
CC-MAIN-2021-31
1,353
22
https://wordpress.stackexchange.com/questions/366946/why-i-dont-see-all-my-post-in-feed-in-wordpress?noredirect=1
code
I have a question. I want to get all my posts, but when I visit http://example.com/feed/atom/ I get only three posts. On my website there are many more than three. How do I update this feed?
It depends, actually. Have you got a custom-written template, or do you use a pre-programmed one? If you use a ready-made template, I assume that the problem lies in the settings. Then @made2popupar's answer should solve it. If not, I'm afraid that we don't have enough information to help. You should post what your post-searching loop looks like.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816832.57/warc/CC-MAIN-20240413180040-20240413210040-00014.warc.gz
CC-MAIN-2024-18
535
2
https://www.capitalgroup.com/ria/insider/registration
code
Who are you?
RETIREMENT PLAN INVESTOR
Use your plan ID (available on your account statement) to determine which employer-sponsored retirement plan website to use:
With RIA Insider you'll have access to curated insights, a community of peers and thought leaders, and tools and resources designed to optimize your RIA practice.* Submit this registration form for RIA Insider and Advisor site access. Upgrade your existing account with a new RIA Insider login. (Your new RIA login will overwrite your previous Advisor site credentials.)
*An existing CG Advisor account is not required.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00801.warc.gz
CC-MAIN-2023-50
580
6
https://www.etc.cmu.edu/projects/serenity/
code
Download Samsara from the iOS App Store / Google Play Store or play it on the web:
Serenity is a motivated team of engineers and artists working on an ETC faculty-pitched project with Jiyoung Lee. The goal of this project is to design and develop an expressive artistic experience that addresses the serious issue of bullying. This is an unsolved social problem that results in both emotional and physical trauma for its victims. Bullying is a problem that is rooted in negativity, so the philosophy of team Serenity is to create something positive. We created a game called Samsara, which is an atmospheric, vertical-scrolling action/adventure game where players guide a seed as it falls from the top of the trees to the beautiful forest floor. Here is a short video. The game is still in development, so we will keep updating it until the end.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817171.53/warc/CC-MAIN-20240417173445-20240417203445-00098.warc.gz
CC-MAIN-2024-18
832
5
https://fridayafternoons.co.uk/parlour-games/mad-dash
code
A cooperative game played in a video call where the players rush to all find the same item in their home. Tried and tested by Friday Afternoons
- A video calling platform (e.g. WhatsApp, Facebook Messenger, Zoom)
- A device for each player from which they can join the video call
- A house full of typical household items for each player
The aim of Mad Dash is for everyone to bring the same household item onto a video call at the same time, without discussing the items that they'll be collecting. The game is played in rounds called "runs", which finish when the group succeeds in all collecting the same item. Players must not discuss the item they will be collecting either during or in between runs; the purpose of the game is to try to guess, sense or deduce what the other players are going to collect based on the results of the previous run.
A run consists of the following:
- Each player leaving their video call screen
- Each player collecting an item from somewhere in their home (everyone must move into another room at least once, even if they're collecting an item they've already used or that is currently in the same room they're calling from)
- Each player returning to their video call screen with the collected item visible
Under the standard rules, runs continue until the group succeeds, and the aim is to complete the challenge as quickly as possible. The variations below change this so that it's possible to lose the game.
This version of the game requires the group to succeed within a set number of runs. The limit is equal to the number of players, so a group of five must succeed within five runs. With no emphasis on time, the group can take more time to think about what item to bring back.
Mad Dash Teams
If there are enough players on the call (ideally 8+ and an even number), you can play competitively in two teams, where the first team to all collect the same item wins. When playing in teams, it's best to have a clear way to distinguish which team each player belongs to (e.g. shirt colour, on-screen mascots, boys vs. girls).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662545090.44/warc/CC-MAIN-20220522063657-20220522093657-00456.warc.gz
CC-MAIN-2022-21
2,062
16
https://blummy.com/about.php
code
blummy is a tool for quick access to your favorite web services via your bookmark toolbar. It offers rich functionality (such as bookmarklets) and works on almost every page on the web, applying to the currently loaded page.
blummy was created by Alexander Kirk (idea, concept, programming), with Nader Cserny (Brand Infection), Aaron Boodman (Drag'n'Drop library), and Douglas Crockford (JSLint). Some bookmarklets were taken from these sites.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816820.63/warc/CC-MAIN-20240413144933-20240413174933-00217.warc.gz
CC-MAIN-2024-18
390
10
https://www.bu.edu/antiracism-center/profile/aviva-geiger-schwarz/
code
Aviva Geiger Schwarz Data Editor, The COVID Racial Data Tracker Aviva Geiger Schwarz is Data Editor for the BU Center for Antiracist Research’s COVID Racial Data Tracker, a collaboration between the Center and the COVID Tracking Project at The Atlantic. Aviva applies epidemiologic methods to track, analyze, and understand racial and ethnic disparities in health outcomes. She has previously served as a public health epidemiologist and data consultant for the NYC Department of Health & Mental Hygiene, where her research interests included the impact of racism over the life course on maternal and infant mortality, preterm birth, and low birth weight, as well as differences by race and ethnicity in the risk factors for maternal depression. Her work has also included data management and program evaluation for the NYC Nurse-Family Partnership, an evidence-based home visitation program to improve birth outcomes among historically underserved pregnant women in New York City. Aviva received her MPH from Columbia University Mailman School of Public Health and BA in biology from Harvard University.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703548716.53/warc/CC-MAIN-20210124111006-20210124141006-00618.warc.gz
CC-MAIN-2021-04
1,106
5
http://labs.sapo.pt/2013/03/determining-language-variant-in-microblog-messages/
code
Gustavo Laboreiro, Matko Bošnjak, Luís Sarmento, Eduarda Mendes Rodrigues, Eugénio Oliveira (2013). “Determining language variant in microblog messages”, in Proceedings of the 28th Annual ACM Symposium on Applied Computing 2013, Volume I, ACM, ISBN 978-1-4503-1656-9, pp. 902-907. It is difficult to determine the country of origin of the author of a short message based only on the text. This is an even more complex problem when more than one country uses the same native language. In this paper, we address the specific problem of detecting the two main variants of the Portuguese language — European and Brazilian — in Twitter micro-blogging data, by proposing and evaluating a set of high-precision features. We follow an automatic classification approach using a Naïve Bayes classifier, achieving 95% accuracy. We find that our system is adequate for real-time tweet classification.
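The paper's high-precision feature set is not reproduced here, but as a rough, hedged sketch of the general setup it describes (a Naive Bayes classifier separating European from Brazilian Portuguese tweets), a character n-gram baseline with scikit-learn might look like this; the toy corpus below is invented for illustration and is not the authors' data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Invented toy examples -- the real study used a labeled Twitter corpus.
tweets = [
    "estou a fazer o trabalho agora",      # pt-PT
    "vou apanhar o autocarro para casa",   # pt-PT
    "estou fazendo o trabalho agora",      # pt-BR
    "vou pegar o onibus para casa",        # pt-BR
]
labels = ["pt-PT", "pt-PT", "pt-BR", "pt-BR"]

# Character n-grams pick up orthographic and lexical cues that differ between the variants.
model = Pipeline([
    ("ngrams", CountVectorizer(analyzer="char_wb", ngram_range=(3, 5))),
    ("nb", MultinomialNB()),
])
model.fit(tweets, labels)

print(model.predict(["estou a apanhar o autocarro"]))  # leans pt-PT on these cues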
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256040.41/warc/CC-MAIN-20190520142005-20190520164005-00406.warc.gz
CC-MAIN-2019-22
907
2
http://www.islamicity.com/dialogue/Q550.HTM
code
Qur'an: Recitation without Ablution
Q550: Is it permissible to recite the Qur'an or glorify Allah while lying in bed and without ablution?
A550: Allah describes His good servants who believe in Him as "those who remember Allah when they are standing, seated, or in a reclined position and meditate on the creation of the heavens and the earth" (3:191). The phrase "remember Allah" has a very wide meaning in Islamic terminology, which includes the recitation of the Qur'an and the praising and glorification of Allah, making supplication, etc. Moreover, the Prophet has taught us some supplications to say when we go to bed. Therefore, the answer to your question is that it is perfectly permissible. It is recommended to have ablution before one goes to bed and before one recites the Qur'an or glorifies Allah or does any act of His remembrance. When we say this is recommended, we actually say that it is not obligatory. Therefore, it is permissible to do all that without having an ablution. It is not permissible, however, to recite the Qur'an from memory or to hold the Qur'an when one is in a state of ceremonial impurity, i.e. Janabah.
Our Dialogue (Source: Arab News - Jeddah)
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516117.80/warc/CC-MAIN-20181023065305-20181023090805-00363.warc.gz
CC-MAIN-2018-43
1,178
19
https://www.nimoil.com/sandals/216-women-s-flat-strappy-sandals-snakeskin-print-side-buckles.html
code
Women's Flat Strappy Sandals - Snakeskin Print / Side Buckles Get one of our hottest looks when you purchase these women's flat strappy sandals. These shoes are the bomb! They are beautifully designed, well-crafted and just might be the most comfortable sandals you have ever owned! They are made from our own high-quality engineered leather, an alternative to genuine leather that is 100% cruelty-free. They come in an awesome snakeskin print and are eco-friendly. These shoes have three narrow straps. One goes around the large toe. The second strap goes diagonally over the tops of the feet and the third strap goes around the front of the ankles and fastens securely with attractive side buckles. The soles feature 1/2″ square heels. The open toes are rounded. The sandals also have open heels. The footbeds are comfortably cushioned for greater comfort. These shoes are available in a natural black and white snakeskin print, pink, lavender, red or in classic black. We just know you're going to love them.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945289.9/warc/CC-MAIN-20230324211121-20230325001121-00286.warc.gz
CC-MAIN-2023-14
1,013
2
http://labsoftnews.typepad.com/lab_soft_news/2014/05/lis-functionality-toolkit-lis-fat-emerges-as-functionality-standard.html
code
A project launched by the Association for Pathology Informatics (API) called the LIS Functionality Assessment Toolkit (LIS-FAT) seems to be acquiring some momentum. I discussed the project in a previous note (see: Assessing the Functionality of a Laboratory Information System (LIS)). Officially announced at the ASCP annual conference in Chicago on September 20, 2013, LIS-FAT consists of four documents that are available for free download at the API web site. First is a white paper discussing the need to optimize the functionality of LISs in pathology departments. The white paper is accompanied by three appendices. The first of these, Appendix I, is the most important document in the set. It's a compilation of about 850 functionality statements, declarative statements about the various tasks that can and should be accomplished by an LIS. Each of them is assigned a score of 1-4, with the 3's and 4's being the most critical. They have been designed to be used to evaluate a pathology department's current LIS or one under consideration for purchase. Appendix I has been downloaded approximately 3,000 times since it was made available on the web site, indicating that it is proving to be a useful tool.
When the API task force initially developed the LIS-FAT, the major stated goal of the task force was to develop a set of tools that could be used to assess an LIS and, most particularly, help manage the RFP process when buying a new system. Such functionality statements have been included for decades in RFPs developed by individual labs and sent to LIS vendors. One of the goals of the LIS-FAT task force was to make available at no cost a comprehensive list of such statements so that individual labs could avoid the effort of generating their own similar documents. Actual events have taken a slightly different turn. Some LIS vendors have now developed their individual responses to the functionality statements and are making them available to current and prospective clients who inquire about how their products compare on this basis. I was told during informal hallway conversations at the recent Pathology Informatics Summit 2014, the major yearly conference of the API, that many of the LIS vendors welcomed the publication of LIS-FAT and are viewing it as a kind of functionality "standard" in the market. I place "standard" in quotes here because LIS-FAT, in its current form, lacks the formality and precision of an IT standard.
The members of the LIS-FAT task force are now considering some possible options in terms of future development of the LIS-FAT. One goal will definitely be to expand the current functionality statements in areas such as genomic testing and lab outreach. Another possible goal will be to consider using the API web site as a central repository for "certified" copies of vendor responses to the functionality statements if the LIS vendors agree that this would be a step forward. The goal here would be for them to "stand behind" the posted responses, guaranteeing them for interested potential buyers. If readers of this blog have other ideas about the future of LIS-FAT, their comments would be most welcome.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218191405.12/warc/CC-MAIN-20170322212951-00520-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
3,166
4
https://www.hortonpoint.com/single-post/2020/04/07/a-little-about-our-amateur-covid19-model
code
A little about our amateur COVID19 model (Notes by an armchair virologist)
Several people receiving my daily forecasts have asked about the process of estimating the so-called "true cases". Indeed, how do we know what true COVID19 cases are if testing is partial and random, and none of the available statistics provides us with an answer to a pretty basic question: how many people in a given pool are actually infected? Fortunately, something called "Compartmental Epidemiology Modeling" comes to the rescue and can help even lay modelers like myself. What follows is a brief (and hopefully not too boring) description of the process.
1. INTRO TO BASIC COMPUTATIONAL EPIDEMIOLOGY
(Don't be alarmed, it's not that scary). Briefly, the SIR model suggests that the population is compartmentalized into three groups: Susceptible ("S") – people who have not yet been infected; Infected ("I") – people who are currently infected; and Recovered ("R") – formerly infected people who are now recovered. Together, they represent the entire population, so that:
S + R + I = 100%
There are several fundamental assumptions we must make to keep the model relatively simple. First, the total population is assumed constant (seems easy enough over short time periods). Second, recovered patients acquire full immunity and cannot get re-infected. I do have a potential problem with this assumption, since we do not yet know whether SARS-CoV-2 mutates and re-infects formerly recovered people. I will try to address this in the second installment. But for now, let us stick with the simplistic assumption.
The SIR model has a few straightforward postulates:
• S (% of susceptible individuals) will decrease with time;
• R (% of individuals with a resolved disease) will increase with time;
• I (% of Infected individuals) grows to a certain point before reaching a peak and then starts to decline.
When I speak of "true cases" in my nightly updates, "I" above is that number. One of the adjustments I had to make to the classic SIR model is to include both recoveries and deaths in R. This preserves the condition that R individuals cannot be infected again (either immune or dead). So we are going to call R "Resolved", rather than "Recovered". Each of the variables can be calculated as follows:
dS(t) = -b*I*S
dI(t) = b*I*S - g*I
dR(t) = g*I
where dS(t), dI(t) and dR(t) are incremental changes of S, I and R over time t; b = rate of disease transmission over time t through secondary exposure; g = rate of disease resolution over time t [note that b and g are really "beta" and "gamma" but I can't figure out how to embed greek characters in this blog :(].
2. WHAT THE HELL IS R(0)?
The basic reproduction number, R(0), which we hear about so much lately, is simply the ratio b/g. Given no external intervention, each virus type has a constant R(0), which is empirically derived. For example, the common cold has an R(0) of 2-3, flu is around 1.8, polio is 5-7, mumps 4-7 and so on. The model, at its most basic, uses the classic SIR model with a constant R(0). Multiple sources estimate that R(0) for SARS-CoV-2 is around 2.2, which is what I am using on day 1 of the simulation. This means that, if left unchecked, a single infectious individual will spread the virus to an average of 2.2 people over the course of his/her infection. The SIR model conveniently provides a calculation of the % of the population unaffected by the disease by the time the epidemic has completely run its course.
The formula is below:
S = exp(-R(0)*(1-S))
This formula does not have a closed-form solution but can be easily iterated to find the value of S. Thus, for a value of R(0) equal to 2.2, S is 15.7%, meaning that 84.3% of the population will become carriers of SARS-CoV-2. Now we can construct the hypothetical picture of the life of the SARS-CoV-2 epidemic from start to finish, given no external intervention. Assuming 10% of the carriers eventually develop COVID19, that is 8.4% of the total population, or 28 million in the US. With COVID19 mortality rates at 1-2%, this would be between 2.8 and 5.6 million deaths in the US. Obviously a terrible outcome, which explains the efforts to artificially reduce R(0). In the ideal world (or in South Korea), when everyone is perfectly disciplined and self-isolates, R(0) rapidly falls to below 1, and the epidemic ends.
There are only three ways to alter the course of an epidemic:
1) Increase g by reducing time to recovery through effective treatment;
2) Decrease the timing of the cycle by rapidly increasing b through herd immunity while reducing the viral load, typically by vaccinations;
3) Reduce b by increasing the average time between social contacts (aka social isolation).
It's clear that the world is following the third option until the first two become available.
3. WHAT WE DO AND DO NOT KNOW
Obviously, an estimation model needs reliable inputs to avoid the GIGO problem. My early version of the model used confirmed cases as input, but I had to dismiss this data as pretty random. We can rely, however, on the relative accuracy of resolved cases (recoveries + deaths), or dR(t), and the relationship between dR(t) and confirmed cases. Of course, lack of reliable data on comortality (e.g. how many deaths were caused by multiple underlying conditions rather than solely COVID-19) adds another layer of randomness to this exercise. Knowing dR(t) can help estimate the average time to resolution, i.e. how many days ago we had the number of confirmed cases which eventually completely resolved to dR today. In the absence of interventional treatment, time to resolution is relatively stable. Empirically this number converges to 8.5 days. Add to that the time lag from when the symptoms appear to when the test was performed, and we get somewhere in the 10-12 day range.
Now to the other part of the equation, the rate of infection, which can be implied from the average time during which a susceptible individual may come in contact with an infected individual and also become infected. We'll call this number T(i), and it is measured in days. We can back into T(i) from an observed R(0). An R(0) of 2.2 implies that T(i) is roughly 4.5 days (e.g. a healthy susceptible individual has a 1 in 4.5 chance of getting infected on any given day). T(i) is a critical variable and has a dramatic impact on the overall course of the epidemic. Increasing T(i) by only 2 days lowers the peak of infection from 20% to 10% of the total population, and aggregate I over the life of the epidemic from 84% to 60% of the total population. Doubling T(i) to 9 days lowers the peak to about 3% and implies a maximum of 11% infected until the virus completes its cycle. Even though 11%, or 35 million, seems like a lot, this is actually an acceptable number. This spreads the patient load over a much longer time, reduces the burden on the health care system, significantly reduces mortality, and buys us time to develop the other two solutions – treatment or vaccine, or both. This is what is meant by "flattening the curve" that everyone is talking about.
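To make the mechanics above concrete, here is a minimal Python sketch (not the author's actual model) that iterates the final-size relation and steps the SIR equations forward one day at a time, using the R(0) = 2.2 and 8.5-day resolution figures quoted in the text; it should land close to the 15.7%/84.3% split and the roughly 20% peak discussed here.

import math

def final_size(r0, tol=1e-10, max_iter=1000):
    # Iterate S = exp(-R(0)*(1-S)) to its fixed point: the share of the
    # population never infected once the epidemic has run its course.
    s = 0.5
    for _ in range(max_iter):
        s_next = math.exp(-r0 * (1.0 - s))
        if abs(s_next - s) < tol:
            break
        s = s_next
    return s

def simulate_sir(r0=2.2, days_to_resolution=8.5, days=365, i0=1e-6):
    # Discrete daily steps of dS = -b*I*S, dI = b*I*S - g*I, dR = g*I,
    # with g = 1 / time-to-resolution and b = R(0)*g (since R(0) = b/g).
    g = 1.0 / days_to_resolution
    b = r0 * g
    s, i, r = 1.0 - i0, i0, 0.0
    peak_infected = 0.0
    for _ in range(days):
        new_infections = b * i * s
        resolutions = g * i
        s, i, r = s - new_infections, i + new_infections - resolutions, r + resolutions
        peak_infected = max(peak_infected, i)
    return peak_infected, r + i  # peak share infected, total ever-infected share

if __name__ == "__main__":
    print("Share never infected at R(0)=2.2:", round(final_size(2.2), 3))
    peak, total = simulate_sir()
    print("Peak share infected:", round(peak, 3), "| total ever infected:", round(total, 3))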
Three scenarios, all assuming absence of treatment or vaccine.
Another unknown but possibly positive effect is seasonality, where we expect that warm weather reduces virus propagation (uncertain but possible). Pushing the apex out by 30-60 days will get us into the summer and may help reduce the peak even further.
4. THE MODEL AT WORK
The challenge for a modeler is to keep estimating the ever-changing value of R(0) – as we implement social isolation measures, R(0) rapidly declines, so that the infection curve is not static, but is constantly changing shape. So this is what our model does. We attempt to estimate the dynamics of R(0) on a daily basis, which can be roughly done by inferring dI(t-2) and dI(t-1). To calibrate the model, I used statistics from countries with the largest % tested and the most stringent social isolation procedures at the time, and then applied them to countries which were behind. Of course, the path of
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710801.42/warc/CC-MAIN-20221201053355-20221201083355-00798.warc.gz
CC-MAIN-2022-49
8,012
36
http://tristanmenard.blogspot.com/2010/08/stage-nexus-production.html
code
I spent two months in London for an internship at Nexus Production. I worked on Fx&Mat's project during all of my work experience, especially on their new short movie project, for which I did some research for the set design/lighting... I'm not allowed to say more about the movie, but the idea and the story are really funny. It will be awesome!
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578675477.84/warc/CC-MAIN-20190424234327-20190425020327-00447.warc.gz
CC-MAIN-2019-18
331
2
https://lists.kamailio.org/pipermail/devel/2006-October/004153.html
code
[Devel] rank == PROC_FIFO
bogdan at voice-system.ro Thu Oct 19 14:36:21 CEST 2006
Juha Heinanen wrote:
>i changed a couple of addf_mi_node_child calls to add_mi_node_child
>calls, and after that i didn't anymore get errors.
thanks for the fixup !
> here is an example
>what lcr_dump now produces:
># openserctl fifo lcr_dump
>GW:: GRP_ID=1 URI=sip:18.104.22.168:5060 PREFIX=2745252464
>prefix is still not correct. i need to investigate more.
I found the problem and hopefully fixed it - the prefix was printed as an integer but it actually was a char*. See if it works now.
>is there supposed to be the two :s after GW or is there something
>missing in between?
double ':' is correct - that is the separator between name and value of an mi node when scanning/printing via FIFO.
> i didn't find in code where they come from.
it is in the mi_fifo module
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00394.warc.gz
CC-MAIN-2021-25
875
21
http://bolakaskus.com/author/je00y/
code
My name is Florencia Vallejo but everybody calls me Florencia. I'm from Brazil. I'm studying at high school (3rd year) and I have played the guitar for 7 years. Usually I choose songs from my favorite films :). I have two sisters. I like Airsoft, watching movies and Herpetoculture.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317359.75/warc/CC-MAIN-20190822194105-20190822220105-00072.warc.gz
CC-MAIN-2019-35
276
2
https://www.frontiersin.org/articles/10.3389/fncel.2020.00171/full
code
ORIGINAL RESEARCH article Sec. Cellular Neuropathology Volume 14 - 2020 | https://doi.org/10.3389/fncel.2020.00171 Convolutional Neural Networks Can Predict Retinal Differentiation in Retinal Organoids - 1Department of Ophthalmology, The Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, United States - 2Genome Technologies and Bioinformatics Research Centre, Moscow Institute of Physics and Technology, Dolgoprudniy, Russia - 3Department of Information Systems, Ivannikov Institute for System Programming of the Russian Academy of Sciences, Moscow, Russia - 4National Research Center “Kurchatov Institute”, Moscow, Russia - 5Endocrinology Research Centre, Institute for Personalized Medicine, Moscow, Russia We have developed a deep learning-based computer algorithm to recognize and predict retinal differentiation in stem cell-derived organoids based on bright-field imaging. The three-dimensional “organoid” approach for the differentiation of pluripotent stem cells (PSC) into retinal and other neural tissues has become a major in vitro strategy to recapitulate development. We decided to develop a universal, robust, and non-invasive method to assess retinal differentiation that would not require chemical probes or reporter gene expression. We hypothesized that basic-contrast bright-field (BF) images contain sufficient information on tissue specification, and it is possible to extract this data using convolutional neural networks (CNNs). Retina-specific Rx-green fluorescent protein mouse embryonic reporter stem cells have been used for all of the differentiation experiments in this work. The BF images of organoids have been taken on day 5 and fluorescent on day 9. To train the CNN, we utilized a transfer learning approach: ImageNet pre-trained ResNet50v2, VGG19, Xception, and DenseNet121 CNNs had been trained on labeled BF images of the organoids, divided into two categories (retina and non-retina), based on the fluorescent reporter gene expression. The best-performing classifier with ResNet50v2 architecture showed a receiver operating characteristic-area under the curve score of 0.91 on a test dataset. A comparison of the best-performing CNN with the human-based classifier showed that the CNN algorithm performs better than the expert in predicting organoid fate (84% vs. 67 ± 6% of correct predictions, respectively), confirming our original hypothesis. Overall, we have demonstrated that the computer algorithm can successfully recognize and predict retinal differentiation in organoids before the onset of reporter gene expression. This is the first demonstration of CNN’s ability to classify stem cell-derived tissue in vitro. The differentiation of pluripotent stem cells (PSC) using a three-dimensional “organoid” approach has become the strategy of choice to recapitulate the development of the retina, brain, inner ear, intestine, pancreas, and many other tissues in vitro (McCauley and Wells, 2017). This technique allows to reproduce the process of normal development and does not require any exogenous stimulation of developmental pathways and genetic modification of the cells used (Eiraku et al., 2011; Meyer et al., 2011). Indeed hundreds of studies confirm that retinal organoids, differentiated from mouse or human pluripotent cells, show a unique resemblance to native tissue architecture, cell specification and sub-specification, function, and transcriptional profile (Hallam et al., 2018; Cowan et al., 2019). 
This demonstrates the robustness of the technology and makes it highly attractive for potential translation to the clinic as a source of high-quality retinal neurons for transplantation (Decembrini et al., 2014) or as a platform for the screening of new therapeutics (Baranov et al., 2017). The process of the differentiation itself is stochastic, which causes the quantity of retinal differentiation to vary a lot even among organoids within one batch—not to say when different cell lines are used (Hiler et al., 2015; Hallam et al., 2018; Cowan et al., 2019). The current approach to select retinal tissue for further growth and maturation is based on subjective morphological observation and features visible with bright-field imaging: lamination of the neuroepithelium, adjacent pigment epithelium areas, etc., and/or on the expression of fluorescent reporter constructs driven by retina-specific promoters. These reporters allow to assess the differentiation on different stages of retinal development: from early eye field-specific genes [Pax6-GFP mESCs (Völkner et al., 2016) and Rx-GFP mESCs (Eiraku et al., 2011)] to terminal retinal cell types as rods Nrl-GFP miPSCs (Ueda et al., 2018), Six6 (Sluch et al., 2018), or Rx (Nakano et al., 2012) for early optic vesicles, Brn3a (Sluch et al., 2015) for retinal ganglion cells, Crx (Nakano et al., 2012) for photoreceptors, or Nrl (Phillips et al., 2018) for human rods. The use of fluorescent reporters is a “gold standard”—it is a sensitive, specific, and easily quantifiable method to assess retinal differentiation (Vergara et al., 2017), although it cannot be used in cell manufacture for transplantation or to model inherited diseases due to genome modification. The manual selection under the microscope with bright-field imaging is limited in throughput and the classification criteria can be subjective, resulting in high variability between observers. This puts its limitations on the further transition of this technology “from the bench to bedside.” Here we tried to address this issue by developing an automated non-invasive method which can predict retinal differentiation based on bright-field images of retinal organoids on the early stage of their development using artificial intelligence. Machine learning has been evolving rapidly during the last decades. This is mostly due to the increase in accessible computational power and the ability to generate and store massive amounts of data. Nowadays, one of the most actively developing branches of artificial intelligence is deep learning, which was able to outperform the best conventional machine learning algorithms in multiple fields including speech and image recognition (LeCun et al., 2015). This technology was inspired by the principles which lay in cognition and data processing by the brain. In simple understanding, the biological neuron is receiving information from other neurons, combines it, and transmits a modified signal to the next pool of neurons. In general, the artificial neuron works in a similar way: it receives inputs from the group of neurons, combines them with some weights for each input, and transmits the result to the next set of neurons using some non-linear function. So, each artificial neuron itself can be interpreted as a function, which gets a vector of inputs from neurons from the previous layer and returns some value (activation) which is being transmitted to the next layer. 
The neural network usually contains several layers of these neurons connected together, starting from the input layer and finishing with the output layer which returns the result. The general task for supervised learning is to find optimal weights for each neuron in the network to minimize an error between the value predicted by the program and the value which was assigned before the training (e.g., ground truth label for classification or some score for regression task). This approach showed itself to be extremely effective in solving multiple tasks such as speech recognition, computer vision (LeCun et al., 2015), processing of medical and biological data (Ching et al., 2018), etc. For the analysis of images (or any data which has local adjacency structure), the special type of neural networks was developed—convolutional neural networks (CNN). This type of neural network has a few so-called convolutional layers in the beginning of the learning process, which allows to find relationships between spatially adjacent parts of the image for the dimensionality reduction and extraction of features. This approach has found a lot of applications in multiple fields of biology and medicine. For example, for diagnosis of diabetic retinopathy based on fundus imaging (Gulshan et al., 2016) and for skin cancer classification (Esteva et al., 2017), and recently it was proven effective to predict the very early onset of PSC differentiation (Waisman et al., 2019) and the quality of retinal pigment epithelium (RPE) differentiation in a two-dimensional setting (Schaub et al., 2020). Being inspired by the success that this approach showed on the prediction of spontaneous differentiation of PSCs with basic bright-field imaging used as a source of information, we hypothesized that basic-contrast bright-field images contain sufficient information on tissue specification, and it is possible to extract it using convolutional neural networks. In this study, we decided to test the ability of CNN to: (1) recognize early retinal differentiation in organoids; and (2) predict retinal differentiation in individual organoids before the onset of the expression of the eye field-specific reporters—for instance, Rx. To predict early retinal differentiation, we utilized a transfer learning approach: CNN is being pretrained on the ImageNet classification dataset (Deng et al., 2009) containing more than 10 million images which are split into more than 20,000 classes. This approach allows to transfer the ability of a pretrained network to extract low-level features from natural images and focus more on high-level features from the target dataset during the training. Such a trick helps to achieve desirable results using lower amounts of training data and have been proven useful for the analysis of biological images (Ching et al., 2018). Materials and Methods mES Cell Culture mES reporter cell line RxGFP has been used in this study (RIKEN; Eiraku et al., 2011). The cells were cultured in the mES medium (Supplementary Table S1), fed every other day, and passaged at 70–80% confluence on a cell culture-treated T-75 flask coated with 1% Matrigel (Corning) solution for 1 h. For replating or seeding for retinal organoid formation, the cells were dissociated using 0.25 Trypsin solution (Gibco) for 7 min on 37°C in a CO2 incubator. Retinal differentiation was performed as was shown before, with minor modifications (Perepelkina et al., 2019). The protocol is outlined in Figure 1A. 
The RxGFP mES cells were dissociated from the flask with 0.25 trypsin and seeded in a differentiation medium (OV; Supplementary Table S1) on a 96-well U-bottom polystyrene plate (Greiner) at a cell density of 3,000 cells per well in 50 μl of the media. The cells were fed with 50 μl of OV supplemented with 1% Matrigel (Corning) on day 1 of differentiation. Additional feeding with 50 μl of OV with 0.5% Matrigel was performed on day 5 of differentiation. Further medium change was performed with OC media starting from day 9. Figure 1. Retinal differentiation. (A) Experimental outline: the organoids were imaged on day 5 using bright-field and on day 9 using fluorescent microscopy. Fluorescent images were used to assign true labels and bright-field ones for feeding neural network. This figure was created with BioRender.com. (B) Confocal image of retinal organoid on day 9 of retinal differentiation. Staining was performed for early retina-specific markers: Rx and Pax6. (C) Representative organoids from retinal and non-retinal classes. Different patterns in fluorescent images reflect the difference in bright-field ones. Both bright-field and fluorescent images of the organoids have been taken using the EVOS fl Auto microscope. For bright-field imaging, the plates were scanned with a 4× phase-contrast objective on day 5 of differentiation, with fine autofocus function. As each organoid is seeded separately in a separate well of a 96-well plate, each image contained no more than one organoid. Immunohistochemistry and Confocal Imaging Ten organoids from each batch were collected and fixed with 4% PFA for 20 min at room temperature (RT). Prior to staining, they were blocked with a blocking buffer for 1 h at RT. Staining with primary antibodies (Santa-Cruz anti-Rx antibody #SC-79031 and Hybridoma Bank anti-PAX6 antibody #AB528427) was performed overnight at +4°C in staining buffer. On the next day, after washing with a wash buffer (Supplementary Table S2), secondaries were applied overnight at +4°C. After staining with antibodies and washing, the organoids were stained with 4′,6-diamidino-2-phenylindole for 10 min at RT and mounted on concavity slides (Lab Scientific). Confocal images were taken using a Leica SP5 confocal microscope. Classification Criteria for Fluorescent Images The discrimination between retinal and non-retinal organoids for the purpose of assigning ground truth labels was based primarily on the expression of the Rx-GFP reporter, which is a very specific marker for early retinal progenitor cells (Medina-Martinez et al., 2009; Zagozewski et al., 2014). The criteria took into account the brightness of the reporter, localization, and pattern of the retinal area. We have sorted organoids based on the fluorescent images on day 9 into three groups: “retina,” “non-retina,” and “satisfactory.” The following criteria were utilized (Figure 2): • The retinal organoids should have bright fluorescence or localized fluorescent retina-like structures. • A satisfactory organoid should have sparse or scattered fluorescence pattern without clearly separable retinal areas. • A non-retinal organoid should not be fluorescent or have uniformly distributed green background fluorescence. Figure 2. Image annotations. (A) Fluorescent images of representative organoids from each class which experts have classified to “retina,” “non-retina,” and “satisfactory.” (B) Ratios of labels assigned by two experts for the training dataset. 
(C) Summary of ratios for different classes which can be assigned after combining the votes from two experts. Classification Criteria for Bright-Field Images For sorting organoids on day 6 using bright-field images, the following criteria were defined: • Retina—distinct layer-like (neuroepithelium) transparent areas on the periphery of the organoids • Non-retina—uniform cellular aggregate without distinct transparent areas Dataset Preparation and Images Preprocessing for Training the Network The initial dataset (1,209 images in total) was split into three parts: the training one (64% of total), the validation (16% of total), and the test one (20% of total). The training and validation datasets were used for architecture and parameter selection. The test dataset was used only for the final validation of the best neural network after the whole process of parameter tuning and architecture selection is completed. Before feeding the images to neural networks, we implemented a few preprocessing steps. First, we find the position of the organoid on an image and crop it out using Python OpenCV script based on blob detection. This is a very simple and straightforward approach for object detection. It works best if the target object is significantly darker or brighter than the background as it is based on automated thresholding (Otsu method). This is exactly the case for retinal organoids—they are significantly darker than the background and have pretty contrast borders. Thus, we found the algorithm to work very efficiently. Furthermore, it does not require any manual parameter adjustments, except for the average organoid size which stays stable, if the same quantity of cells is used for seeding in the beginning of differentiation. We also applied Gaussian normalization to the images and augmented them with random horizontal and vertical flips, rotations, width and height shifts, and zoom transformations. Proportionally more transformations were applied to the non-retina class images in order to balance the number of images used for CNN training. Additional information on augmentation parameters can be found in the “Supplementary Extended Methods” section. Interpretation of CNN Output and Threshold Selection The neural network takes some piece of data as an input, i.e., image, and is designed to predict the probability for it to be retinal—value between 0 and 1. This is done in the following way. The network consists of small abstract units called neurons; each of those has several inputs (like axons/dendrites in real neurons). Each dendrite of each neuron has its own value called weight, and each neuron itself has its own value called bias. When a neuron gets some numerical values to its inputs, it multiplies them with the corresponding weights, sums them up, adds bias, and applies to the result some non-linear function (usually called activation function). The resulting value is sent to the output. The neurons are aggregated into groups called layers. The inputs of the neurons of the first layer are attached to the pixels of the input image. The inputs of the neurons from any internal layer are attached only to the outputs from the neurons in the preceding layers. The last layer consists only of one neuron—its output value is interpreted as the probability of the organoid to be a retinal one. The way to organize the neurons and the layers is called the architecture of the network. Initially, the weights and the biases of the neurons are taken randomly. 
While the network gets training images, it tries to predict their classes, evaluates the results using true classes of the images, and adjusts the weights and the biases of its neurons using a backpropagation algorithm. Therefore, after processing the image, CNN returns a value from 0 to 1, which can be interpreted as a probability for this organoid to belong to the “retina” class. Thus, the threshold should be selected to make the final prediction: organoids with scores higher than the threshold would be considered “retinal,” and with lower—“non-retinal.” We determined a threshold by maximizing the value of sensitivity * specificity [true positive rate * (1- false positive rate)] on the training dataset. This approach helps to improve both the sensitivity and the specificity of the classifier, which can be affected by the imbalance between classes. Selection of the Best CNN Architecture and Cross-Validation For our task, we selected four convolutional neural networks with different architectures, which showed themselves effective on ImageNet competitions and in multiple biological applications (Esteva et al., 2017; Waisman et al., 2019): VGG19 (Simonyan and Zisserman, 2014), ResNet50v2 (He et al., 2016), DenseNet121 (Huang et al., 2017), and Xception (Chollet, 2017). All of these CNNs were initially pretrained on the ImageNet dataset (Deng et al., 2009). For the selection of the best network, 10-folds cross-validation was used: the training dataset was split into 10 non-overlapping subsets. On each step of the process, the network is training on nine out of these 10 subsets and then uses the last subset for validation. Each subset is used for validation once. So, this allows to perform statistical tests for CNN performance comparison. Hyperparameters Tuning and Training of the Networks The training was performed on the training dataset, and multiple hyperparameters have been optimized using the validation dataset (learning rate, set of frozen layers, dropout rate of the last layer, etc.). Additional information on the actual values of the hyperparameters used for each CNN can be found in the “Supplementary Extended Methods” section and Supplementary Table S3. Also, as we are using transfer learning approach, only the few last layers of the CNN are trained. The number of these layers depends on the architecture chosen and should be also considered as a hyperparameter. Assessment of CNN Performance There are multiple approaches available to measure the performance of classifiers, including accuracy, F1 score, receiver operating characteristic-area under the curve (ROC-AUC), Mathews correlation coefficient (MCC), and many others. The simplest and the most intuitive score is “accuracy”—the number of correct guesses divided by the total number of samples in the dataset. Additional metrics are “precision”—number of objects which were correctly predicted as positive divided by the total number of objects selected as positive, and “recall” or “true positive rate”—number of objects which were correctly predicted as positive divided by the total number of positive objects in the initial dataset. The accuracy shows how many selected objects are really the correct ones, and the recall shows how many of the relevant objects the algorithm was able to pick up. As precision and recall cannot be optimized separately, metrics which take into account both these values are usually used. The F1 score is a harmonic mean of precision and recall. 
However, all of these scores have some drawbacks, especially for imbalanced data, as both classes are treated equally and changes in a wrongly predicted minor class do not have a high impact on the score. Alternatively, MCC can be calculated—the value which shows how well the predictions of the classifier and the true labels are correlated. One of the advantages of this metric is that it can be very sensitive even when classes are imbalanced. Another option is using the ROC-AUC score—the area under the ROC curve (true positive rate vs. false positive rate at different threshold values). It is the "gold standard" for binary classification with neural networks. It has a very meaningful interpretation: this value shows the probability that a randomly selected object from the "retina" class has a higher score than a random object from the "non-retina" class. So, for a classifier that assigns labels randomly, the score would be 0.5, and for the perfect algorithm, it would be equal to 1. Therefore, this score can be considered as the measure of order which the classifier provides. Thus, we chose the ROC-AUC score as the main measure of performance for our CNN.
Retinal Differentiation and Initial Annotation of the Collected Images by Experts
For dataset collection, approximately 3,000 retinal organoids were differentiated and analyzed. For the training of our neural network and annotating the dataset, we collected bright-field and fluorescent images for each organoid on day 5 and day 9 of differentiation, respectively (Figures 1A,C). On day 9, in most organoids, distinct optic vesicle areas could be observed. In Figure 1B, a confocal image of retinal organoids on day 9 of differentiation is presented. Retina-like planar structures are formed on the periphery of the organoid; these areas are also positive for retina-specific markers Pax6 and Rx. As Rx is known to be an essential transcription factor for retinal development (Zagozewski et al., 2014), we chose its expression at day 9 to be a ground truth indication for early retinal differentiation. All fluorescent images were collected on day 9 and pooled together, filtered to get rid of pictures with poor quality, anonymized, and offered to two independent experts for sorting into three groups: (1) good retina (Figure 1C, left; Figure 2A, left); (2) satisfactory retina (Figure 2A, center); and (3) not retina (Figure 1C, right; Figure 2A, right). The classification criteria are stated in the "Materials and Methods" section. The proportions of each class for each expert are provided in Figure 2B, and the cumulative distribution of organoids after classification is summarized in Figure 2C. For our network, we stated the two-class classification problem: we asked the program to extract features which would distinguish high-quality organoids from bad ones based only on bright-field images. To do that, we generated the training dataset by assigning the label "retina" to an organoid only if both experts put it in the "retina" class, and "non-retina" if at least one expert suggested it to be non-retinal. Classes "retina/non-retina," "retina/satisfactory," and "satisfactory/satisfactory" were not used for training the network. The resulting dataset consisted of a total of 1,209 bright-field images, with the proportion of classes at 73 vs. 27% for retina and non-retina, respectively. As each organoid is seeded in a separate well and they are developing independently, we consider each of them to be an independent biological replicate.
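The authors' exact training configuration is given in their Supplementary material; purely as a hedged sketch of the transfer-learning recipe described in the Methods (an ImageNet-pretrained ResNet50V2 backbone with frozen convolutional layers, a single sigmoid output, the augmentations listed above, and a threshold chosen by maximizing sensitivity * specificity), the setup might look roughly like this in TensorFlow/Keras and scikit-learn. The dropout rate, learning rate, batch size, augmentation ranges, and data-loading step here are assumptions, not the published values.

import numpy as np
import tensorflow as tf
from sklearn.metrics import roc_curve

# ImageNet-pretrained backbone with its classification head removed; convolutional
# layers are frozen so that only the new head is trained (transfer learning).
backbone = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.5),                    # assumed rate
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(retina) for one organoid image
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Augmentation mirroring the flips, rotations, shifts, and zooms described above.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    horizontal_flip=True, vertical_flip=True, rotation_range=90,
    width_shift_range=0.1, height_shift_range=0.1, zoom_range=0.1,
    preprocessing_function=tf.keras.applications.resnet_v2.preprocess_input)

# x_train/y_train and x_val/y_val would hold the cropped day-5 bright-field images
# and their 0/1 (non-retina/retina) labels; loading them is omitted here.
# model.fit(augment.flow(x_train, y_train, batch_size=32),
#           validation_data=(x_val, y_val), epochs=30)

# Threshold selection as in the Methods: maximize sensitivity * specificity,
# i.e. TPR * (1 - FPR), over the ROC curve of the validation scores.
# scores = model.predict(x_val).ravel()
# fpr, tpr, thresholds = roc_curve(y_val, scores)
# best_threshold = thresholds[np.argmax(tpr * (1.0 - fpr))]

In the study itself, several backbones (VGG19, ResNet50V2, DenseNet121, Xception) were compared by 10-fold cross-validation before settling on ResNet50V2, as described next.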
Selection of the Best CNN Architecture Four networks based on different architectures (VGG19, ResNet50v2, Xception, and DenseNet121) have been trained and validated on the dataset. The learning curves are shown in Figure 3A. All networks were successfully trained, but the VGG19-based classifier shows signs of overfitting: loss score on validation dataset is significantly higher than on training dataset; so, for further comparison, we decided to keep only ResNet50v2-, Xception-, and DenseNet121-based CNNs. Figure 3. Comparison of different convolutional neural network (CNN) architectures. (A) Loss curves and receiver operating characteristic-area under the curve (AUC) training curves for VGG19, ResNET50v2, DenseNet121, and Xception. (B) Comparison summary of three different CNNs using 10-fold cross-validation. The mean AUC scores were 0.93 ± 0.03 vs. 0.91 ± 0.04 vs. 0.92 ± 0.04 (P = 0.3) for ResNET50v2, DenseNet121, and Xception, respectively; the mean F1 scores were 0.89 ± 0.02 vs. 0.88 ± 0.04 vs. 0.88 ± 0.04 for ResNET50v2, DenseNet121, and Xception, respectively; the mean accuracy scores were 0.85 ± 0.03 vs. 0.83 ± 0.05 vs. 0.83 ± 0.06 for ResNET50v2, DenseNet121, and Xception, respectively; the mean Matthews correlation coefficients were 0.64 ± 0.08 vs. 0.62 ± 0.11 vs. 0.63 ± 0.12 for ResNET50v2, DenseNet121, and Xception, respectively. Each dot on the graph corresponds to one cross-validation step. ns, not significant (P-value > 0.05 on Friedman statistical test). The remaining three networks were run through 10-fold cross-validation, and for each step, ROC-AUC score, optimal thresholds, F1, MCC, and accuracy scores were calculated (Figure 3B). The mean AUC scores were 0.93 ± 0.03 vs. 0.91 ± 0.04 vs. 0.92 ± 0.04 (P = 0.3) for ResNet50v2, DenseNet121, and Xception, respectively; the mean F1 scores were 0.89 ± 0.02 vs. 0.88 ± 0.04 vs. 0.88 ± 0.04 (P = 0.6) for ResNet50v2, DenseNet121, and Xception, respectively; the mean accuracy scores were 0.85 ± 0.03 vs. 0.83 ± 0.05 vs. 0.83 ± 0.06 (P = 0.6) for ResNet50v2, DenseNet121, and Xception, respectively; and the mean Matthews correlation coefficients were 0.64 ± 0.08 vs. 0.62 ± 0.11 vs. 0.63 ± 0.12 for ResNet50v2, DenseNet121, and Xception, respectively. All of the networks show similar results, and no significant difference has been found using the Friedman test (analog of Wilcoxon test when three or more samples are compared). So, we can conclude that all of these CNNs can potentially be utilized for solving our task. However, the Xception- and DenseNet121-based CNNs had a noticeable variation of the loss score for the different validation steps of cross-validation (Supplementary Figure S1). Also, we noticed that ResNet50v2 had the smallest standard deviation among other classifiers for each metric (Figure 3B); therefore, at this step, we selected this CNN. Convolutional Neural Network Can Predict Early Retinal Differentiation To evaluate the performance of the selected CNN, we utilized the test dataset which was not used during the training and parameter tuning process. The ROC curve is shown in Figure 4A and the confusion matrix in Figure 4B. For this dataset, the predictor showed the ROC-AUC score to be 0.91 (Figure 4A), accuracy—0.84, F1 score—0.89, and Matthews correlation coefficient—0.63. Despite a significant imbalance between the retinal and the non-retinal classes, the classifier was able to reach 0.85 sensitivity and 0.82 specificity scores on the test dataset. 
These sensitivity and specificity values indicate that augmentation and threshold selection allowed us to efficiently tackle the imbalance problem.

Figure 4. Performance of the best convolutional neural network (CNN) on the test dataset. (A) Receiver operating characteristic (ROC) curve for the selected CNN on the test dataset. The ROC-area under the curve value equals 0.91. (B) Confusion matrix for the selected CNN. Values in the squares represent percentages of true negative, false positive, false negative, and true positive predictions. The color LUT shows the absolute number of images in each group. (C) Prediction scores for the test dataset; each dot represents a single organoid; the red line represents the threshold value for the classifier. (D) True prediction rates for each class of organoids with the CNN classifier. (E) Violin plots for all possible classes of organoids that can be assigned by combining the votes from two experts. The white dot in the center of each plot represents the median of the distribution; the boxes and the bars represent the first quartile and the upper/lower adjacent value, respectively; the red line is the classifier's threshold.

The prediction scores for every single image and the threshold are shown in Figure 4C. As expected, the retinal and the non-retinal organoids are "condensed" at the corresponding values: 0 for non-retina and 1 for retina; so the model can clearly separate these two types of organoids. We then looked at the performance of the model on the different classes obtained by combining the experts' annotations. The true prediction rates for each class are presented in Figure 4D. Expectedly, the model shows the best performance on organoids from the "sure" classes, retina/retina and non-retina/non-retina, meaning that the CNN is more likely to be mistaken where the experts are also less sure about the labels. In Figure 4E, the distributions of the prediction scores are shown for each class. Again, the retina/retina and non-retina/non-retina classes are clearly separated. Moreover, organoids from the retina/satisfactory class, which were not used for training and validation, were also in most cases correctly attributed by the network to the retina class, although the median of the distribution is shifted away from 1, showing that the program gets confused more often than on the retina/retina class, which is also consistent with the result shown in Figure 4D. Interestingly, the predictor could not separate organoids from the retina/non-retina group, which can be concluded from the fact that the median of the scores is located close to the threshold: this can be interpreted as the CNN working almost as a random predictor for organoids from this group. Organoids from the satisfactory/satisfactory class are also poorly distinguished, but the median is shifted toward the retinal class, which is in accordance with the criteria that we used for this class. To identify the image areas and features that are used by the CNN, we utilized the SHapley Additive exPlanations (SHAP) value approach (Lundberg and Lee, 2017). We noticed that the border of the organoids and, more specifically, the retina-like neuroepithelium loops on the periphery are zones of interest for the CNN (Supplementary Figure S2).

CNN Outperforms Human Classifier on Prediction of Retinal Differentiation

To compare the CNN performance with a human-based classifier, we asked four independent experts to assign the labels "retina" and "non-retina" to organoids from the test dataset.
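Since each expert produces hard labels only, every expert corresponds to a single point in ROC space; a Figure 5A-style overlay can be drawn roughly as in the sketch below (all scores and expert points are placeholders, not data from this study):

```python
# Illustrative sketch: the CNN's ROC curve with human experts overlaid as
# single (false positive rate, true positive rate) operating points.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import auc, roc_curve

# Placeholder CNN scores on a test set.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])
y_prob = np.array([0.90, 0.80, 0.40, 0.70, 0.30, 0.95, 0.50, 0.20, 0.60, 0.85])
fpr, tpr, _ = roc_curve(y_true, y_prob)

# Hard expert labels collapse to one FPR/TPR point each (made-up values here).
experts = {"expert 1": (0.30, 0.72), "expert 2": (0.25, 0.65)}

plt.plot(fpr, tpr, label=f"CNN (AUC = {auc(fpr, tpr):.2f})")
plt.plot([0, 1], [0, 1], "--", color="grey", label="random classifier")
for name, (x, y) in experts.items():
    plt.scatter(x, y, label=name)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```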
The criteria for this classification can be found in the "Materials and Methods" section. True positive rates and false positive rates for each expert are plotted on the classifier's ROC curve (Figure 5A). The CNN clearly outperforms a human in recognizing retinal differentiation at this early stage of differentiation. Different metrics for the comparison are provided in Figure 5B. On average, a human expert has an accuracy of 0.67 ± 0.06, while the CNN has an accuracy of 0.84.

Figure 5. Human-based classifier vs. CNN-based classifier. (A) The receiver operating characteristic curve for the convolutional neural network (CNN); the area under the curve score for this classifier is 0.91. Each dot represents a single human expert predicting organoids to be from the "retina" or "non-retina" class based on bright-field images. (B) Metrics comparison for the human-based and CNN-based classifiers. The CNN showed better results on all the metrics that we measured: 0.63 vs. 0.27 ± 0.06 Matthews correlation coefficient for CNN and human, respectively; 0.84 vs. 0.67 ± 0.06 accuracy for CNN and human, respectively; 0.89 vs. 0.75 ± 0.09 F1 score for CNN and human, respectively; 0.92 vs. 0.83 ± 0.07 precision for CNN and human, respectively; 0.85 vs. 0.72 ± 0.17 recall for CNN and human, respectively.

The most striking difference is given by the Matthews correlation coefficient, which takes class imbalance into account: 0.63 vs. 0.27 ± 0.06 for CNN and human, respectively.

Retinal organoid cultures have great potential for modeling human disease and development and as a source of retinal neurons for transplantation or a platform for therapeutics testing. The remaining challenges, highlighted in RPE transplantation studies, include high variability between different cell lines (Leach et al., 2016), scaled production with automation or other approaches (Regent et al., 2019), and the lack of cGMP-compatible non-invasive readouts for the assessment of differentiation during the development process (Schaub et al., 2020). The translational success of regenerative therapies based on iPSC-derived RPE (Mandai et al., 2017; da Cruz et al., 2018) is largely due to the development of strategies to overcome these issues. In this study, we attempted to address the latter for retinal 3D organoids. There are two distinct, non-mutually exclusive approaches to characterize and identify the differentiating cell with non-invasive imaging techniques. The classic strategy is to define the exact features and thresholds that are characteristic of a particular cell type. This approach is based on our understanding of how the cell looks in vivo: this was demonstrated in decades of RPE differentiation studies in vitro (Thumann et al., 2013), where pigmentation, cell shape, and autofluorescence can be quantified and compared to pre-set quality criteria thresholds (da Cruz et al., 2018; Schaub et al., 2020). The evolution of this approach involves a better understanding of the thresholds as well as the introduction of new imaging techniques that can detect new features: multispectral fluorescent and non-fluorescent imaging, optical coherence tomography (Browne et al., 2017; Capowski et al., 2019), and others. An alternative strategy is machine learning, which is also highly dependent on the modality by which the information is collected.
However, the information is processed in a different way: it does not require any predefined criteria for assessment. The CNN learns how to find and extract the most relevant features from the data by itself, provided that the program "has seen" enough samples to learn from. Machine learning becomes particularly valuable when there are multiple criteria and definitions or when they are not very well established. In this case, the training of the computer algorithm occurs with the help of experts, who classify or rank the training set of images, e.g., cats vs. dogs (Krizhevsky et al., 2012), early vs. late diabetic retinopathy (Pratt et al., 2016), etc. This technology becomes extremely powerful when it is possible to use an orthogonal approach, "another modality," to decide which class the object belongs to: a molecular profile (Waisman et al., 2019) or a functional response (Schaub et al., 2020). This is exactly the case for retinal differentiation using the 3D organoid strategy: there are few accurate criteria for distinguishing good retinal organoids from bad ones with bright-field imaging, especially at the early stage of their development, although the availability of reporter cell lines allows retinal differentiation to be determined with high accuracy. Here we showed that such discrimination is possible with a convolutional neural network, which could predict early retinal differentiation based only on universal bright-field imaging. One of the major questions in the area of deep learning is the role of individual features in image recognition. It is not clear which parts of the image are most important for the algorithm when it classifies an object. This issue of potential unpredictability becomes more important when an action is based solely on the artificial intelligence's decision. Also, by extracting the individual features that are most important in predicting cell behavior, it may be possible to identify novel biological processes and the actual determinants of retinal formation in the embryoid bodies. By using the SHAP value approach, we were able to show the importance of translucent neuroepithelium-like structures in decision-making (Supplementary Figure S2), although we were not able to show the actual causality of these structures in the decision-making process. The program clearly outperformed the experts in the classification task (Figure 5) and was able to predict eye field induction better than a human performing a morphological observation of an organoid with bright-field microscopy: 0.84 vs. 0.67 ± 0.06 accuracy for CNN vs. human, respectively. This additionally illustrates that the criteria for the selection of retinal organoids at this stage are subjective. Furthermore, the good performance of the CNN-based classifier shows that the morphology of the organoids, even at a very early stage, contains sufficient information to predict retinal differentiation, and that the program can extract this information. Moreover, the approach does not require any complicated imaging, fluorescent reporters, or dyes for analysis, so it can be easily implemented in almost any laboratory or manufacturing setting. Therefore, our method offers a robust and universal non-invasive approach for the assessment of retinal differentiation. As we have stated the problem as a classification task, we assume from the beginning that there should be some threshold that distinguishes retinal and non-retinal organoids.
However, there are many organoids that are "on the border"; we called these "satisfactory" organoids, and they are hard to separate into two distinct classes with a single fluorescent reporter. Moreover, different applications may require different thresholds: for example, for disease or development modeling, the quality of the organoid should be prioritized to obtain the proper physiology and morphology of the retina, whereas for cell production, the yield may be the priority and a lower threshold can be applied for enrichment. Moreover, for drug discovery, using retinal organoids can be problematic because the amount of retinal differentiation varies between organoids, and having a method to grade organoids can be helpful for interpreting assay results. Therefore, the ability to select a threshold according to the task can be rather important for different applications. Thus, one of the further directions to be considered is restating the task as a regression problem for grading retinal organoids. This would significantly expand the possible applications of the approach. However, this task would require a reliable quantification method to assign "ground truth" values for the network training. One possible metric is the simple quantification of total fluorescence in the organoid (Vergara et al., 2017) when fluorescent reporters are used, but the localization and the shape of retina-like structures, as well as the physiological and metabolic state of the retinal neurons (Browne et al., 2017), might also be important parameters to take into account. We have used mouse embryonic stem cells with an Rx reporter in this work. Using the expression of this gene as a specific indicator of eye field induction, we were able to predict differentiation using the CNN. We consider that the approach we established can be easily translated not only to other mouse reporter cell lines but also to human organoids. This is due to the fact that the method relies only on the morphology of the organoids during development, and Sasai's differentiation protocol has been shown to be effective on human embryonic stem cells (Nakano et al., 2012). Moreover, there are multiple retina-related human PSC reporter cell lines available, which target different cell types and differentiation stages: Six6 (Sluch et al., 2018) and Rx (Nakano et al., 2012) for early optic vesicles, Brn3a (Sluch et al., 2015) for retinal ganglion cells, Crx (Nakano et al., 2012) for photoreceptors, or Nrl (Phillips et al., 2018) for rods specifically. Therefore, our approach of training the CNN to predict differentiation can also be utilized for human cells and possibly for later differentiation stages. However, to achieve the best results on human cells, additional training of the mouse-pretrained neural network may be required to adjust to possible morphological differences between mouse and human organoids. Moreover, as we have shown that the CNN can accurately predict retinal differentiation based only on simple bright-field images of the organoids, we suppose that not only microscope images can be utilized for the CNN training. For example, this approach could probably be incorporated into large-particle flow cytometers as an alternative to fluorescence.

Data Availability Statement

The datasets generated for this study are available on request to the corresponding author. EK, AN, and PB conceived the experiments.
EK designed and performed the differentiation experiments, interpreted the results, and wrote the manuscript with the help of AN. AN and EAK trained the neural networks and performed the comparison of different CNN architectures. EK and PB developed an idea and annotated fluorescent and bright-field images. All the authors discussed the experiments and the manuscript. EAK, PV, and PB provided funding for this study. PB revised and corrected the manuscript. All the authors read and approved the final version of the manuscript. This work was supported by NIH/NEI U24 grant EY029893, BrightFocus Foundation (PB), Gilbert Family Foundation, Research to Prevent Blindness Grant (PB), NIH National Eye Institute core grant P30EY003790, and Russian Academic Excellence project 5-100. Conflict of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. We would like to thank Dr. Julia Oswald and Dr. Monichan Phay for their help with the classification of organoids based on the bright-field images. Also, we would like to thank Gennady Fedonin and Andrei Sonin for their advice. We want to thank RIKEN Cell Bank and Dr. Yoshiki Sasai for providing us with RxGFP mES cell line. The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncel.2020.00171/full#supplementary-material. Baranov, P., Lin, H., McCabe, K., Gale, D., Cai, S., Lieppman, B., et al. (2017). A novel neuroprotective small molecule for glial cell derived neurotrophic factor induction and photoreceptor rescue. J. Ocul. Pharmacol. Ther. 33, 412–422. doi: 10.1089/jop.2016.0121 Browne, A. W., Arnesano, C., Harutyunyan, N., Khuu, T., Martinez, J. C., Pollack, H. A., et al. (2017). Structural and functional characterization of human stem-cell-derived retinal organoids by live imaging. Invest. Ophthalmol. Vis. Sci. 58, 3311–3318. doi: 10.1167/iovs.16-20796 Capowski, E. E., Samimi, K., Mayerl, S. J., Phillips, M. J., Pinilla, I., Howden, S. E., et al. (2019). Reproducibility and staging of 3D human retinal organoids across multiple pluripotent stem cell lines. Development 146:dev171686. doi: 10.1242/dev.171686 Ching, T., Himmelstein, D. S., Beaulieu-Jones, B. K., Kalinin, A. A., Do, B. T., Way, G. P., et al. (2018). Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface 15:20170387. doi: 10.1098/rsif.2017.0387 Chollet, F. (2017). “Xception: deep learning with depthwise separable convolutions,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1800–1807. Cowan, C. S., Renner, M., Gross-Scherf, B., Goldblum, D., Munz, M., Krol, J., et al. (2019). Cell types of the human retina and its organoids at single-cell resolution: developmental convergence, transcriptomic identity and disease map. SSRN Electr. J. doi: 10.2139/ssrn.3438371 [Epub ahead of print]. da Cruz, L., Fynes, K., Georgiadis, O., Kerby, J., Luo, Y. H., Ahmado, A., et al. (2018). Phase 1 clinical study of an embryonic stem cell-derived retinal pigment epithelium patch in age-related macular degeneration. Nat. Biotechnol. 36, 328–337. doi: 10.1038/nbt.4114 Decembrini, S., Koch, U., Radtke, F., Moulin, A., and Arsenijevic, Y. (2014). Derivation of traceable and transplantable photoreceptors from mouse embryonic stem cells. Stem Cell Reports 2, 853–865. 
doi: 10.1016/j.stemcr.2014.04.010 Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). “ImageNet: a large-scale hierarchical image database,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, 248–255. Eiraku, M., Takata, N., Ishibashi, H., Kawada, M., Sakakura, E., Okuda, S., et al. (2011). Self-organizing optic-cup morphogenesis in three-dimensional culture. Nature 472, 51–58. doi: 10.1038/nature09941 Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., et al. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature 542, 115–118. doi: 10.1038/nature21056 Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., et al. (2016). Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. 316, 2402–2410. doi: 10.1001/jama.2016.17216 Hallam, D., Hilgen, G., Dorgau, B., Zhu, L., Yu, M., Bojic, S., et al. (2018). Human-induced pluripotent stem cells generate light responsive retinal organoids with variable and nutrient-dependent efficiency. Stem Cells 36, 1535–1551. doi: 10.1002/stem.2883 He, K., Zhang, X., Ren, S., and Sun, J. (2016). “Identity mappings in deep residual networks,” in Computer Vision—ECCV 2016, eds B. Leibe, J. Matas, N. Sebe and M. Welling (Cham: Springer International Publishing), 630–645. Hiler, D., Chen, X., Hazen, J., Kupriyanov, S., Carroll, P. A., Qu, C., et al. (2015). Quantification of retinogenesis in 3D cultures reveals epigenetic memory and higher efficiency in IPSCs derived from rod photoreceptors. Cell Stem Cell 17, 101–115. doi: 10.1016/j.stem.2015.05.015 Huang, G., Liu, Z., van der Maaten, L., and Weinberger, K. Q. (2017). “Densely connected convolutional networks,” in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2261–2269. Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). “ImageNet classification with deep convolutional neural networks,” in Proceedings of the Advances in Neural Information Processing Systems, 1097–1105. Leach, L. L., Croze, R. H., Hu, Q., Nadar, V. P., Clevenger, T. N., Pennington, B. O., et al. (2016). Induced pluripotent stem cell-derived retinal pigmented epithelium: a comparative study between cell lines and differentiation methods. J. Ocul. Pharmacol. Ther. 32, 317–330. doi: 10.1089/jop.2016.0022 LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539 Lundberg, S. M., and Lee, S. I. (2017). “A unified approach to interpreting model predictions,” in Proceedings of the Advances in Neural Information Processing Systems, 4766–4775. Mandai, M., Watanabe, A., Kurimoto, Y., Hirami, Y., Morinaga, C., Daimon, T., et al. (2017). Autologous induced stem-cell-derived retinal cells for macular degeneration. N. Engl. J. Med. 376, 1038–1046. doi: 10.1056/NEJMoa1608368 McCauley, H. A., and Wells, J. M. (2017). Pluripotent stem cell-derived organoids: using principles of developmental biology to grow human tissues in a dish. Development 144, 958–962. doi: 10.1242/dev.140731 Medina-Martinez, O., Amaya-Manzanares, F., Liu, C., Mendoza, M., Shah, R., Zhang, L., et al. (2009). Cell-autonomous requirement for Rx function in the mammalian retina and posterior pituitary. PLoS One 4, 1–7. doi: 10.1371/journal.pone.0004513 Meyer, J. S., Howden, S. E., Wallace, K. A., Verhoeven, A. D., Wright, L. S., Capowski, E. E., et al. (2011). 
Optic vesicle-like structures derived from human pluripotent stem cells facilitate a customized approach to retinal disease treatment. Stem Cells 29, 1206–1218. doi: 10.1002/stem.674 Nakano, T., Ando, S., Takata, N., Kawada, M., Muguruma, K., Sekiguchi, K., et al. (2012). Self-formation of optic cups and storable stratified neural retina from human ESCs. Cell Stem Cell 10, 771–785. doi: 10.1016/j.stem.2012.05.009 Perepelkina, T., Kegeles, E., and Baranov, P. (2019). Optimizing the conditions and use of synthetic matrix for three-dimensional in vitro retinal differentiation from mouse pluripotent cells. Tissue Eng. Part C Methods 25, 433–445. doi: 10.1089/ten.tec.2019.0053 Phillips, M. J., Capowski, E. E., Petersen, A., Jansen, A. D., Barlow, K., Edwards, K. L., et al. (2018). Generation of a rod-specific NRL reporter line in human pluripotent stem cells. Sci. Rep. 8:2370. doi: 10.1038/s41598-018-20813-3 Pratt, H., Coenen, F., Broadbent, D. M., Harding, S. P., and Zheng, Y. (2016). Convolutional neural networks for diabetic retinopathy. Proc. Comput. Sci. 90, 200–205. doi: 10.1016/j.procs.2016.07.014 Regent, F., Morizur, L., Lesueur, L., Habeler, W., Plancheron, A., Ben M’Barek, K., et al. (2019). Automation of human pluripotent stem cell differentiation toward retinal pigment epithelial cells for large-scale productions. Sci. Rep. 9:10646. doi: 10.1038/s41598-019-47123-6 Schaub, N. J., Hotaling, N. A., Manescu, P., Padi, S., Wan, Q., Sharma, R., et al. (2020). Deep learning predicts function of live retinal pigment epithelium from quantitative microscopy. J. Clin. Invest. 130, 1010–1023. doi: 10.1172/JCI131187 Simonyan, K., and Zisserman, A. (2014). “Very deep convolutional networks for large-scale image recognition,” in Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track, 1–14. http://arxiv.org/abs/1409.1556. Sluch, V. M., Davis, C. H. O., Ranganathan, V., Kerr, J. M., Krick, K., Martin, R., et al. (2015). Differentiation of human ESCs to retinal ganglion cells using a CRISPR engineered reporter cell line. Sci. Rep. 5:16595. doi: 10.1038/srep16595 Sluch, V. M., Chamling, X., Wenger, C., Duan, Y., Rice, D. S., and Zack, D. J. (2018). Highly efficient scarless knock-in of reporter genes into human and mouse pluripotent stem cells via transient antibiotic selection. PLoS One 13:e0201683. doi: 10.1371/journal.pone.0201683 Thumann, G., Dou, G., Wang, Y., and Hinton, D. R. (2013). “Chapter 16—Cell biology of the retinal pigment epithelium,” in Retina, Fifth Edition, ed Stephen J. Ryan (Elsevier), 401–414. Ueda, K., Onishi, A., Ito, S., Nakamura, M., and Takahashi, M. (2018). Generation of three-dimensional retinal organoids expressing rhodopsin and S- and M-cone opsins from mouse stem cells. Biochem. Biophys. Res. Commun. 495, 2595–2601. doi: 10.1016/j.bbrc.2017.12.092 Vergara, M. N., Flores-Bellver, M., Aparicio-Domingo, S., McNally, M., Wahlin, K. J., Saxena, M. T., et al. (2017). Three-dimensional automated reporter quantification (3D-ARQ) technology enables quantitative screening in retinal organoids. Development 144, 3698–3705. doi: 10.1242/dev.146290 Völkner, M., Zschätzsch, M., Rostovskaya, M., Overall, R. W., Busskamp, V., Anastassiadis, K., et al. (2016). Retinal organoids from pluripotent stem cells efficiently recapitulate retinogenesis. Stem Cell Reports 6, 525–538. doi: 10.1016/j.stemcr.2016.03.001 Waisman, A., La Greca, A., Möbbs, A. M., Scarafía, M. A., Velazque, N. L. S., Neiman, G., et al. (2019). 
Deep learning neural networks highly predict very early onset of pluripotent stem cell differentiation. Stem Cell Reports 12, 845–859. doi: 10.1016/j.stemcr.2019.02.004 Zagozewski, J. L., Zhang, Q., Pinto, V. I., Wigle, J. T., and Eisenstat, D. D. (2014). The role of homeobox genes in retinal development and disease. Dev. Biol. 393, 195–208. doi: 10.1016/j.ydbio.2014.07.004 Keywords: deep learning, convolutional neural networks, stem cells, retinal organoids, mouse embryonic stem cells Citation: Kegeles E, Naumov A, Karpulevich EA, Volchkov P and Baranov P (2020) Convolutional Neural Networks Can Predict Retinal Differentiation in Retinal Organoids. Front. Cell. Neurosci. 14:171. doi: 10.3389/fncel.2020.00171 Received: 25 February 2020; Accepted: 20 May 2020; Published: 03 July 2020. Edited by:Lin Cheng, University of Iowa, United States Reviewed by:Ming Zu Zhang, Sun Yat-sen University, China Stephanie C. Joachim, Ruhr University Bochum, Germany Ling Zhang, Pall Inc., United States Copyright © 2020 Kegeles, Naumov, Karpulevich, Volchkov and Baranov. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Petr Baranov, [email protected] † These authors have contributed equally to this work
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948976.45/warc/CC-MAIN-20230329120545-20230329150545-00285.warc.gz
CC-MAIN-2023-14
54,271
142
https://www.wesedholm.com/auto-import-your-wordpress-blog-into-your-facebook-account-along-with-pictures-and-a-real-link-to-your-website/
code
I wanted my FaceBook account to automatically import my WordPress blog… this was easy to set up, however it did not work with photos, and instead of linking directly to my blog with a snippet about what my post was about, FaceBook would link to a page with only the title of my blog post and a small hidden link that nobody was finding. So of course nobody could see any pictures or read my full blog post, which made the whole thing pointless. I lived with it for a couple of months and tried something new today. The plug-in is called "Feed FaceBook Leave FaceBook" and the website is here: http://www.keyvan.net/code/feed-facebook-leave-facebook/ I guess we will find out if it works soon enough. This only works with self-hosted WordPress blogs.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103642979.38/warc/CC-MAIN-20220629180939-20220629210939-00671.warc.gz
CC-MAIN-2022-27
750
2
https://edu.epfl.ch/coursebook/en/design-and-optimization-of-internet-of-things-systems-EE-733
code
EE-733 / 4 credits Remark: Next time: Fall 2022 Every 2 years This course provides an overview of the relevant technologies and approaches for the design and optimization of Internet-of-Things (IoT) systems. It covers architectures of edge computing platforms, wireless communication options, cloud computing backends, and different machine learning applications. The goal of the proposed course is to provide a complete overview of the most relevant subfields related to the design and optimization of complete IoT systems. The course will last for one full semester and will feature a number of different activities: - Lectures: each day will feature lectures and discussions around various research themes. Each session will include in-depth talks and theoretical lectures with professors on different aspects of ultra-low power wearable wireless systems and their applications. A Q&A discussion will follow each of these sessions. - Hands-on labs: the course will integrate hands-on sessions with the theoretical classes each day. Thus, the lab sessions will provide hands-on experience on real devices with the topics covered in the morning lectures. The evaluation will be done through the correction of the exercise sessions and one group project (in pairs of students) that will be developed at the end of the semester. The course is divided into three different modules: Internet-of-Things (IoT) platforms and cloud computing backend, communication, and signal processing and applications. The first part of the course is titled "IoT architectures" and will be dedicated to covering the different designs of ultra-low power and smart edge computing platforms, as well as the existing options of cloud computing infrastructures for IoT backend platforms, with sub-topics ranging from the components of different IoT platform architectures to power/performance optimization principles. A number of development tools and IoT sensors (TI SensorTag, Huawei Watch 2, etc.) proposed for the hands-on labs will be presented during this first module of the course. The participants will get familiar with all the instruments that they will be using during the following modules of the course. Then, this module will cover how to design complete ultra-low-power IoT platforms that can be powered with minimal energy, and system-level software management for low power at the hardware and OS level. We will also cover the state of the art and the key design options for designing the related IoT cloud computing infrastructure to store and process the data coming from the IoT platforms. The hands-on lab of this module is focused on showing how to define secure communication between the IoT edge platforms and the cloud computing backend. The main topic of the second module is "communication": lectures will cover the main issues and challenges related to new IoT wireless communication protocols, and the management and optimization of communication for IoT networks. We will describe the essential concepts and transmission schemes behind current standards and introduce the basics of future emerging communication technologies and signaling schemes relevant to wireless sensor networks and future 5G-based IoT communication. Physical layer issues such as the challenges of the propagation environment and modulation and coding for massive IoT, as well as key physical and MAC layer design considerations, will also be addressed.
The hands-on exercises related to this module will be focused on the several design trade-offs between high-level (like ZigBee) and low-level protocols, as well as LoRa and other new IoT standards for mid- and long-range IoT communication. The third module of the course consists of application-oriented lectures on IoT systems, with a focus on the actual needs in smart wearables for sport and clinics. It includes dedicated body-worn sensors and signal processing, feature extraction and machine learning approaches, sensor fusion, and data recording in wearable systems. The participants will have the opportunity to learn the state of the art and advances in pervasive monitoring in health and disease. The importance of outcome measures obtained through wearable systems and their validity is emphasized. Field measurement, daily activity recording, as well as tools for analyzing long-term monitoring are presented through examples in health and disease. The hands-on exercises of this module will cover practical issues about signal acquisition and software tuning and optimization for physical mobility analysis using wearable technology. THIS COURSE WILL TAKE PLACE IN ROOM ELG 022. Internet of Things (IoT), low power design and optimization, smart wearables, embedded systems, wireless communication, cloud computing. By the end of the course, the student must be able to: - Expound the basis of IoT sensor nodes and architectures - Expound low-power design options at system-level design - Select the wireless communication protocol appropriately based on energy and performance requirements - Expound the basis of wearable architectures and bio-signal processing analysis In the programs - Number of places: 23 - Exam form: Oral presentation (session free) - Subject examined: Design and Optimization of Internet-of-Things Systems - Lecture: 24 Hour(s) - Exercises: 24 Hour(s) - Practical work: 8 Hour(s)
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303868.98/warc/CC-MAIN-20220122164421-20220122194421-00310.warc.gz
CC-MAIN-2022-05
5,352
26
https://www.glassdoor.nl/Sollicitatiegesprek/business-intelligence-sollicitatievragen-SRCH_KO0,21.htm
code
Interview questions for business intelligence shared by applicants During the technical interview: What are the domains of Machine Learning? What do you know about Statistical Learning Theory? Why are you interested in Deep Learning (I had applied for a Deep Learning vacancy)? What are SVMs (he spent a lot of time on SVMs)? Why are they better than NNs and why are they worse? What is Backpropagation in NNs? Why is Deep Learning emerging only now? What are the types of NNs? What is the difference in backprop between an RNN and an ANN? Do you know OOP? What is inheritance? What is multiple inheritance? What are its problems and how do you solve them? Are you familiar with Scrum? Why are you here? Please share experiences when you led a team. How do you perceive this role? How would you use your experience for this role? Please guide me through your resume
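One of the OOP questions above concerns multiple inheritance and its problems; as a short illustration (not part of the original post), the classic diamond case and Python's method resolution order look like this:

```python
# Diamond inheritance: Child inherits from Left and Right, which share Base.
# Python resolves method lookups deterministically via the C3 MRO.
class Base:
    def greet(self):
        return "Base"

class Left(Base):
    def greet(self):
        return "Left"

class Right(Base):
    def greet(self):
        return "Right"

class Child(Left, Right):
    pass

print(Child().greet())                      # "Left": the leftmost parent wins
print([c.__name__ for c in Child.__mro__])  # Child, Left, Right, Base, object
```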
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989012.26/warc/CC-MAIN-20210509183309-20210509213309-00260.warc.gz
CC-MAIN-2021-21
852
3
https://learn.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2003/cc775854(v=ws.10)?redirectedfrom=MSDN
code
Choosing a Regional or Dedicated Forest Root Domain Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2 If you are applying a single domain model, then the single domain functions as the forest root domain. If you are applying a multiple domain model, then you can choose to deploy a dedicated forest root domain, or select a regional domain to function as the forest root domain. Dedicated Forest Root Domain A dedicated forest root domain is a domain that is created specifically to function as the forest root. It does not contain any user accounts other than the service administrator accounts for the forest root domain, and it does not represent any region in your domain structure. All other domains in the forest are children of the dedicated forest root domain. Using a dedicated forest root provides the following advantages: Operational separation of forest service administrators from domain service administrators. In a single domain environment, members of the Domain Admins or built-in Administrators groups can use standard tools and procedures to make themselves members of the Enterprise Admins and Schema Admins groups. In a forest that uses a dedicated forest root domain, members of the Domain Admins or built-in Administrators groups in the regional domains cannot make themselves members of the forest-level service administrator groups by using standard tools and procedures. - Because a domain is not a security boundary, it is possible for a malicious service administrator, such as a member of the Domain Admins group, to use nonstandard tools and procedures to gain full access to any domain in the forest or to any computer in the forest. For example, service administrators in a nonroot domain can make themselves members of the Enterprise Admins or Schema Admins group. Protection from operational changes in other domains. A dedicated forest root domain does not represent a particular region in your domain structure. For this reason, it is not affected by reorganizations or other changes that result in the renaming or restructuring of domains. Serves as a neutral root so that no region appears to be subordinate to another region. Some organizations might prefer to avoid the appearance that one country/region is subordinate to another country/region in the namespace. When you use a dedicated forest root domain, all regional domains can be peers in the domain hierarchy. In a multiple regional domain environment in which a dedicated forest root is used, the replication of the forest root domain has minimal impact on the network infrastructure. This is because the forest root only hosts the service administrator accounts. The majority of the user accounts in the forest and other domain-specific data is stored in the regional domains. One disadvantage to using a dedicated forest root domain is that it creates additional management overhead to support the additional domain. Regional Domain as a Forest Root Domain If you choose not to deploy a dedicated forest root domain, then you must select a regional domain to function as the forest root domain. This domain is the parent domain of all the other regional domains and will be the first domain that you deploy. The forest root domain contains user accounts and is managed in the same way that the other regional domains are managed. The primary difference is that it also includes the Enterprise Admins and Schema Admins groups. 
The advantage to selecting a regional domain to function as the forest root domain is that it does not create the additional management overhead that maintaining an additional domain creates. Select an appropriate regional domain to be the forest root, such as the domain that represents your headquarters or the region that has the fastest network connections. If it is difficult for your organization to select a regional domain to be the forest root domain, you can choose to use a dedicated forest root model instead. In a Windows Server 2003 environment, global availability of the forest root is not as important as it is in Windows 2000 because forest-wide application partitions automatically replicate the forest-wide locator record zone to all domain controllers that are running DNS. Any domain controller can be used to write updates to the forest-wide locator records zone. In a Windows 2000 environment, DNS does not use a forest-wide application partition; therefore, it is recommended that a dedicated forest root be used to make the zone containing the writable copy of the forest-wide locator records highly available. For more information about DNS and the forest-wide application partition, see "Designing a DNS Infrastructure to Support Active Directory" later in this chapter. Forest Root Alternatives With Single Global Domain Model This model consists of a forest that contains a dedicated forest root domain and one additional global domain. The global domain contains all the user, computer, and group accounts for the entire organization. Figure 2.20 shows an example of a forest with a single global domain with and without a dedicated forest root. Figure 2.20 Single Global Domain Model With and Without Dedicated Root Forest Root Alternatives With Multiple Regional Domains This model consists of a dedicated forest root domain or a regional domain that is designated to be the forest root, and multiple regional domains that are children of the forest root, as shown in Figure 2.21. Figure 2.21 Multiple Regional Domains With and Without a Dedicated Forest Root In the case of multiple regional domains, a third alternative is to make each regional domain a separate tree in a single forest. While this design is fully supported, it is not recommended because of the complexity of the DNS deployment. Figure 2.22 Forest with Multiple Trees
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817729.0/warc/CC-MAIN-20240421040323-20240421070323-00351.warc.gz
CC-MAIN-2024-18
5,864
25
http://www.peachparts.com/shopforum/384467-post1.html
code
I just acquired my first car - a 1981 Merc 200 (W123 model). It has belonged to my dad since new, and now it's my own. The thing that's bothering me is the central locking. A few years back it started leaking (I think!), because after a couple of minutes locked it wouldn't unlock. Then we replaced the driver's door "switch", the three "valves" in the bonnet, and the 4-way tube (looks like a cross-roads, dunno what it's called). But that didn't solve anything. Then we blocked the vacuum pipe which fed the rear right door, the fuel cap and the luggage boot, and the remaining three doors work fine without any leaks. So the problem must be in the boot/fuel cap/rear right door. What should I do now? How can I track the fault? Thanks a lot
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718423.65/warc/CC-MAIN-20161020183838-00386-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
721
5
https://coderpad.io/blog/announcements/chatgpt-coderpad-interview-embracing-modern-developer-toolsets/
code
ChatGPT in CoderPad Interview: Our Commitment to Embracing Modern Developer Toolsets ChatGPT has absolutely dominated the conversation in software development and many other industries since its release at the end of last year. Two major questions follow from most discussions in our industry on the topic: - How are ChatGPT and similar tools going to impact software development? - How does this change the hiring process for software developers, if at all? How ChatGPT impacts software development For software developers, I view ChatGPT as a natural and amazing progression in the tools we use to become more efficient in our day-to-day work. Previously, it was a chore to find API documentation, examples of code for various programming languages and frameworks, and usable example code to get developers started. Now ChatGPT makes all of this information available through a more efficient interface. When I started writing code in the 90s, important information for everyday software development was gated off. It was only available through academic institutions and textbooks, and was commonly limited to a small group of developers within specialized communities or businesses. In the 2000s, this information became democratized in the form of mass-published books. Democratization of this information exploded once the internet took off – you could start to find API documentation and cookbooks on integrating different technologies. Fast forward to now. ChatGPT is just the latest advancement in a developer’s toolset that has been evolving for longer than most of us have been working as software developers. The continued democratization of information, like what we see with ChatGPT and similar tools, is something we should all celebrate. How should the hiring process change (if at all) with the introduction of ChatGPT? ChatGPT and similar tools will become essential in helping developers to be more productive on a daily basis. This raises the question: if we all agree that ChatGPT is a useful tool for developers to make parts of their job simpler, should we encourage its use in our interview process? Developers often use tools like Google, StackOverflow, API documentation, and IDE tooltips during interviews because they are tools we expect every developer to use to perform their job effectively. So, why not include ChatGPT in this category? At CoderPad, we’re committed to providing companies with the tools available to assess candidates’ skills, and giving candidates the opportunity to showcase their skills within our platform. 🔖 Related read: How to Embrace ChatGPT in Technical Interviews ChatGPT in the CoderPad Interview Our whole team has been buzzing with excitement since we revealed this internally. I’m equally excited to finally share this publicly. CoderPad will include a ChatGPT feature in our Interview product. All CoderPad subscribers currently have access to ChatGPT in an interview pad. Anyone who signs up for a free trial will also have access. This means candidates can use ChatGPT during a technical interview to do all the things they’d do in their day-to-day work—looking up method signatures, asking for clarity on how an API works, and so on. - Ability to query ChatGPT in the Interview product for both candidates and interviewers. - Full access to play back every query to ChatGPT and all responses after the interview is complete. You can also play back all code that was written and run in parallel to the ChatGPT playback. 
As you can tell, we’re really excited to roll out ChatGPT in our Interview product. But this is just the beginning. We’re even more excited about what’s to come. We have a lot planned on the roadmap that we’re convinced customers, interviewers and candidates will be equally excited for.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100686.78/warc/CC-MAIN-20231207185656-20231207215656-00032.warc.gz
CC-MAIN-2023-50
3,797
22
http://www.mylot.com/post/2605076/is-textcashnetwork-a-genuine-site-or-scam-anybody-using-it
code
is 'TextCashNetwork' a genuine site or scam ? anybody using it ? November 28, 2011 11:43pm CST i have heard about 'TextCashNetwork'. and it pays you good money . so is it true ? i m confused about it. anybody using it . if yes please mention something about it.i want to know more about it.
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00217.warc.gz
CC-MAIN-2018-26
290
3
http://forum.xda-developers.com/moto-x/moto-x-qa/issues-messaging-sending-mms-t2783003/post55803906
code
I rarely follow this forum as my wife won't let me touch her Moto X, so this is a non-rooted unlocked edition bought direct from Motorola and used with T-Mobile US. Her issue has been intermittent and not always reproducible: sometimes when she sends MMS they aren't received, but about 70% or more are, and about 99% of her SMS are received when sent. The issue has been present since day one (about 6 months ago now) and she only now has gotten fed up and wants a fix. She mostly uses Hangouts but the issue is present when using the default Android messaging app as well. APNs look correct, T-Mobile blames Motorola and Motorola blames T-Mobile. I was hoping someone would know if this is a common issue or know of a fix or know if there is someone who really is at fault (other than me for only doing minimum searching before posting this thread - at work now but pregnant wife needs a fix now...) any help would be much appreciated!
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049274119.75/warc/CC-MAIN-20160524002114-00072-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
927
6
https://experts.feverbee.com/t/how-to-group-content-prevent-large-topics/6534
code
One of the things that I have been pondering for quite some time now is how to properly organise discussions/topics on community sites/forums. Most forums end up having HUGE threads about specific topics. An example on experts.feverbee.com is: A lot of replies, small discussions which quickly get drowned in new introductions. Lots of suggestions, discussions about those suggestions but it’s hard to keep track because everything is ordered chronologically. Threaded discussions would be a logical solution but I’ve never really seen that work. Lots of small forums (i.e. an ‘introductions’ forum on Feverbee) would lead to an explosion of categories. Splitting topics is cumbersome and hard to do because lots of discussions are strongly intertwined in those topics. On my site we have a lot of ‘central’ topics. I.e. a topic about Game of Thrones, about suggestions for Christmas presents, about cute kitten photo’s, etc, etc. they work, but it’s impossible to have a proper discussion in one of them or to really keep track of followup posts. I did see that more modern software (like Discourse) enables you to refer to a previous posts in a smoother way than by quoting the reply (like vBulletin and the likes do). But that’s still not exactly optimal in my opinion. What do you think? Do you recognise this problem? Have you solved it?
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490225.49/warc/CC-MAIN-20190219142524-20190219164524-00273.warc.gz
CC-MAIN-2019-09
1,360
12
http://ttlnews.blogspot.com/2016/02/
code
Tomcat has many parameters for performance tuning (see http://tomcat.apache.org/tomcat-7.0-doc/config/http.html), but for some attribute descriptions it is not a 100% clear how some properties affect each other. A pretty good explanation regarding acceptCount and maxThreads can be found here: http://techblog.netflix.com/2015/07/tuning-tomcat-for-high-throughput-fail.html But that article is missing a ... picture! That's what I tried to create here, with the numbers 1-5 indicating the steps described in the above article: Just in case should the Netflix URL ever get lost and for easy reference, here's the 5 steps: The Tomcat attribute maxThreads worked best in my case with value 50, as it won't saturate the machine/CPU. (due to too many workerthreads, many context-switches) To set maxThreads when using Spring Boot + embedded Tomcat: http://stackoverflow.com/questions/31432514/how-to-modify-tomcat8-acceptcount-in-spring-boot - Could not get acceptCount set via Spring Boot + embedded tomcat.... Tried by configuring my own embedded Tomcat container (e.g as in here and here): - This was for a pure Tomcat setup, no Apache webserver in front of it. - Special care when providing your own executor, see a.o: https://www.packtpub.com/books/content/overview-tomcat-6-servlet-container-part-2 - Performance monitoring & profiling was done with tools like: JVisualVm, Java Mission Control (JMC + Java Flight Recorder), JProfiler. - Useful calculation that also shows just increasing maxThreads doesn't mean performance will get better: http://stackoverflow.com/questions/12600826/increase-number-of-concurrent-connections-in-tomcat-7 - Some more tuning tips: http://www.genericarticles.com/mediawiki/index.php?title=How_to_optimize_tomcat_performance_in_production - Extra explanation of the attributes, also why certain NIO/BIO settings are recommended: http://stackoverflow.com/questions/25356703/tomcat-connector-architecture-thread-pools-and-async-servlets - To disable/replace the default connector: http://stackoverflow.com/questions/28050354/spring-boot-replace-default-embedded-tomcat-connector - Did not find the thread architecture at this obvious place, or any other search: http://tomcat.apache.org/tomcat-7.0-doc/architecture/requestProcess.html - Similar diagram but for a webserver; none-the-less interesting diagram: http://www3.nd.edu/~dthain/courses/cse30341/spring2009/project4/pool.gif
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00251-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
2,411
16
https://www.runeaudio.com/forum/what-s-the-difference-with-runeaudio-t7119.html
code
I just received a new Raspberry Pi on which I was planning on installing Raspyfi, but realized it had forked in two projects (Volumio and RuneAudio). Could you tell me what are the differences between these two projects so I can know which one fits best my needs? I mean, if TsunAmp never saw the light is probably because some developers wanted to bring it to different directions, which I suppose each respective project now followed. So, if it’s easier than saying what are all the differences between the two, what are those two directions each project followed? BTW, I asked the same question on RuneAudio forum but did not receive a clear answer about my sonnerie Thank you for your answer and for this project.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141747887.95/warc/CC-MAIN-20201205135106-20201205165106-00325.warc.gz
CC-MAIN-2020-50
719
5
https://unix.stackexchange.com/questions/451732/opening-many-tunnels-typing-the-password-only-once-all-accounts-in-gateways-hav/486141
code
I want to open many tunnels at once; they all have the same long password.
ssh -fN -p 22 usr1@gate1 -L 10001:ip1:22
ssh -fN -p 22 usr2@gate2 -L 10002:ip2:22
ssh -fN -p 22 usrn@gaten -L 1000n:ipn:22
I can open the tunnels in the background, which allows me to run them all together and then just type the password consecutively, as many times as the number of tunnels I am opening. Given that what I type is the same, I would like to find a way to type it just once, but still do it in a secure way.
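One possible way to automate this is sketched below (an assumption, not the only or most secure option): it requires sshpass to be installed, reads the shared password once with getpass, and hands it to each ssh invocation through the SSHPASS environment variable, which sshpass -e reads. SSH keys or a ControlMaster connection would generally be preferable where they are an option.

```python
# Sketch: read the shared password once, then open every tunnel via sshpass.
import getpass
import os
import subprocess

tunnels = [
    ("usr1@gate1", "10001:ip1:22"),
    ("usr2@gate2", "10002:ip2:22"),
    # ... one entry per gateway
]

env = dict(os.environ, SSHPASS=getpass.getpass("Password: "))  # typed once
for target, forward in tunnels:
    # sshpass -e takes the password from the SSHPASS environment variable,
    # so it does not appear on the command line or in the process list.
    subprocess.run(
        ["sshpass", "-e", "ssh", "-fN", "-p", "22", target, "-L", forward],
        env=env, check=True,
    )
```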
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00721.warc.gz
CC-MAIN-2024-18
494
6
https://forum.restic.net/t/backup-script-error-logging-and-unlock-placement/1892/5
code
Hi. Loving restic so far. I’m using it to backup a webserver directory to a business Google Drive. The backup script runs without issue every day. I have a prune script which runs once a week, which occasionally throws an error (locking) and fails to backup. The lock seems to be from the previous prune process (i.e. its dated approx 168 hours ago). I have a couple of questions about this. The script logs the output of the prune process to a text file, and then emails it to me. The relevant line is: /usr/bin/restic --repo rclone:mydrive:directory forget --prune --verbose --keep-daily 6 --keep-weekly 10 >> $LOGFILE This works great when restic is behaving itself. However when it receives a lock error (or, I’m guessing, other errors), it doesn’t log this to the text file. There is NO output. It would be useful to have this information to see at a glance that something has gone wrong, and, optionally, also be able to use this information to make decisions in the script. eg. If lock detected with ‘check’ then run ‘unlock’, and send me an alert. How do I make the output of restic errors appear in my logfile? I understand the correct way to fix the lock error is to run the restic ‘unlock’ command. First of all is there any downside to running it before I run each weekly prune? If there is no issue, then I might as well append it to the front of the script before pruning. Or would it be better to run it at the end of the script, to ensure the locks are released at the end of the run?
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027322170.99/warc/CC-MAIN-20190825021120-20190825043120-00361.warc.gz
CC-MAIN-2019-35
1,518
6
https://liliputing.com/vlc-0-1-0-media-player-for-windows-8-1-released/
code
The developers behind open source media player VLC have released a second major version of their Windows Store app. The update brings a new user interface and performance and stability improvements. The team is also working on bringing VLC to Windows RT so it can run on tablets with ARM-based chips like the Microsoft Surface 2 and Nokia Lumia 2520. But that version’s not ready to go yet. VLC 0.1.0 requires Windows 8.1 or later. It won’t run on Windows 8. And it clearly won’t run on older versions of Windows which don’t include the Windows Store at all. You can still install the desktop version of VLC on those systems… or on Windows 8.1. Among other things the latest version of VLC uses the libVLC 2.2.0 core, uses Winsock instead of WinRTsock for networking, and uses Universal Windows app code so that this version can eventually be ported to run on Windows Phone as well as Windows 8.1 and Windows RT. The new user interface includes a dual-panel layout that makes it easier to switch between categories including home, videos, music, and removable devices. The app also supports Microsoft’s Windows 10 technical preview, which allows you to run VLC in full-screen or windowed modes.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510707.90/warc/CC-MAIN-20230930181852-20230930211852-00713.warc.gz
CC-MAIN-2023-40
1,206
7
https://answers.informer.com/134993/photo-transfer-with-photo-vault
code
If the photos are in a Photo Vault used by another application, then you need to unlock them in case they're password protected. After this procedure, you can simply copy the pictures to the PC and transfer them to the new phone. Alternatively, connect the phone with the pictures to the PC and create a backup. When it finishes, connect the second phone and use Restore from Backup. This operation requires iTunes to be installed.
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806225.78/warc/CC-MAIN-20171120203833-20171120223833-00326.warc.gz
CC-MAIN-2017-47
421
2
http://askubuntu.com/questions/tagged/notify-send+libnotify
code
I've heard about notify-send of libnotify-bin, but it only seems purposed for GUI desktops. Is there a simpler counterpart that's just for consoles? Similar to the warning/notification we get when ... A script of mine shows notifications via notify-send. When I'm actively using my system they disappear after some seconds, but when f.i. I'm not using my mouse or keyboard, or my screen is turned off, ... sending notifications with notify-send in lubuntu notify-send -i error -t 1000 "Error" "error notification" I can send only 21 of them, after that no more notifications sent, the only way to ...
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642523/warc/CC-MAIN-20140305060722-00083-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
600
3
https://www.cfd-online.com/Forums/openfoam-solving/59634-official-cvssvn-repository-print.html
code
Is there an official cvs or svn repository with the up to date version of OF, so that I can merge my code with that? I have found one here SVN Rostock (user=gast, no password) and something here. And maybe there are other sites with their own repository. But, how can I merge my code with the repository now? How and where can I do that? I don't want to do that at different sites. And maybe every year with a new OF version. You should contact Henry Weller if you want your solver to be in the official OpenCFD release, I think. With kind regards,
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280310.48/warc/CC-MAIN-20170116095120-00373-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
663
6
https://answers.sap.com/questions/3518428/how-to-list-sales-orders-based-on-creation-date-an.html
code
How can we list Sales Orders based on creation date and delivery priority. I tried using vl10a transaction code, but there we can see sales order based on delivery date. we need to list all sales order based on delivery priority and sales order creation date. can any one of you tell me which standard report gives such kind of report. Your suggestions will be highly appreciated.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500813.58/warc/CC-MAIN-20230208123621-20230208153621-00177.warc.gz
CC-MAIN-2023-06
380
4