url: string (13–4.35k chars)
tag: string (1 distinct value)
text: string (109–628k chars)
file_path: string (109–155 chars)
dump: string (96 distinct values)
file_size_in_byte: int64 (112–630k)
line_count: int64 (1–3.76k)
https://gitlab.kitware.com/third-party/visit/-/raw/bf97e2b86c7979931561c768035729a06c2061c9/help/relnotes1.1.1.html?inline=false
code
VisIt 1.1.1 Release Notes

Welcome to VisIt's release notes page. This page describes the important enhancements and bug fixes that were added to this release.

Features added in version 1.1.1
- The views toolbar has a new icon that lets you save your current view and use it again later, so you can easily save several views and choose between them.
- The Color Table Window now allows you to change the size of discrete color tables, such as the color table used with the Subset plot, so they can contain as many colors as you want, up to 200.
- VisIt's CLI can now be run without a display when the -nowin option is provided.
- VisIt's compute engine now loads its plot, operator, and database plugins on demand, so it starts up twice as fast on average.
- The Subset window now has "turndowns" so that it is easier to expand subsets.
- The window toolbar now has a clear window icon.
- The Main Window's file panel has been modified so that when you click on a file that you have opened before, that file becomes the active file. This means that to re-open a file, you have to click on the ReOpen button.

Bugs fixed in version 1.1.1
- VisIt's GUI sometimes crashes with Qt errors when exiting.
- Stopping playing animations from VisIt's GUI is much more responsive.
- Printing the window causes VisIt to crash if you have not set up a printer.
- The viewer waits for the engine to open a database.
- Cloning a window containing plots that use a time-varying database causes the viewer to crash.
- VisIt crashes when attempting to save a window with no plots in it.
- The rubber-band line flickers when drawing a lineout.
- Not all file formats honor the family flag, set in the Save options window, when creating a filename.
- VisIt's documentation has been brought up to date so that it is compliant with VisIt 1.1.
- VisIt crashes if you do a lineout and the Curve plot and Lineout operator plugins are not loaded.
- VisIt sometimes unnecessarily redraws windows when locking views.
- The Reflect operator has a parallel bottleneck.
- Pick does not work after the compute engine has crashed or exited.
- The vis window does not immediately redraw when inverting the background and foreground colors using the invert background toolbar button.
- Globally accessible host profiles now contain the path to VisIt.
- The toolbar for windows created via lineout now updates after the window is created.
- VisIt's viewer no longer prints out a warning message when launching a metadata server on a machine where it has not yet been launched.
- VisIt's launch program, which is used on Windows, has been changed to filter other versions of VisIt from the path so multiple versions of VisIt can be installed and run on the same computer.
- Pick points are now cleared from the vis window when the animation timestep changes.
- Changes that you make to system-wide host profiles are now preserved in your local configuration settings.

Click the following link to view the release notes for the previous version of VisIt: VisIt 1.1 Release Notes. Click the following link to view the release notes for the next version of VisIt: VisIt 1.1.2 Release Notes.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419593.76/warc/CC-MAIN-20200601180335-20200601210335-00265.warc.gz
CC-MAIN-2020-24
3,162
36
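The -nowin flag called out in the features above pairs with -cli for headless batch rendering. A minimal sketch of such a script, assuming a standard VisIt install; the data file wave.silo and the variable name pressure are hypothetical placeholders, while OpenDatabase/AddPlot/DrawPlots/SaveWindow are VisIt CLI functions:

```python
# Run with: visit -cli -nowin -s render.py
# "wave.silo" and "pressure" are made-up placeholders.
OpenDatabase("wave.silo")           # load the dataset
AddPlot("Pseudocolor", "pressure")  # plot a scalar field
DrawPlots()                         # renders without a display thanks to -nowin
SaveWindow()                        # writes the current window to an image file
```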
http://ciqqqqt.xyz/archives/138
code
Novel: Monster Integration - Chapter 2147

The ghosts had seeped inside me and came at me, and for some strange reason, around 90% of my defenses did not detect them; the 10% that had detected them were crushing them one by one, but it would not be long before the first ghost reached me.

Immediately after it faded, another set of ghostly fingers touched my shields and faded inside me. The next ghost came at me half a second after the first, and as it entered my armor, I sent it into the quern.

While I was angry at it for what it had done to people, I understood the reality and knew I was powerless against it. This is a Grandmaster, and for me to stand against its attack without dying is already a miracle.

The seed had finally been able to suck in the Grimm vidette completely, not leaving behind a speck, and it began to bring the great changes to the seed that I had been hoping for. If I had not been in a life-and-death situation, I would have watched it without missing a beat, but right now I could not.

It does what its name states: it suppresses the space, or can be applied to the opposite effect. I had used it earlier against the Grimm vidette; if I had caught it inside my core without it, there was a high chance the vidette would have torn through the space of my core with its power.

I was again running through a hundred different options for dealing with the ghosts when my gaze fell on the silver leaf, and a daring plan came into my mind.

These ghosts are devouring curse wraiths of the highest grade, and they are still extremely dangerous. They are made from the essence of Grimm Monsters sacrificed through the secret method, which, from what I had read, is quite gruesome.

It had truly reached its breaking point.

One after another, the ghosts came, and I sent them directly into my quern. They are coming at extremely fast speed, and I am feeding all of them into the quern while weakening my aura and even staggering a little; I am sure it is sensing something happening to its ghosts, and I don't want it to take any action against me, hence the acting.

Those faces of humans, monsters, and Grimms that can be seen are beings this Parrotman had sacrificed to create these ghosts; it did not spare even its own kin from such grisly murder.

The silver leaf is a legacy treasure I had gotten from the pyramid, and it seemed to be made perfectly for me.

The quern materialized in my core; at the same moment, the silvery leaf flew over it and covered it with its silver shine. I have less than a minute with the silver leaf before its power is spent; I hope I will be able to deal with them before that.

Raa Raa Raa

The gigantic disfigured ghosts descended on me wailing loudly, and as they reached me, they did strange things. They stopped and slowly moved their hands and claws toward my shields.

When I understood what they were, I activated a few more defenses, and that immediately slowed the ghosts down, but it only slowed them down. I had barely been able to buy myself a second before this overwhelming number of ghosts, coming toward me one after another, would crush me.

Just as I finished my preparations, the first ghost with a monster face appeared inside my armor, and I immediately sent it to my core without letting it touch my soul. If the plan falls flat, I will be giving it an even better path to my soul, where it will face next to no resistance in devouring it.

To the ghosts, the leaf is not only suppressing the space but also strengthening the powerful spatial formations I had carved onto it.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00799.warc.gz
CC-MAIN-2022-40
6,720
40
https://devpost.com/software/datax-5y78dn
code
With the world migrating into a more technology-oriented era, the amount of data we use has naturally grown exponentially as well. With more data, the need to interpret, analyze, and manipulate it has grown just as fast. The idea originally came from one of our teammates who had previously used a visualization tool called Gephi. In his time using it, however, the software was limited to a 2D scope and had to use characteristics like size and color to represent additional dimensions, making plots harder to interpret. Additionally, with many data points the plot quickly looked cluttered. To make this visualization and interpretation easier, we created DataX.

What it does
Cutting to the chase, our application does 3 things:
- Converts an uploaded CSV into an interpretable format (lists and JSON objects).
- Uses Three.js to create a multi-dimensional representation from the interpretable format created in the previous step.
- Implements dat.GUI for customizable manipulation of the data set, such as restrictions and styling.

How we built it
We divided the work into 3 phases:
- Initially, we split into two teams: one that focused on developing the home page of the website and another that familiarized themselves with three.js.
- Development of the visualization screen.
- Integration between the screens.

Challenges we ran into
- Conversion from CSV to a readable format like lists and JSON objects.
- Originally, we were planning on using Vue + three.js; however, we later decided against it because, while the Vue + three.js library was well done, it was incomplete and lacked enough functionality that we would have had to spend a lot of time implementing it ourselves.
- Finding data that visualized well in multiple higher dimensions was difficult as well.

Accomplishments that we're proud of
We are proud of implementing a working product in the time frame. In only 24 hours we created a tool that is functional, sleek, and has practical application. We are also proud of the way we picked up a topic that was completely unfamiliar to all of us in such a short time.

What we learned
Adam - I learned how to use three.js for 3D rendering and dat.GUI for customized controls.
David - I learned very basic spreadsheet manipulation in LibreOffice Calc, how the CSV format works, and how to use the csv package in Python.

What's next for DataX
We would like to add even more functionality in the form of display data restrictions, better visual scaling, and better customizable scaling for the data itself (logarithmic scaling, clamping, linear scaling, etc.). There are additional images of sample visualizations in the slideshow above.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100550.40/warc/CC-MAIN-20231205073336-20231205103336-00815.warc.gz
CC-MAIN-2023-50
2,620
24
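The CSV-to-JSON conversion the team lists as a challenge is compact with Python's standard library; a minimal sketch, with the file names and the numeric-coercion policy as illustrative assumptions rather than the team's actual code:

```python
import csv
import json

# Read a CSV file into a list of dicts (one per row), then emit JSON.
# "data.csv" / "data.json" are placeholder names.
with open("data.csv", newline="") as f:
    rows = list(csv.DictReader(f))   # header row becomes the dict keys

# Coerce numeric-looking fields so a plotting layer can treat them as numbers.
for row in rows:
    for key, value in row.items():
        try:
            row[key] = float(value)
        except ValueError:
            pass                     # leave non-numeric fields as strings

with open("data.json", "w") as f:
    json.dump(rows, f, indent=2)
```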
https://wn.com/Dot_plot_(bioinformatics)
code
Slashdot (sometimes abbreviated as /.) is a news website that originally billed itself as "News for Nerds. Stuff that Matters". It features news stories on science and technology that are submitted and evaluated by its users. Each story has a comments section attached to it. Slashdot was founded in 1997 by Hope College student Rob Malda, also known as "Commander Taco", and classmate Jeff Bates, also known as "Hemos". It was acquired by DHI Group, Inc. (i.e., Dice Holdings International, which created the Dice.com website for tech job seekers). Summaries of stories and links to news articles are submitted by Slashdot's own readers, and each story becomes the topic of a threaded discussion among users. Discussion is moderated by a user-based moderation system. Randomly selected moderators are assigned points (typically 5) which they can use to rate a comment. Moderation applies either -1 or +1 to the current rating, based on whether the comment is perceived as "normal", "offtopic", "insightful", "redundant", "interesting", or "troll" (among others). The site's comment and moderation system is administered by its own open source content management system, Slash, which is available under the GNU General Public License.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141711306.69/warc/CC-MAIN-20201202144450-20201202174450-00607.warc.gz
CC-MAIN-2020-50
1,240
3
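As a toy illustration of the moderation mechanics described above (a moderator spends one of a small allotment of points to move a comment's rating by +1 or -1, within Slashdot's customary -1 to 5 score range), a minimal Python sketch; the class and function here are hypothetical, not Slashdot's actual Slash code:

```python
# Toy point-based moderation: each moderation spends one point and moves
# the comment score by +1 or -1, clamped to [-1, 5]. Illustrative only.
class Comment:
    def __init__(self, score=1):
        self.score = score

def moderate(comment, delta, moderator_points):
    if moderator_points <= 0 or delta not in (-1, +1):
        return moderator_points                   # no points left, or invalid rating
    comment.score = max(-1, min(5, comment.score + delta))
    return moderator_points - 1                   # one point spent per moderation

points = 5                                        # typical allotment
c = Comment()
points = moderate(c, +1, points)                  # "insightful" -> +1
points = moderate(c, -1, points)                  # "troll" -> -1
print(c.score, points)                            # 1 3
```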
https://forum.sketch.com/t/locate-a-layer-or-symbol-that-is-using-a-specific-font/1105
code
Hello there! I am working on a design system with two other designers, and I happened to notice that the Roboto font suddenly appeared in one of our Sketch documents. I found this out when I attempted to save the document and Sketch asked if I wanted to embed the font in the file. We don't use Roboto in our design system, so I think it must have been accidentally pasted in there. The problem is no one knows what layer or symbol is using it. Is there a plugin or some other way of finding out where a font is being used? I have Automate but didn't see anything in there that would cover this. Thanks! Woo hoo! It works. And also, this plugin showed me that Roboto was actually meant to be in the design system, ha! We have a YouTube video symbol where all the text is in Roboto. I forgot we had that. Oddly enough though, that symbol has been there for a long time and it has never before asked me to embed the font.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655143.72/warc/CC-MAIN-20230608204017-20230608234017-00572.warc.gz
CC-MAIN-2023-23
920
4
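Absent a plugin, a .sketch document can also be searched directly, since it is a ZIP archive of JSON files. A minimal sketch in Python; the file name is a placeholder, and this does a plain text search rather than walking Sketch's real layer model:

```python
import zipfile

# A .sketch file is a ZIP of JSON documents; grep them for a font name.
# "design-system.sketch" and "Roboto" are placeholders.
FONT = "Roboto"
with zipfile.ZipFile("design-system.sketch") as archive:
    for name in archive.namelist():
        if not name.endswith(".json"):
            continue
        text = archive.read(name).decode("utf-8", errors="replace")
        if FONT in text:
            print(f"{FONT} referenced in {name}")   # e.g. pages/<uuid>.json
```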
http://www.rightscale.com/library/server_templates/Radiant-CMS-on-IBM-DB2-Express/16950
code
About this template
Using this template you can create a Radiant CMS blog powered by Ruby on Rails and DB2 in a matter of minutes. This template installs everything that is required to have a fully functional Radiant CMS blog that works with DB2 on a micro Amazon instance (which is currently free for a year for new customers):
* Ubuntu 10.04 LTS Server Edition (32-bit)
* nginx + Passenger
* Rails framework
* IBM Ruby driver and Rails adapter for DB2 and IDS
* IBM DB2 Express-C database server

A sample Radiant CMS blog will be available at the URL corresponding to your public DNS on the standard port 80. To customize it, append /admin to the URL and enter admin as the username and radiant as the password.

If you wish to use this template server in production mode, we highly recommend that you:
* Change the passwords for the Linux users db2inst1, dasusr1, and db2fenc1 from password to more secure ones;
* Update the /opt/www/blog/config/database.yml file with the password you just changed for the user db2inst1;
* Change the Radiant CMS admin password from radiant to something much more secure.

This template includes a number of operational scripts that you may find useful in managing your DB2 databases. To see the scripts and to learn how to use them, see the "Scripts" tab for this server template.

MultiCloudImage: Radiant CMS on IBM DB2 Express-C [rev 4]
RightScript: Start nginx + Passenger [rev 1]
RightScript: SYS Monitoring install v9 [rev 12]
RightScript: Start DB2 Administration Server [rev 4]
RightScript: Start DB2 [rev 1]
RightScript: Restart nginx + Passenger [rev 1]
RightScript: Start nginx and DB2 [rev 4]
RightScript: Stop DB2 Administration Server [rev 4]
RightScript: Stop DB2 [rev 1]
RightScript: Stop nginx + Passenger [rev 1]

Revision 4 | Jan 28, 2011 - Added support for the US-West and AP AWS clouds.
Revision 3 | Jan 27, 2011 - Added support for the European AWS cloud.
Revision 2 | Dec 17, 2010 - Added operational scripts.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450581.71/warc/CC-MAIN-20151124205410-00114-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
1,965
21
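For the database.yml update recommended above, a small scripted sketch using PyYAML; the production/password key layout is an assumption about a standard Rails database.yml, and rewriting the file this way drops any YAML comments:

```python
import yaml  # PyYAML

# Update the DB2 password in Radiant's database.yml (path from the
# template docs; key names assume a conventional Rails layout).
path = "/opt/www/blog/config/database.yml"
with open(path) as f:
    config = yaml.safe_load(f)

config["production"]["password"] = "new-secure-password"  # placeholder value

with open(path, "w") as f:
    yaml.safe_dump(config, f, default_flow_style=False)
```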
https://mycrafts.com/diy/how-to-make-bunny-with-fuse-bulb-and-cotton/
code
IN THIS VIDEO I SHOW YOU HOW TO MAKE A BUNNY USING A FUSE BULB AND COTTON. 3. A FUSE BULB. IF YOU HAVE ANY QUESTION ABOUT IT, CONTACT ME: 1996peeutiwari@ ...
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964360951.9/warc/CC-MAIN-20211201203843-20211201233843-00392.warc.gz
CC-MAIN-2021-49
248
7
https://www.designernews.co/comments/197845
code
Designer News is where the design community meets.

almost 6 years ago from Max Lind, sometimes Maxwell

I hope all the posts flagged as spam are being entered into some sort of naive bayes classifier so in the end we won't have to flag them ourselves ;)
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00347.warc.gz
CC-MAIN-2021-43
414
6
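The naive Bayes classifier the comment wishes for is a few lines with scikit-learn; a toy sketch with made-up posts and labels, purely illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy spam flagger in the spirit of the comment above: learn from posts
# users have already flagged, then score new posts. All data is made up.
posts = ["buy cheap watches now", "great redesign writeup",
         "work from home earn $$$", "thoughtful critique of the grid system"]
flagged = [1, 0, 1, 0]                    # 1 = flagged as spam by users

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, flagged)

print(model.predict(["cheap watches from home"]))   # likely [1]
```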
https://neurostars.org/t/dataset-for-tractography/17079
code
I am a beginner in the field of neuroimaging. For the past few weeks I have read various research papers related to tractography. I am looking to implement some algorithms. For that I have been trying to collect some datasets, but I am getting confused about what is actually needed to implement and validate tractography algorithms. In some datasets I get only .nii files, whereas in others I get .trk files, .fib files, and various masks. Does anyone have a dataset arranged in a structured manner, with an implementation, as it would be very useful in getting me started? Hi. I also want to add that I have one file with the .nrrd extension. I would suggest processing provided datasets with available tutorials. These will help you understand the processing pipelines: With regards to file extensions: Raw data from a scanner is typically in DICOM format (though proprietary Philips PAR/REC files are sometimes seen). While some DICOM files have the extension .dcm, many have no file extension. Since each vendor describes diffusion data differently, DICOM is not one format but many. One typically converts these images to NRRD format (.nrrd or .nhdr, if you use Slicer) or NIfTI (.nii, other tools). Initial processing typically removes artifacts and noise (e.g. degibbs, dwidenoise) and spatially undistorts the images (e.g. TOPUP/Eddy). Then the DWI images are fitted as tensors (e.g. either as a 3x3 matrix or V1, V2, V3), or with more sophisticated models of the ODF. The fitting also provides scalar values like TRACE, MD, MK, etc. These voxelwise values are typically saved in NIfTI format. Some tools store them in custom formats (e.g. DSI Studio's .fib). Optionally, voxels can be connected together using streamlines. Popular formats include TCK, TRK, BFloat, DAT, and VTK (confusingly also saved as .fib). Hi @Chris_Rorden. Thanks for the response. I have some queries. I have been reading about fiber reconstructions like representing DWI using tensors (DTI), Q-ball, CSD, etc. In one paper titled DeepTract (by Itay Benou) I saw that an RNN model has been used instead of these models to form an fODF. The snippet above lists what is required to implement the algorithm. I have downloaded some datasets like the ISMRM 2015 challenge and Fiberfox, but in these datasets I have the .nii files and the ground truth of the 25 fiber bundles. As per the GitHub repository for the above paper, a data format is given, and I am not sure where to get the labels, mask, and wm_mask it requires. You might want to check out the examples at dipy.org. Several of the examples automatically download data and process it from the raw data all the way to tractographies. For example: https://dipy.org/documentation/1.2.0./examples_built/tracking_introduction_eudx/#example-tracking-introduction-eudx Hi @Ariel_Rokem, thanks for the link. Hi @RS_A, can you mark this thread as solved? It helps sort the different issues. Thank you in advance. Hi @abore. How do I do that? I am not sure about the process. You should see a "Solution" button on each reply; you can click the one under Ariel's answer. Thanks @abore for helping out.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104354651.73/warc/CC-MAIN-20220704050055-20220704080055-00122.warc.gz
CC-MAIN-2022-27
3,188
19
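Following the dipy.org pointer in the thread, a minimal tensor-fitting sketch (the standard DIPY DTI flow: load a NIfTI DWI plus bvals/bvecs, fit tensors, save an FA map); file names are placeholders, and this is not the DeepTract RNN approach discussed above:

```python
import nibabel as nib
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

# File names are placeholders for your own dataset.
data, affine = load_nifti("dwi.nii.gz")                  # 4D DWI volume
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")  # acquisition scheme
gtab = gradient_table(bvals, bvecs)

tenfit = TensorModel(gtab).fit(data)                     # voxelwise tensor fit
nib.save(nib.Nifti1Image(tenfit.fa, affine), "fa.nii.gz")  # fractional anisotropy map
```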
https://www.retroveteran.com/2022/10/23/1k-space-doors-v2022-09-28-pico-8-game/
code
1k Space Doors by SkyBerron is an arcade shooter made for PICO-1K Jam 2022. The goal of the game is quite simple. You move your spaceship in a vast, abstract space. Avoid all the oncoming rocks and doors that would smash you in a second. It is an endless space, so score as high as possible.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474690.22/warc/CC-MAIN-20240228012542-20240228042542-00383.warc.gz
CC-MAIN-2024-10
291
2
https://mixomics-users.discourse.group/t/dividing-dataset-to-create-model/501
code
Dear mixOmics developers, I created a model using your R package's PLS-DA function. The function provides M-fold cross-validation. In this case, when we have a small sample dataset, does it still need to be divided into training and test sets, considering that the function already performs cross-validation?
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178363211.17/warc/CC-MAIN-20210302003534-20210302033534-00537.warc.gz
CC-MAIN-2021-10
304
2
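mixOmics is R, but the underlying question is language-agnostic: an outer held-out test set checks the model after the inner M-fold cross-validation has been used for tuning. An analogous illustration in Python with scikit-learn, not mixOmics code; the data and classifier are stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

# Toy data standing in for a small omics matrix.
X, y = make_classification(n_samples=60, n_features=30, random_state=0)

# Outer holdout: the test set is never touched during tuning/CV.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000)
cv_acc = cross_val_score(model, X_tr, y_tr, cv=5).mean()  # inner M-fold CV estimate
test_acc = model.fit(X_tr, y_tr).score(X_te, y_te)        # independent final check
print(f"CV accuracy: {cv_acc:.2f}, held-out accuracy: {test_acc:.2f}")
```

With very small samples, a common compromise is to skip the outer holdout and report cross-validated performance alone, accepting that no fully independent estimate remains.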
http://ab-rtfm.blogspot.com/2007_08_05_archive.html
code
Anyway, back to ASLEAP... Here are some of the features that ASLEAP has to offer (check out http://asleap.sourceforge.net/ for a complete list, plus PPTP support):
- Recovers weak LEAP passwords (duh).
- Can read live from any wireless interface in RFMON mode.
- Can monitor a single channel, or perform channel hopping to look for targets.
- Will actively deauthenticate users on LEAP networks, forcing them to reauthenticate. This makes the capture of LEAP passwords very fast.
- Will only deauth users who have not already been seen; doesn't waste time on users who are not running LEAP.
- Can read from stored libpcap files, or AiroPeek NX files (1.X or 2.X files).
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00070.warc.gz
CC-MAIN-2018-13
669
8
https://shop.wrkshp.tools/courses/startup-ideas/90234-module-3-the-problem-you-solve
code
Let's face it. The point of view you defined in the previous module is just a nicely packaged stack of assumptions, since it has not been tested against the real world. You need to start understanding what the problem you want to solve, and your point of view on it, mean to the people who have the problem in the first place: your customers. What do they think? Learn to fall in love with the problem.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585177.11/warc/CC-MAIN-20211017113503-20211017143503-00534.warc.gz
CC-MAIN-2021-43
404
3
https://community.nxp.com/thread/490871
code
Is it possible to switch a CAN socket from basic to extended? To be able to accurately answer your question, please specify exactly which NXP device you mean. Have a great day, Artur. I am working with the i.MX 6ULL and using only one CAN port for my application (say CAN0). Now my question is: 1. If I configure the CAN port baud rate as 500 kbps, will it be possible for me to change the baud rate to 1 Mbps on the fly? I am expecting data from some sensors at a baud rate of 500 kbps; once I receive a bunch of data, I have to send it on to another device at a CAN baud rate of 1 Mbps. So is switching between baud rates possible, and how can I do that? To change the FlexCAN baud rate, you have to stop the FlexCAN controller (put it in Freeze mode) and re-program the FLEXCAN_CTRL1 register according to the new baud rate. How can I set a CAN Rx filter so that I can avoid receiving data from unwanted nodes connected to the same bus?
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739177.25/warc/CC-MAIN-20200814040920-20200814070920-00257.warc.gz
CC-MAIN-2020-34
969
9
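On Linux, the usual environment for the i.MX 6ULL's FlexCAN ports, both questions map onto SocketCAN: the bitrate is changed with the interface down (mirroring the Freeze-mode requirement mentioned above), and Rx filtering can be applied per socket. A minimal sketch with the python-can library; the channel name, IDs, and masks are illustrative assumptions:

```python
import subprocess
import can  # python-can

# Change the SocketCAN bitrate: the interface must be brought down first,
# echoing FlexCAN's Freeze-mode requirement. Requires root; "can0" and
# all IDs below are placeholders.
def set_bitrate(channel, bitrate):
    subprocess.run(["ip", "link", "set", channel, "down"], check=True)
    subprocess.run(["ip", "link", "set", channel, "up", "type", "can",
                    "bitrate", str(bitrate)], check=True)

set_bitrate("can0", 1_000_000)   # e.g. switch from 500 kbps to 1 Mbps

# Rx filter: accept only standard-frame ID 0x123 (mask 0x7FF checks all 11 bits).
bus = can.interface.Bus(channel="can0", bustype="socketcan",
                        can_filters=[{"can_id": 0x123,
                                      "can_mask": 0x7FF,
                                      "extended": False}])
msg = bus.recv(timeout=1.0)      # frames from other IDs never reach us
if msg is not None:
    print(hex(msg.arbitration_id), msg.data)
```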
https://clairefuller.wordpress.com/my-policy-on-moderation-on-this-blog/
code
Of course I believe in freedom of speech, but I also believe in maintaining strong boundaries and safe spaces for people who are learning and growing and expressing tender, inner parts of themselves, distress about the world, and hopes and visions they one day hope to manifest in the world. I intend to do all those things on this blog and to be quite vulnerable. So I plan to maintain this blog with one bias - the same single bias I intentionally maintain in my personal life. That is, any philosophy, theology, idea, opinion, argument, worldview, assumption, or otherwise which at its core instills apathy or cruelty, I reject outright. Anyone who wants to express opinions of this nature can start their own blog and copiously quote me and have at anything I've written. But they can't post on here. I intend to keep this a safe space for myself and for other people who, like me, are searching for reason to hope and change and to be bold enough to express their small, timid intuitions and convictions creatively in the real world. So no meanness, no pettiness, and no uncalled-for brashness allowed on here. Go elsewhere with it, and have a heyday. (My decision was informed by the genius and blog policy of Harriet Jacobs, which can be found in the comments part of an amazing post that can be found here.)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593302.74/warc/CC-MAIN-20180722135607-20180722155607-00344.warc.gz
CC-MAIN-2018-30
1,326
4
https://apple.stackexchange.com/questions/91051/how-to-have-the-same-gradient-fill-background-on-all-slides
code
How do you change the background gradient fill for all slides? I created a nice gradient fill for the first slide and I need it to carry across all of the slides in the presentation. Thank you.

1) Edit a "Master Slide" (menu: View -> Show Master Slides).
2) Edit the master slide (you can duplicate it if you don't want to touch the original).
3) The change automatically applies to all the slides.

See how Slide 1 now has a gradient, even though I have only modified a Master Slide. Play around with (and Google) Master Slides in Keynote and you will likely understand how it works.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585120.89/warc/CC-MAIN-20211017021554-20211017051554-00227.warc.gz
CC-MAIN-2021-43
584
6
https://www.smspower.org/forums/15470-SC3000TapeSoftwareRestorationProject
code
SC-3000 Tape Software Restoration Project

|Posted: Thu Apr 16, 2015 4:35 am As Tycho has been urging me to convert the Softgold games to run on the SC-3000 multicart, I finally kicked off something I've been meaning to do for the past couple of years: the SC-3000 Tape Software Restoration Project. The idea is to make clean digital copies of the tape audio available as 16-bit 44.1KHz mono WAV files. They work on original SC-3000 hardware and emulators like MESS, and they are highly compressible - in the order of 50:1 to 100:1, e.g. the Vortex Blaster audio compresses down from approximately 14.5MB to around 172KB if you use RAR. So this solves both the problem of degraded source audio and the space problem, as 60 seconds of normal audio is around 5MB in size and is generally not compressible by any significant margin. I've always had a long term goal to clean up and start releasing the tape software. That's actually how the SC-3000 multicart development started - as a sideways excursion whilst looking at how to convert tape software to disk images. And after building the multicart I know most of the custom tape load routines pretty well, so I can use a few different techniques to remaster them. I also wanted a place to record some of the neat things about the loader routines and copy protection used by the old tape software for future reference. Anyway, here are the first two releases: The Secret of Bastow Manor (text / graphic adventure by Softgold) and Vortex Blaster (vertical scrolling shooter with digitized speech by Trident Technological Systems). Note - I'm going to try to limit myself to a couple of releases per month as some of the tapes require a bit of thought (eg. the Mike Boyd ones), and all of them take time to write up.

|Posted: Thu Apr 16, 2015 4:49 am Thank you so much! I want to ask you some (probably trivial) things. Are all the tapes Sega BASIC code? Or are there games that are a binary which is just loaded through BASIC and then executed?

|Posted: Thu Apr 16, 2015 6:03 am Some tapes like Bastow Manor are almost totally or totally written in BASIC. A text / graphic adventure does not need the performance of machine code, and the Sega BASIC language was actually very advanced and contained a lot of useful things like drawing routines and a character set that is biased towards drawing charts, shapes, and patterns. Bastow Manor only uses embedded machine code for a bit of copy protection and to trigger the ROM load routines. Other games like Gold Miner have a significant BASIC component for the convenience of drawing the screens, displaying instructions, and setting up sprites, but use machine code for all the animation. Then you have 100% machine code games like Vortex Blaster and Michael Boyd's games like Burglar Bill and Sir Roderick's Quest, in which the only BASIC command is a CALL statement to jump to the machine code. All of the screen drawing, animation etc. is handled in machine code in those games. (I haven't looked at Vortex Blaster recently but I'm pretty sure the BASIC content is minimal.) Vortex Blaster is a single load tape, ie. all the machine code is hidden after the initial BASIC program. Michael Boyd's games use multi part loaders where the screen code and VRAM data is loaded direct from tape as a binary block.
|Posted: Thu Apr 16, 2015 7:12 am |If you understand the tape encoding, then can you determine if TZX and similar formats can archive the data losslessly?

|Posted: Thu Apr 16, 2015 7:47 am I am very, very happy you are doing it. A few bits:
- Would you be able to provide higher-resolution scans? The 1024-wide scans are a bit tight in this day and age. The minimum I would say would be twice that, and larger would be better for preservation.
- Along with each "processed" release, can you make sure to preserve a raw unprocessed audio dump of the original tape? We never know, and they come in handy in the far future.
- Would you be able to record the original audio length of each cassette/side, if applicable? For reference.
(PS: I have increased your quota to 500 MB here, and shall it ever be a problem we'll increase it again.)

|Posted: Thu Apr 16, 2015 10:07 am :) I can see more of my life slipping away on a hobby project. I have had a quick look through the TZX specifications. I think the answer is probably yes, although I'm a bit busy to do a proof of concept, sorry. However, I wonder whether the TZX-style formats are necessary with this approach. Currently, no SC-3000 emulators support the TZX format, so far as I know. But there is good emulator support for actual WAV files. And using this technique allows us to shrink the WAV files down to the point where they are 'small enough'. Having said that, if you did want to use a tape format like TZX, then the SC-3000 tapes all have a relatively simple representation. All of the ones I stepped through when building the multicart (about 70 or so) use the Basic IIIB ROM load / save routines, or at least the atoms within the routines. For instance, even the Michael Boyd games use the SyncBits routine (which outputs 3600 ones) at the start of data blocks, then the WriteByteToCassetteOut routines from the Basic IIIB ROM. So the waveforms themselves are always predictable. ie. you have sync bit blocks of 3600 one bits (2400 Hz cycles) followed by a zero (1200 Hz cycle), followed by 8 data bits and two stop bits for each data byte. ie. a given tape may or may not follow the Basic program structure of Leader Field, Key Code, FileName, Program Length, Parity, Dummy Data, 1 second silence, Leader Field, Key Code, Program Data, Parity, Dummy Data. But the important atoms of the Leader Field (ie. the sync bits) and the format of each individual data byte are observed by all the tapes I've looked at. As a side note, Michael Hadrup did publish LSV (Load / Save / Verify) extensions in the Sega computer magazines which provided more reliable routines (I think he said they were based on the Spectrum routines, from memory). But no commercial software used those.

|Posted: Thu Apr 16, 2015 10:21 am Thanks Bock. I'm happy to help. I'll just try to put a limit on how much time I spend on this each month. The original multicart build took over my life for a few months :) I've been working off Aaron's excellent collection of material. I'll just check if he minds if I use his scans. My thinking with those zip files was to make them small and self contained. So they have high quality but highly compressible audio, a text file with information about the title, and a smallish jpg of the cassette cover. I figured that gives them the best chance of being widely distributed and 'living' forever :) But point taken on the larger scans. I'll see what I can do. I certainly won't get rid of the original audio recordings.
But I'm not sure if there is much point making those publicly available, given they are much larger (and can't be compressed) and often require processing to work with emulators / real hardware. I'll try to remember to jot down the audio length in future.

|Posted: Thu Apr 16, 2015 10:32 am I would like us to extend our database and host the high quality scans here, if you or Aaron or whoever has been scanning them is happy with that. So I understand it is preferable to keep a smaller .JPG in the package. If you want to attach or e-mail us the bigger ones we'll post them on the site. I realize it's not a very exciting page as is, but we can build from there, similarly to how we built up the cartridge game pages. One key is to get people to use the emulator (MESS right now, and really another emulator should include support) to motivate the creation of screenshots, maps, etc. 300-600 KB is an acceptable size. It's important that the packages are versioned (as you did), also to convey the fact that they are manual work and may not be perfectly representative of the real data - not in a bad way, mind you. I also wish we could have a catch-all binary format, but that's a lot of work and your solution is reasonable. Do you mind if we start including your packages in our pages? (ZIP contents will be unaltered.)

|Posted: Thu Apr 16, 2015 12:47 pm Original noisy audio should compress better with dedicated algorithms (e.g. FLAC), although still not as well as the generated data. Can the clean audio be reduced in bit depth and sampling rate without affecting the results? That will save a lot of space. It ought to work at 4.8kHz with as low a bit depth as you can get.

|Posted: Thu Apr 16, 2015 7:32 pm The short answer is I think releasing 44.1KHz, 16-bit audio that compresses well is still the preferable solution. The lower bit depths and rates compress a bit more, but from memory not significantly more when the source has a perfect waveform. ie. you're talking about the difference between compressing to say 200Kb vs 125Kb or something like that, if I recall correctly. Also MESS and other waveform sampling / conversion tools seem to work better on higher sample rate inputs. So if the compression on the 44KHz / 16-bit is acceptable I'd rather stick with that. Now the longer answer :) Interesting - I just ran some FLAC tests. That gives slightly better compression than RAR on the original audio, but still only in the order of 10-15% max. And interestingly, the remastered audio only compresses around 10-15% with FLAC compared to 97%+ with RAR or Zip. The second question is more interesting. Yes, technically you can use both a lower sample rate and a lower bit depth. Theoretically you can use 8-bit and 4800Hz. When I first started looking at recording files about 15 years ago I remembered that bit of encoding theory too - something like you only need to sample at twice the maximum frequency you want to reproduce (and the tape encoding works on 1200Hz / 2400Hz tones). In fact, doing that has the benefit of effectively running a low pass filter over the sampled audio by removing higher frequencies. I think in my first attempts 15 years back I settled on 8-bit and 8KHz or something like that. Unfortunately I lost all those recordings in a hard drive crash back in about 2003-4 :) However... after playing around with this stuff for a couple of years, I prefer working with the CD audio sample rate, or maybe 22050Hz as a minimum.
Waveform sampling / conversion tools (like the MESS tape emulation and other third party tools) tend to work better when they have more data points to work with. I found MESS was a lot less predictable on audio loading when the sample rate dropped below 22050Hz and the signal was degraded somewhat. And as soon as you edit the waveforms in a sound editor (like when splicing multiple blocks together to make a full tape image), the waveforms all get shunted sideways a little bit. At higher sample rates you get more sample points in the peaks of the waveforms. If you think about it, at 44.1KHz you get, umm... each peak at 2400Hz is 1/4800th of a second wide. So 44100 / 4800 = 9.2 samples per half waveform. If you drop that to 22050Hz, that is about 4.6 samples per half waveform. At a 4800Hz sample rate, that is only one sample per half waveform. So although it may work, you get more vulnerable to slight shifts in the waveform during editing as you drop the sample rate. Note - 8-bit vs 16-bit probably won't make much difference to the 'accuracy' of the recording, as the exact amplitude is less important and 256 levels is probably fine. But MESS standardizes on 16-bit when writing, so again I've stuck with that, especially if I'm using MESS to generate the output. I have a bit more flexibility when using SegaWavWriter as that will output different bit rates. I just need KerrJnr to make a couple more mods to that for me :) I'll write those different techniques up when I use them, for reference on the restoration project page.

|Posted: Thu Apr 16, 2015 9:06 pm |I experimented and found about the same things. Ultimately there is very little unique data in there, and while you could reduce it to a stream of 1s and 0s for each encoded bit, it's somehow more "real" to have all the square waves.

|Posted: Thu Apr 16, 2015 9:18 pm Just checking if Aaron is happy for me to pass on his scans. I don't have all the original covers, and I'm not sure I can face dragging my stuff out to rescan :) Great. Pretty much everything should end up in that size range, and I can add a comment about them being manual work. I may add an MD5 hash on the WAV files or similar. Yes, fine to include the zip packages in your pages. I want them widely distributed as that gives them the best chance of staying around long term. Thanks for setting up the tape pages. I think you gave me edit access a while back, but unfortunately I didn't have the time to figure out how to create a new section / topic and edit it :) Ok - I'd better get some work done and try to stick to my "couple of releases a month" target :)

|Posted: Fri Apr 17, 2015 2:51 am Do you know the voltage output by the original SR-1000? I would like to connect an MP3 player to the SC-3000, but I am worried about damaging it, so I would like to set the volume such that the voltage is the same.

|Posted: Fri Apr 17, 2015 3:00 am I don't know what voltage it outputs, but I wouldn't worry about it. If your MP3 player output is suitable for headphones then it will be fine. You often have to try different volume levels until you find one that works nicely (also depending on the recorded volume level of each tape, although that should not be an issue for the remastered tapes - ie. once you find a level that works for one then it should work for all). Start at about 35% volume. If that doesn't work, try 50%, etc. Move up and down a bit until it works. I usually use a mono audio cable directly from the headphone jack of my laptop to play audio into my SC-3000 cassette-in port.
Note - I don't know if you need a mono cable or not. It is possible a stereo cable will work: the WAV files are mono, and the SC-3000 will only have the mono connector points, so it will *probably* just pick up one channel off a stereo cable anyway. Second note - if your MP3 player has some sort of built-in real time graphic equalizer or sound modification (eg. adding 'concert hall' reverberation, bass boost, etc.) that *might* affect the SC-3000's ability to recognize the tones. So turn off any sound processing if you can.

|Posted: Fri Apr 17, 2015 7:58 am Idea: applying some thresholds to the original recordings so they become the same clean values as the artificial ones would preserve any inconsistent timings in the recording. That is arguably a more representative version, and should work for any encoding. I think I have a bunch of scans from Aaron in my backlog. I also think they are not very high res.

|Posted: Fri Apr 17, 2015 10:58 am It'll be great if we can archive as many tapes as possible. They degrade worse than cartridges, unfortunately, plus they weren't as mass produced as (most) cartridges.

|Posted: Fri Apr 17, 2015 3:03 pm Great minds think alike (or stealing my thunder, one of the two). I was going to confirm these recordings worked on real hardware (like a verified dump [!]) before releasing them, but I haven't got around to it yet. Also collating 600dpi scans, some of which are on Sega Retro already, but also intended to be hosted here too.

|Posted: Fri Apr 17, 2015 6:30 pm Very nice job :) I'd noticed you had been filling out more of the Sega Retro games pages over the past couple of years. They look good. Are those all original recordings, or have you done some cleanup work on them? The two main advantages of remastering the recordings are 1. compressibility down to a small size and 2. reliability. But that doesn't preclude making the original recordings available if you can find somewhere to host them (and as Bock said, it is very useful to have them available for future reference). So don't let my project stop you :) Have you thought about how to distribute them? I'm guessing that, depending on your record settings, when you put them all together you could well have a couple of GB of non-compressible data. So you may need to distribute them as a torrent or via Dropbox or a cloud drive service or similar, as that may be a bit large to put up on a permanent website. If you want to try remastering yourself, then many of the tapes are trivial to remaster using the MESS save technique I cover on the restoration project page. By pasting that BASIC code into MESS, you save back an exact copy of the program you just loaded, including any special characters hidden in the filename as a copy-protection measure (eg. Help by Michael Howard) or as a convenience to help launch the program (like the 'Run' loaders for the Michael Boyd games). ie. steps 5 & 6:

5. Paste the following into MESS. This copies the file name we just loaded from &H83A3 across to the SAVE file name location at &H82A3: A=0:FOR X=0 TO 15:A=PEEK(&H83A3+X):POKE &H82A3+X,A:NEXT X (then press CR)

6. Paste the following into MESS. * Saving start will appear; press record on your tape image (see the MESSUI Devices menu).

That works for 1. ALL single-load tapes, and 2. any multi-load tapes that use a full BASIC header / data block for subsequent loads. The easiest way to recognize these is if you hear the beep for Loading start followed by beep beep for Loading End. You can then manually LOAD / SAVE the blocks.
You need more advanced techniques for remastering multi-part games that just load arbitrary blocks of data (like the Michael Boyd games). I will cover some of those next month. But the above technique will let you do a lot of your tapes if you want to.

|Posted: Fri Apr 17, 2015 7:07 pm As a side note, I *really* should get around to asking Francesco about the bitstream format he developed years ago for the tape images hosted on the SC-3000 Survivors website :) That was flexible enough to quickly encode something as complex as Moonbase Alpha in a compact file. I'll take a more detailed look at the old Survivors utilities and the example bitstream file I have for Moonbase Alpha somewhere, if I can find it. I hadn't pursued that option previously because I assumed there was no emulator support for it. Francesco wrote a Flash utility to play back audio from the bitstream files. Take a look - it really is very cool. A little picture of an SR-1000 data recorder pops up and you can play back the audio. I'm *guessing* the encoding is based on something like this: In any case, I'll take a closer look at that. It could be a good complement to the audio remastering project.

|Posted: Fri Apr 17, 2015 7:28 pm Oh... wow. The latest version of MESS seems to support the bitstream format. My bitstream copy of Michael Howard's 'Help' just loaded perfectly in MESS v0160b... The .bit files are 230Kb in size and zip down to about 30Kb... Let me spend a couple of days looking at the implications of this and see if I can reproduce the encoding from the perl scripts.

|Posted: Sat Apr 18, 2015 11:34 am Ok... well, the short answer is the bitstream format is very simple, and MESS does seem to support it now. The bitstream format is a text file with a .bit extension, with one byte per bit. The "0" (ASCII 0x30) character is one square-wave cycle of 1200Hz, "1" (ASCII 0x31) is two cycles of 2400Hz, and a space (ASCII 0x20) is 833.3us of silence (the duration of one 1200Hz cycle, or two 2400Hz cycles). So it isn't particularly space efficient or flexible. ie. it is not a generic format like TZX that can support different types of encodings. But it should work perfectly for all SC-3000 tapes that use the Basic IIIB ROM routines, which is, well... just about all of them :) So I may look at adding bitstream files to the tape restoration process along with the WAV files, since MESS supports them now. It looks like Francesco started to ask MESS contributors to add support for the bitstream format around 2009, and somewhere between v0142 and v0160 someone did add it. You can see some of his initial requests here. The standard Sega BASIC ROM load / save routines and encoding are (more or less) based on the Kansas City Standard. That is 300 baud and the SC-3000 is 1200 baud, but otherwise it is pretty much the same. Several years ago, the original SC-3000 Survivors (probably mostly Francesco, I'm guessing) figured that out and asked Martin Ward for assistance in modding his FFT audio analysis script to also handle the SC-3000 signal. Martin's tape-read perl script does indeed work for FFT analysis and outputting a simple bitstream which loads in MESS, at least for sufficiently clean files. But in any case, I can generate a bitstream file myself if I need to. If anyone wants to play around with the tape-read script, then this is approximately what you want on the command line for an SC-3000 tape:
perl tape-read.pl baud=1200 lo=1200 hi=2400 bit=Y wav_filename.wav

That will output a .bit file and a couple of text files with tape data blocks in them. Just be aware you kind of need to know what you are doing to decide if you have a good .bit file or not, and this version of the script does not include the silence markers automatically. Anyway, I've figured out enough of the problem that I can forget about it for a few days now, hopefully. Time for bed :)

|Posted: Sat Apr 18, 2015 12:39 pm |Yeah, they're just the raw .wav files, nothing fancy. I'll have to look into converting to the different formats at some point. Need to verify the working ones. Some I know are probably duds, as I couldn't get them to load on real hardware, but I recorded the wav file anyway. I was planning on trying to collate them into folders with their scans as well and eventually load them somewhere / torrent them for people to download and work on, just haven't got around to it yet. Got distracted with another ongoing preservation project for Mega LD / LaserActive Laserdiscs.

|Posted: Sat Apr 18, 2015 8:13 pm A lot of the old recordings start out as duds. The trick is to use an audio editor to do stuff like applying a high pass 1KHz filter to get rid of low frequencies, low pass filters to try to get rid of high frequencies, and a technique I can't remember the name of (compression?) to equalize the volume across the recording. Sometimes you have to splice bits of part A and B together to get a good recording. And sometimes you will get an audio recording that will load on MESS but not real hardware, and sometimes the other way around, and you can save bits back from MESS or the Sega :) You just have to mix and match 'em up until you get a good result. I think we had to do all of that to repair Michael Howard's 'Help' a few years back. From memory there is a good Spectrum page on cleaning up audio around somewhere if you search around. You just need to be careful the data you get back is correct. Anything that uses the full Sega load / save routines has a checksum built in, so it won't load if the checksum doesn't match. But some multi-part loader games like Michael Boyd's ones don't have any checksums. Most of the time a 1KHz high pass filter is sufficient to fix the recordings. The Laserdisc project sounds cool :) Ok - here is an example bitstream file for a Hello World program:

10 REM Hello World Program
20 PRINT"Hello World"

That has 1 second of silence at the start and the end. Remember, a space char is 1/1200th of a second of silence, a '0' char is a single 1200Hz tone, and a '1' is two 2400Hz tones. You can try it with MESS v0160, and just look at it in a hex editor or text editor to see the content.

|Posted: Sun Apr 19, 2015 1:57 am I think you're talking about normalizing, but I'm not 100% on this. If you do this, try to keep it to -6dB to -10dB, I believe. At least when you make songs this is an acceptable range. Any higher and it might peak and get some digital clipping.

|Posted: Sun Apr 19, 2015 10:32 am That's the difference between normalisation and compression, in a nutshell. Anyway, good audio editors are designed in a way to prevent clipping while normalising, by scanning the audio beforehand to find out the peak intensity and scaling the volume changes accordingly. I know GoldWave does, and I think other editors do too. This doesn't prevent you from overriding that value, of course, but doing so would guarantee clipping.
With compression you can independently change the volume of signals with different loudness (e.g. make weak signals stronger and make strong signals weaker); this doesn't magically dodge clipping per se, and it's inherently harder to use compared to normalisation, but it works wonders if the source is recorded with different peak levels throughout, where normalisation is almost guaranteed to ruin everything. So yes, I think compression would be more appropriate for a task like this, given that you know what you're doing. Not that clipping a pure square wave would harm that much, but still...

|Posted: Mon Apr 20, 2015 8:13 am Thanks for the clarification, Tom :) I didn't have to use compression or normalization much, and it was 4 or 5 years ago when I did, so excuse my initial vague description. The problem is the age of the tapes. They are 30 years old, and on some of the tapes the volume level varies a lot just because the tape media has deteriorated. So originally the volume would have been fairly consistent across the recording. But 30 years later you can end up with something like this: This is the waveform from an original tape recording of Aerobat. You can see the variation in volume levels near the middle of the recording. I seem to recall I used either compression or normalization to repair a couple of tapes, but they generally weren't necessary - low / high pass filters and splicing parts from side A / B together usually did the trick.

|Posted: Fri Apr 24, 2015 12:00 pm I may be getting bashed for pointing this out, but... for the most faithful preservation and emulation of these (or any) cassettes, wouldn't the recordings have to be in stereo, which the emulator should then convert to mono when processing them? In any case, there's no harm in saving them in stereo as they can always be converted later to monaural by whoever prefers them (but not the other way around).

|Posted: Fri Apr 24, 2015 2:12 pm |They're most likely mono tapes.

|Posted: Sat Apr 25, 2015 3:44 pm |That would be a relevant distinction to make when preserving this kind of stuff... How can it be verified?

|Posted: Sat Apr 25, 2015 7:17 pm I don't think there was any realistic reason to save a bitstream on a stereo cassette, since it is by definition a succession of tones on a single channel. If they saved two bitstreams in parallel on two separate channels to double the bandwidth, that would be an entirely different matter. By reading a mono stream with a stereo head you're actually reading an altered version of the data, in case the left and right heads are calibrated slightly differently, or if the magnetic tape isn't uniformly magnetised. I know that personally I'll never get the appeal of useless stereo for mono files (as I said in the VGM topic), but I think this is a prime case of data which is natively monophonic.

|Posted: Sat Apr 25, 2015 8:42 pm Let's put it this way... Imagine that one stereo cassette has the game recorded only on the left channel. A mono audio file would not be an accurate representation of that tape. That would be the most extreme example, but you see what I mean.

|Posted: Sat Apr 25, 2015 10:07 pm |Computer tape recorders were often strictly mono. That means the head has a single wide reader, rather than two smaller ones with a gap in the middle. This results in more ability to read the data from the tape without errors. You would need specialist equipment (or some cunning with the alignment adjustment) to detect what was actually on a tape.
|Posted: Sun Apr 26, 2015 1:41 am In that case I'd advise always recording them in stereo, as it can be converted to mono anyway and... just won't do any harm, in any case.

|Posted: Sun Apr 26, 2015 9:05 am |Recording using a mono device should produce the best results - two partial parts of the tape are worse than the whole thing. Maybe a new(ish) stereo device is better than a 30 year old mono one, though..?

|Posted: Mon Apr 27, 2015 10:55 am I don't get why you're saying this. Combining both parts should give the whole thing, while you can't separate both parts from the whole... Recording in stereo is just better for preservation.

|Posted: Mon Apr 27, 2015 4:08 pm |A stereo recording has two 0.02" tracks separated by a 0.016" guard area. A mono recording could magnetise the guard area, and thus reading it back with a head spanning the guard area could give you a better signal to noise ratio than if you effectively ignored it by using a stereo head. If there's no data there (as seems to be the case in the spec), or if a mono head ignores the guard area, then a stereo recording (with good calibration of the tracking) would be no worse than mono, maybe better, as you would have more data.

|Posted: Wed Apr 29, 2015 12:09 am Thanks for the discussion on the use of stereo vs mono recording techniques. I think this is a case where, provided you can get a verifiable and repeatable recording that loads (possible with anything that has a checksum; requires multiple record passes for comparison if not), then that is good enough. Certainly the original recordings would have all been with mono datasets (and the SC-3000 only has a mono output). But I found the discussion interesting and I'll keep it in mind in future.

Disassembled Sega SC-3000 Tape Load Routines from Basic IIIB ROM
Posted: Wed Apr 29, 2015 12:12 am
Last edited by honestbob on Wed Apr 29, 2015 12:33 am; edited 3 times in total

Here's something else I've been meaning to publish, but I hadn't got around to it. This is very useful for anyone who wants to write tape emulation for the SC-3000 (or just step through a custom tape load routine), as it explains how the Sega SC-3000 actually interfaces with the data from the cassette-in port, and it shows how the main routines that all the tape software uses actually work. See the attached zip file for the spreadsheet and the full text version of the file with commented disassembly of the tape load routines. Yes, god forgive me, I did start this off in a spreadsheet rather than a text file. I was young and foolish :) Here's an excerpt explaining the important bits.

Sega Basic Level IIIB ROM Load / Save Routines Commented Disassembly

This is a disassembly and partial commenting of the Sega Basic Level IIIB ROM. In particular, the Tape Load routines are well commented. I did this to learn how to repair and remaster Sega SC-3000 tape recordings, and this work also led to the great Sega SC-3000 Survivors Mk II Multicart that can play SC-3000 tape games. From memory the Tape Save routines may only be partially documented. You are more likely to find mistakes in my comments there too. The Load routines should be well commented and mostly correct.

IMPORTANT: THERE ARE SOME ERRORS IN THIS DISASSEMBLY. The original disassembly tool I used had some bugs in it, so not all of the instructions translated correctly. I have manually repaired those in the commented sections, but no guarantees for the rest of the file.
Most of this was done back around 2010-2012.

Overview of Sega SC-3000 tape input architecture
The Sega SC-3000 has a custom HIC-1 chip which is connected to the 8255 PPI inside the Sega SC-3000. The PPI is mapped to various I/O ports. The HIC-1 chip essentially converts the voltage level observed on the tape cassette input into either a high or low signal which sets the value of port $DD, bit 7. If the cassette input voltage level is 'high', then bit 7 of port $DD will be set (ie. one). If the cassette input voltage is low, then bit 7 of port $DD will be reset (ie. zero).

VERY IMPORTANT: THE BITS READ FROM PORT $DD ARE NOT DATA BITS. THEY REPRESENT THE CURRENT HIGH OR LOW STATE OF THE CASSETTE INPUT.

The Basic IIIB ROM Load routines work by sampling the value of port $DD bit 7 thousands of times per second, timing how long the input stays high or low, and then converting that to a data bit (zero or one) when the input has stayed high / low for long enough. As a side note, that means you can digitize speech and music through the cassette-in port with a little bit of assembler magic - anyway, back on topic :)

A 'ONE' data bit is represented by two 2400Hz tones. ie. port $DD bit 7 is high for 1/4800th sec, low for 1/4800th sec, high for 1/4800th sec, low for 1/4800th sec. A 'ZERO' data bit is represented by one 1200Hz tone. ie. port $DD bit 7 is high for 1/2400th sec, low for 1/2400th sec. The Basic IIIB routines have some tolerance for how wide those tones can be to allow for tape slippage etc. But noise spikes or inconsistent volume levels across the tape tend to result in tape read errors.

Cassette Input (ie. sampling the high / low voltage level from the cassette input) is mapped through port $DD, bit 7. Example:

    in a,$DD     ; Read from PPI. Bit 7 is the high / low voltage state of the Cassette Input from the HIC-1 chip
    and $80      ; Mask out the one bit we are interested in - ie. bit 7
                 ; so and $80 results in either $80 if bit 7 was set, or $00 if it was not

Cassette Output (ie. setting the cassette output to a high / low voltage level) can be done by writing to the control register at port $DF. eg. you can set the cassette output high with

    ld a, 9
    out ($DF), a

and you can set the cassette output low with

    ld a, 8
    out ($DF), a

Overview of main Sega SC-3000 Basic IIIB ROM tape load / save routines
All of the known commercial Sega SC-3000 tape software uses at least the Detect/Write Sync_Bits routines and the Load/Save_Byte_To_A routines from the Basic IIIB ROM. The only known exception to that would be Mike Hadrup's Load / Save / Verify routines. But I haven't studied those, and it was not widely used. The main Basic IIIB ROM routines are:

Detect_Sync_Bits
Documented entry point is $3A00, which jumps to $3ACB. This is one of the two main load atoms. All known cassette loaders use sync bits to mark the start of a data block (apart from Mike Hadrup's LSV extensions). The Sync Bits are 3600 'one' bits in a row (ie. 3600 * 2 * 2400Hz square wave cycles). This matches the save routines and published documentation.
Repeatedly sample the cassette input until we have found 255 valid 'ones' in a row. We can ignore the rest of them because Load_Byte_To_A looks for the first zero.

Load_Byte_To_A
Documented entry point is $3A06, which jumps to $3A8E. This is one of the two main load atoms. Read a single byte from the cassette input and place it in register A. Look for the first Zero bit, which marks the start of a data byte. Bit "1" is two cycles of 2400 Hz in an 833.3 microsecond window. Bit "0" is one cycle of 1200 Hz in an 833.3 microsecond window. Each byte is encoded with 11 bits: a start bit = 0, then bits 0-7 of the actual data with LSB first (8 bits), followed by two stop bits = 1. Because data bytes are always bounded by a 0 bit at the start and 1 bits at the end, we just search for the first 0 bit. We determine that the given bit was a zero by checking that the # of iterations was > &H1D.

LOAD
This is the main entry point for load. If you specified a filename in LOAD "filename" then the basic parser will already have initialized &H82A2 with the string length and &H82A3 with the 16 character string. If you didn't, then &H82A2 / &H82A3 will be nulled out. This takes care of the full load including *Loading Start, the Header block with file name, file length, parity check, and the dreaded Tape Read Error message etc.

Save_Byte_To_A
Documented entry point is $3A12, which jumps to $3A15. This is one of the two main save atoms. Write the byte in register A to the cassette output. Bit "1" is two cycles of 2400 Hz in an 833.3 microsecond window. Bit "0" is one cycle of 1200 Hz in an 833.3 microsecond window. Each byte is encoded with 11 bits: a start bit = 0, then bits 0-7 of the actual data with LSB first (8 bits), followed by two stop bits = 1.

Write_Sync_Bits
Documented entry point is $3A0F, which jumps to $3A4D. This is one of the two main save atoms. All known cassette loaders use sync bits to mark the start of a data block. The Sync Bits are 3600 'one' bits in a row (ie. 3600 * 2 * 2400Hz square wave cycles).

SAVE
This is the main entry point for save. The filename you specified in SAVE "filename" will already have been stored by the basic parser at &H82A2 (string length) and &H82A3 (16 character string). This takes care of the full save including *Saving start, the Header block with file name, file length, parity check etc.

(see the attached zip file for the disassembled routines)

Idea for simplified tape loading emulation
Posted: Wed Apr 29, 2015 12:26 am
And the above post leads on to an idea for simplified tape loading emulation if anyone wants to add it to Meka :) The above article details how the hardware works. And MESS has a very cool mechanism for using FFT analysis on WAV files to translate the audio to voltage levels and update the value of port $DD, bit 7. But from an emulation perspective, all you need to do to emulate tape loading that works for all of the known SC-3000 commercial tape software is:

1. The Bitstream format. As discussed earlier, you can represent all of the known SC-3000 commercial software as one of three states:
- silence for 1/1200th of a second
- 1200Hz tone for a 'zero' bit
- 2 * 2400Hz tones for a 'one' bit
The Bitstream format is not space efficient (it uses one byte to represent each bit of data). But it compresses well, is simple to use, and we have other tools to work with it already.
2. Tape Emulation that runs in realtime / emulated clock time or cpu cycles / however you manage timing inside Meka, that translates:
- silence to 1/1200th of a second of 0 at bit 7 of port $DD
- a 'zero' bit to 1/2400th of a second of 1 at bit 7 of port $DD, followed by 1/2400th of a second of 0 at bit 7 of port $DD
- a 'one' bit to 1/4800th sec of 1 at bit 7 of port $DD, then 0 for 1/4800th of a second, then 1 for 1/4800th sec, and 0 for 1/4800th sec
ie. you skip all the complex WAV file analysis.

3. Tape Player controls in Meka (or keyboard shortcuts) that allow you to mount a tape, press Start and Stop, plus a time counter on screen somewhere.

That's pretty much it. This simplified version cannot be used to save data to a Bitstream file (Bitstream is a playback format only). Anyone keen to take a look at that idea for Meka? Sorry Bock :)

Posted: Wed Apr 29, 2015 7:49 am
Based on your description, a few things come to mind...
1. There's no reason why you couldn't have arbitrary encoding formats ("turbo loaders") which would make the "bitstream" format fail.
2. The bitstream format could totally be written to by an emulator.
3. Truncating recordings to min/max loses no information that is perceptible by the system.
4. Emulators can easily support waveform data by simply picking the current sample whenever the port is read; no FFT analysis is needed. But it will be mostly real-time (subject to emulator speedup controls).
5. But an emulator can also take a high level approach: when it sees the API calls, parse the data natively, push it to emulated RAM and advance the virtual tape, in an instant.
I might have a go myself when I get a computer...
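To make point 2 of the simplified emulation proposal above concrete, here is a rough Python sketch of the bitstream-to-port-$DD translation. The byte values chosen for the three states are an assumption (check how the MESS bitstream loader actually encodes them), and the CPU clock figure is only an example:

# Assumed one-byte-per-bit state encoding; verify against the real .bit files.
SILENCE, ZERO, ONE = 0x00, 0x30, 0x31

def port_dd_levels(bitstream):
    # Yield (level, seconds) pairs describing bit 7 of port $DD over time.
    for state in bitstream:
        if state == SILENCE:            # no tone for 1/1200th sec
            yield (0, 1 / 1200)
        elif state == ZERO:             # one 1200Hz cycle
            yield (1, 1 / 2400)
            yield (0, 1 / 2400)
        elif state == ONE:              # two 2400Hz cycles
            for _ in range(2):
                yield (1, 1 / 4800)
                yield (0, 1 / 4800)

# An emulator would convert each duration to CPU cycles, e.g. assuming
# a roughly 3.58MHz clock: cycles = round(seconds * 3579545)

Meka itself is C, so this is only the shape of the loop, not a drop-in patch.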
Posted: Wed Apr 29, 2015 9:49 am
Correct. The bitstream format only works for 1200Hz and 2400Hz tones like the IIIB ROM routines use. However that covers pretty much *all* of the available tape software. No commercial software I have seen uses any other encoding format. So it isn't a case of Bitstream being a 'good' format. It is a 'mostly good enough' format :) And it is already supported by one other emulator (MESS) and I have some tools that let me create and edit that format. Aside from that, I'm not especially attached to it. I haven't yet seen an example of something saved by Michael Hadrup's Load / Save / Verify routines, but I'm pretty sure the encoding is totally different and wouldn't work. I'm happy to be proven wrong :)

The Save routines work the opposite way to the Load routines. ie. you set the Cassette Out port to voltage hi / lo for arbitrary periods of time. The actual period of time is controlled by loops in the Basic IIIB code. So it is kind of the 'turbo loader' argument in reverse. There is no guarantee that what you write out to the emulated HIC-1 would conform to the 1200Hz / 2400Hz pulses expected by the Bitstream format. But it would be very easy to write a WAV file from underneath the HIC-1 emulation.

I should probably clarify what writing a control byte to port $DF does. The cassette output feed is from the 8255 PPI 'Port C' bit 4 (from memory the 'Port C' is a description you will find in the 8255 PPI datasheet / documentation - it just refers to a group of pins on the 8255 chip). This particular pin is I/O mapped to the SC-3000 port $DE bit 4. Note - I *think* you can write directly to $DE bit 4 if you want to, although the Basic IIIB ROM uses the port $DF bit set / reset feature. So stick with that for now. Port $DF is the (write only) control register for the 8255 PPI. If bit 7 (msb) of the control byte is set, then it sets up the PPI mode (done on startup). That is not what we want here. If bit 7 of the control byte is NOT set, then we send a bit set / reset command to apply to ONE of the 8255 PPI Port C bits. Then bits 1 to 3 of the control byte select which of the PPI Port C outputs we should write to, and bit 0 selects the value that should be set. (Bits 4 to 6 are not used in this Bit Set / Reset format.) So for instance:

    ld a, 8       ; %00001000
    out ($DF), a  ; ie. set bit 4 of PPI Port C to zero
                  ; bit 7 is zero, so this is a bit set / reset command
                  ; here bits 1 to 3 are %100 - ie. = 4,
                  ; which is the cassette output pin / bit 4 in PPI Port C

    ld a, 9       ; %00001001
    out ($DF), a  ; ie. set bit 4 of PPI Port C to one

I think I agree with this, but you may need to clarify your point :) Certainly as far as the SC-3000's HIC-1 chip output is concerned, you only have a voltage high state and a voltage low state based on the cassette audio input. You would need an oscilloscope and a signal generator to infer more exactly how the HIC-1 chip output changes in response to a given audio input. But I'm pretty sure you could accurately represent any SC-3000 recording with a bit depth of 1 bit per sample - ie. high voltage / low voltage. Certainly for an emulator and a perfect waveform I think that is true. If you want to play it back to a real SC-3000 then this is true up to a point. I'm pretty sure you will get read errors if your 'max' level is too loud or too quiet when you play it back into the cassette-in port, even with a 'perfect' waveform. But you would just compensate for that by adjusting the volume control on your headphone jack.

Yes, I think you are correct. FFT analysis is necessary to look at an audio signal and try to convert it to bits / bytes (like that tape-read.pl script referred to earlier in the thread). But you are right. That is not how the HIC-1 chip and Basic IIIB code work - they sample in real time as the audio is playing. So yes, you can probably do something like just pick a threshold and treat a value over that threshold as high voltage, and something below that threshold as low voltage. I should admit I haven't actually read the MESS tape emulation code, so maybe it does work that way. That would be worth trying anyway :)

Good thinking. A more flexible implementation might do some preliminary analysis on a WAV file to determine what the high / low voltage threshold levels should be for that recording, to allow for differences in recording volume. That would be especially important for interpreting real audio recordings from original tapes. But for a first pass I would start with the remastered waveforms for 'Bastow Manor' that I uploaded earlier and use those for testing.

(Edit - I may have misinterpreted your point 5, so the following may be off topic - not sure. I thought you meant you could look for the main LOAD entry point in the Basic IIIB ROM, and then try to do an instant fast load of all the data into emulated RAM, then resume program execution after the Basic IIIB LOAD call.) That would be very cool. I'm not sure if you can write a generic technique that will work in all cases though. You could probably make it work for the full LOAD / SAVE commands as the data would be written to / read from predictable addresses. But then you would have to skip a whole pile of Z80 code and adjust register values etc. to allow for the code you have just skipped.
I'm not sure that would work for anything that calls the Basic IIIB routines in unexpected ways, and unfortunately the different tapes tend to jump into the IIIB ROM at different places. The most generic approach is of course to hide everything in HIC-1 emulation under the port writes. MESS allows you to speed up the emulation, so if you have a 5 minute tape to load you can run MESS at say 10 times normal speed and load the tape much faster. But everything runs 10 times faster then - CPU, VDP, everything. Awesome - good luck. It would be nice to bring Meka full circle on this. I think I probably first started to annoy Bock about this over 15 years ago :) I just didn't have enough information to be useful back then.

Posted: Wed Apr 29, 2015 12:44 pm
The problem with "good enough" is that, whenever a higher standard is set up in the future, there's going to be a need for redumping every single game. It's the difference between "just emulating" and "preserving". This happened a while ago with ISO+MP3s for CDs, and is already happening with cartridges for certain systems (dumping chip by chip against having the whole thing in one file). These redumping processes could have been avoided, had a higher standard been set up in the beginning. I'll bring up again the example of a cassette that only plays data on the right channel. A mono audio file would not be a good representation of that tape. Unless the tapes themselves specify somewhere that they're indeed monaural cassettes, please record these in stereo after adjusting the azimuth so that the sound plays as crystal clear as possible.

Posted: Wed Apr 29, 2015 6:10 pm
Here's an excerpt from the final Sega Computer magazine from August 1988 for reference. It covers the 8255 PPI operation and ports. In the posts above I just summarized the bits you need for tape emulation, but the following might be useful too. Note - from memory it helps to read the actual 8255 PPI datasheet as well, as the Port A / B / C groups and different modes make more sense then. Basically, the 8255 PPI was a general purpose device for connecting all sorts of different devices into your system and then giving you an easier way to I/O map and control them. The SF-7000 uses one as well. And yes, I know I should really upload those some time :) It looks like the last four issues are still missing from the Scans section.

Posted: Wed Apr 29, 2015 6:56 pm
The original compact cassette specification actually states that there are two stereo channels on the tape, and for mono you just make them identical, so for archival purposes it makes sense to record in stereo. Any non standard use of the guard area to improve reliability would be lost though. It makes no sense at all to distribute these, any more than you would want to distribute 1200dpi PNG scans - it's nice to archive, but not necessary for users to get what they want, and costly given current storage and bandwidth prices. That's especially so given the degradation issues (which seem very easy to "remaster", by the way, without all the hassle you refer to, with a little bit of code), which mean the raw recording may not even work on a real system as-is.

Posted: Wed Apr 29, 2015 9:03 pm
Actually doing the remastering is fairly straightforward in most cases. It is all the validation checks, testing in emulator / real SC-3000, writing up a text file to go with it containing instructions / other information, finding a cassette cover scan etc. and packaging it all up that takes all the time.
That and engaging in friendly banter on the forums :) But I assume you are thinking of something along the lines of a routine that reads the original audio WAV and converts it to a 'cleaned' WAV by just outputting hi / low states depending on whether the original audio hit a volume threshold or not. That might work quite well in a large number of cases and would be a useful cleanup tool (I suspect you will find your code will work with some recordings and not with others). It is essentially what the real HIC-1 chip does, and I assume what the tape emulation in MESS does, except you get to immediately save the 'cleaned' signal back out to a WAV file. It is something I would add to the restoration toolset to use in conjunction with other remastering techniques (and as a check on other remastering techniques).

Just for general interest, here are two wave form pictures to illustrate the type of data you potentially have to analyze. The first is taken from an original tape audio recording that was around 25 years old. You can see the waveform is not particularly 'square', and even within this tiny section of tape audio (approximately 11/1200ths of a second / 11 bits), the 'loudest' waveforms are approximately twice as loud as the quietest waveforms. The second is from a recording I took directly from a real SC-3000 cassette out port (with microphone boost on my laptop as the original signal level was *very* low). You can see that the SC-3000 is capable of outputting something very close to a square wave.

Posted: Thu Apr 30, 2015 6:02 am
If you only want to load the games, you can just make a tiny TZX file from the sound source, which can also be converted to a clean WAV file any time. But we're also talking about preserving the contents of the cassettes "as is", imperfections and all.

Posted: Thu Apr 30, 2015 6:39 pm
I think that is the core of the argument (and perhaps my choice of the words 'remastering' and 'restoration' has clouded the issue). Ultimately, the original recordings are an old and damaged representation of digital data. My point is to preserve the software in a compact and easy to distribute form which allows use in both emulators and on original hardware. ie. an exact audio representation of the original 'ideal' audio waveforms, either in a WAV (highly compressible) or in a compact file that can be used to regenerate that waveform (eg. bitstream, TZX etc.). The exact original recordings are of some historical interest, and are necessary when checking for any errors in conversion / remastering. But as Maxim points out, none of the glorious quirks / static / stretching inherent in the original recording represent actual data. They represent chances to bang your head into a wall with a tape read error, as I did many, many times as a child :) Because the SC-3000 software is relatively rare, and so few people have it, it might as well not exist. So I want to make it accessible in the hopes it will live for a good long time yet. I actually have a plan B for that based on the multicart. You might see that appear over the coming weeks. Shhh :)

That gets us back to the earlier question of whether or not TZX is an appropriate format for use with the SC-3000, and indirectly to my earlier point about trying not to spend too much time on this. It seems I can't help myself. Never mind :) The TZX spec seems very flexible and allows you to represent a wide range of pulse width encoding techniques. It is built on a couple of concepts.

1. Multiple different types of blocks. Each block type allows you to store a slightly different style of data encoding.

2. Pulse width encoding. We still assume that the audio maps to hi / low voltage states. So the question is how long each hi / low state should be (and how many of them) to represent a single bit of data.

3. T States. The pulse widths are represented in T States, which are specific to the processor speed of the system. eg. the ZX Spectrum T state is 1/3500000th of a second. The Amstrad CPC has a 4MHz clock, so its T state is 1/4000000th of a second. Sega SC-3000... can't remember... but I do seem to recall the CPU clock is slightly different for NTSC and PAL variants, so that might be an interesting timing related detail.

4. Specialized data blocks specifically for known Spectrum ROM standard encoding and turbo encoding modes (blocks 10 and 11). That allows a much simpler representation because you only have to store the binary data for the audio block, and the emulator / WAV generator fills in the extra pilot pulses and start / stop bits etc. automatically. ie. you need toolset support for that.

5. ID 15 Direct Recording block. This one could be good. This is a representation of which hi / low states to play after a given number of T States, so it would slot under HIC-1 emulation well and should be reasonably compact. And you might be able to use existing third party encoding tools to generate a TZX in this format if you can pass a definition of the SC-3000 pulse widths to the encoding tools.

6. ID 19 Generalized Data Block. This allows you to define multiple pulse widths, sync tones etc., then define the data block.

I think the most interesting / relevant block types for the SC-3000 are ID 15 and ID 19. I'll see if I can find some time to play around with the available encoding tools for ID 15 and / or generate an example ID 19 block on a simple Hello World. Anyway - off to work.
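For anyone who wants to experiment before the tooling question is settled, the ID 15 Direct Recording block is simple enough to emit by hand. A Python sketch based on one reading of the TZX 1.20 spec - the field layout should be double-checked against the spec, and the SC-3000 T-states-per-sample figure below is a guess to confirm on real hardware:

import struct

def tzx_direct_recording(levels, tstates_per_sample, pause_ms=0):
    # Pack an iterable of 0/1 sample levels into a TZX ID 0x15 block,
    # one bit per sample, MSB first.
    levels = list(levels)
    data = bytearray()
    for i in range(0, len(levels), 8):
        chunk = levels[i:i + 8]
        byte = 0
        for bit in chunk:
            byte = (byte << 1) | (bit & 1)
        data.append(byte << (8 - len(chunk)))   # pad the final partial byte
    used_bits = len(levels) % 8 or 8            # "used bits in last byte" field
    block = struct.pack("<BHHB", 0x15, tstates_per_sample, pause_ms, used_bits)
    return block + len(data).to_bytes(3, "little") + bytes(data)

# A TZX file starts with this signature plus version 1.20, then the blocks:
header = b"ZXTape!\x1a" + bytes([1, 20])
# e.g. one sample per 1/4800th sec at an assumed ~3.58MHz clock:
# block = tzx_direct_recording(levels, 3579545 // 4800)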
Posted: Thu Apr 30, 2015 7:05 pm
I guess the hope is that the existing tools (most of which seem to be from >10 years ago) might take a WAV and emit something compact, usable and compatible with existing emulation code. My feeling is that the tools are quite specific to the systems in question... The clock difference between PAL and NTSC is small enough that it's well within the tape routine tolerance for detecting the frequency. Somewhat related, I also came across Spectrum tools that reencode to a very fast turbo loader, assuming the audio is played from a modern device with low noise, no degradation over time and no tape stretching. It might be interesting to develop something similar...

Posted: Thu Apr 30, 2015 9:01 pm
Well, bitstream is supported by MESS and the generated audio output works with original hardware. It is a very simple format to work with. It is not flexible in that you can't represent different pulse width encodings, but it represents 99% of the available SC-3000 tape software, so it is a good starting point. And as discussed above it should only require a (reasonably) simple emulator routine to read it and generate the high / low states for port $DD bit 7 for adding to Meka. It looks like TZX files based around ID 15 or ID 19 blocks should represent a SC-3000 signal correctly, so I will see what existing conversion tools can do with that. Some of them may allow you to set the pulse widths to search for in Direct Recording mode.
And for something like MESS, which I assume has TZX support in the code base, you should be able to use the existing TZX code with only minor modifications - *probably* just choosing the correct CPU clock to give the T state is sufficient, if it supports those ID 15 / ID 19 blocks. But I still need a proof of concept to check. And of course MESS has good WAV support, and as discussed above that *may* be reasonably straightforward to add to MESS (although I suspect that if you want it to work with 'original' audio recordings you might need to make it a bit smarter - eg. prescan the WAV to figure out average volume levels and be prepared to adjust your edge thresholds on the fly for different parts of the WAV). Those remastered square wave WAVs are highly compressible, so they are distributable, and they also work with real SC-3000 hardware. So we have three viable avenues to pursue, two of which currently work in MESS (WAV and Bitstream).

Posted: Fri May 01, 2015 7:02 am
Emulators ought to try to work with raw audio (including live playback from a tape), but the bitstream format seems like a mistake to me. TZX is complicated, for good reasons, but it is flexible in ways the bitstream format isn't.
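As a footnote to the threshold idea discussed earlier in the thread (reading the original WAV and writing out an idealized hi / low square wave, roughly what the HIC-1 does in hardware), a bare-bones Python version with a fixed threshold - exactly the kind that would need the smarter adaptive thresholds mentioned above for fading recordings:

import wave, array

HIGH, LOW = 28000, -28000   # output levels for the idealized square wave
THRESHOLD = 0               # fixed slicing level; adaptive would be better

with wave.open("original.wav", "rb") as src:
    # 16-bit mono input assumed
    assert src.getnchannels() == 1 and src.getsampwidth() == 2
    params = src.getparams()
    samples = array.array("h", src.readframes(src.getnframes()))

cleaned = array.array("h", (HIGH if s > THRESHOLD else LOW for s in samples))

with wave.open("cleaned.wav", "wb") as dst:
    dst.setparams(params)
    dst.writeframes(cleaned.tobytes())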
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476432.11/warc/CC-MAIN-20240304065639-20240304095639-00836.warc.gz
CC-MAIN-2024-10
56,208
372
http://www.goldstar.com/events/long-beach-ca/spamalot
code
Musical Theatre West Brings Spamalot to Long Beach * Additional fees apply. All offers for Spamalot have expired. The last date listed for Spamalot was Sunday July 15, 2012 / 2:00pm. Currently at Carpenter Performing Arts Center, CSULB: - Full Price: - Our Price: Garrison Keillor put it best when he said: "I want to hear them singing the rest of my life." He was referring to Robin and Linda Williams, the husband-and-wife folk duo from Virginia's Shenandoah Valley. It's hard to know what's more evocative -- their charged lyrics or their brilliant harmonies, polished to perfection after decades of performance. Regardless of the precise nature of their magic, the couple has enchanted audiences everywhere they go, including appearances on A Prairie Home Companion, performances at The Grand Ole Opry and tours with the likes of Mary Chapin Carpenter and Emmylou Harris. Now they come to the Carpenter Performing Arts Center for a distinctive concert of their very best folk, bluegrass and country music. Reviews & Ratings: Featured review from Goldstar Member: We bought these tickets for my in-laws. They said they were blown away by the professionalism and that it was the best musical they have ever seen. They are not an easy sell! The tickets they received from goldstar were in the center, they said perfect seats! If you want to eat before the show eat on Naples. Good restaurants not far away. Another Goldstar Member review: Cast was outstanding, singers were spot on, few technical glitches with microphones but this was the preview night so that's expected. We were in stitches through the whole performance and the cast received a standing ovation when it was over. You’ve never laughed so hard as you will during Monty Python’s take on the legend of King Arthur and his knights of the round table. Musical Theatre West is proud to present the regional theater premiere of this absolutely outrageous, hilarious, Tony Award-winning comedy. With a bevy of beautiful show girls, cows, killer rabbits and French people -- not to mention the sets and costumes directly from the Broadway production -- we know that you’ll love Spamalot as much as we do. Directed by Steven Glaudini. Musical Direction by John Glaudini. Choreography by Billy Sprague Jr.
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010115284/warc/CC-MAIN-20140305090155-00055-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
2,327
18
http://stackoverflow.com/questions/576303/message-queue-windows-service?answertab=oldest
code
I wish to write a Windows service in .NET 2.0 that listens to and processes a Message Queue (MSMQ). Rather than reinvent the wheel, can someone post an example of the best way to do it? It only has to process things one at a time, never in parallel (e.g. no threads). Essentially I want it to poll the queue, if there's anything there, process it, take it off the queue and repeat. I want to do this in a system-efficient way as well. Thanks for any suggestions!
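The usual system-efficient pattern for this is a blocking receive with a timeout rather than a tight poll, so the service sleeps in the kernel between messages; in System.Messaging that would be the MessageQueue.Receive overload that takes a TimeSpan. A rough sketch of the loop shape, in Python rather than .NET purely for illustration (the names are placeholders, not an MSMQ API):

import queue

def run_service(q, process, stop_event):
    # Handle messages strictly one at a time; block instead of busy-polling.
    while not stop_event.is_set():
        try:
            msg = q.get(timeout=1.0)   # blocks while idle, near-zero CPU
        except queue.Empty:
            continue                   # wake periodically to re-check stop
        process(msg)                   # "process it"
        q.task_done()                  # "take it off the queue"

The one-second timeout is only there so the loop can notice a stop request; the same structure maps onto a worker thread started from OnStart and signalled from OnStop in a Windows service.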
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296951.54/warc/CC-MAIN-20150323172136-00279-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
458
4
https://getstarted.sh/just-react/rendering-a-list
code
Rendering a List Usually we will have to render a list of things. So, here's how to do that in React: Doesn't it look confusing? Okay. Stay with me. I'll show you what's going on here. It's somewhat weird, but you'll understand it.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359082.78/warc/CC-MAIN-20211201022332-20211201052332-00213.warc.gz
CC-MAIN-2021-49
230
4
https://docs.engflow.com/bia/how_to_extend/adding_suggestions.html
code
Adding Suggestions to the Bazel Invocation Analyzer¶ This guide walks you through the steps to add new suggestions to the Bazel Invocation Analyzer by way of an example. This example covers most of the component types in the Bazel Invocation Architecture, with the notable exception of a provider that consumes external data (such as the Bazel Profile), which is an advanced topic. If you intend to create a provider that consumes a new data source, please study the Bazel Profile as a template. You should be familiar with the Bazel Invocation Architecture. This will give you the big picture of how the various components fit together. Note that this walk-through shows you the major aspects of the code. Some of the more mundane (but important) details such as error handling are left out for clarity. You can see these details in the actual classes the analyzer uses which are linked in the appropriate sections below. The first step to adding a suggestion is to identify a pattern in the data contained in Bazel profiles that indicates an opportunity for improvement. In this example, we will use the garbage collection data contained in Bazel profiles to provide a suggestion if the garbage collection events are taking longer than expected. The Bazel profile contains data about each garbage collection event, including timing: In this guide, we will create all the components necessary to extract the data and report on excessive Java garbage collection in Bazel. For our analysis, we need the total duration of major garbage collection events in the BazelProfile. We need to define a Datum to represent this value. Datums implement the Datum interface and simply hold the data they represent: Next, we'll need a Data Provider which extracts our value from the Bazel profile and makes our Datum available. Data Providers extend the DataProvider abstract base class and implement the getSuppliers() function to register the Datum they provide: Here we're using a builder to create the DatumSupplierSpecification based on the Datum class we're providing, as well as using a class function to return the actual data. We're wrapping it in the optional memoized helper, which caches the resulting Datum so we don't recalculate it every time it gets requested by various other components. This is optional: If, for example, the Datum was a large stream of data we might not want it memoized. To extract the data we're providing, we need to retrieve the Bazel profile from the Data Manager and extract the data we need from it: Here we're retrieving the events from the garbage collection thread in the profile, filtering them down to just the major garbage collections (by event name and category), and summing the durations. (See the actual implementation for complete details with error handling.) Now that we have the total major garbage collection time available, we can add the actual suggestion in a Suggestion Provider. Suggestion Providers extend the abstract SuggestionProviderBase class and implement the getSuggestions virtual function. First, we'll retrieve the GarbageCollectionStatsDataProvider we created above from the Data Manager in the same way we retrieved the BazelProfile, check the data to see if it warrants a suggestion, and if so create the suggestion and return it. If the suggestion is not relevant, we simply return an empty list of suggestions: We have a Suggestion Provider Utility class that helps us build suggestions and all the related fields. 
The createSuggestion helper takes the following elements of a suggestion:
- Title - short title for the suggestion
- Recommendation - this is the body of the recommendation itself, including details
- Potential Improvement (optional) - how much faster could this invocation be if this suggestion is implemented
- Rationale (optional) - why this suggestion is being made based on the profile analyzed
- Caveats (optional) - any stipulations about why this suggestion was made, or other information that could have been useful in validating or improving the suggestion
For this example, we'll simply suggest increasing the Java heap size to give Bazel more memory to work with before garbage collection is necessary. We want to give as much detail as possible, including the Bazel flag ( --host_jvm_args) that is used to adjust this size. We also want to give the rationale for why we're making this suggestion. To provide information about the potential improvement, we want to compare the time in garbage collection to the total duration of the invocation to put it in perspective. Luckily there's already a Datum that contains this value: TotalDuration. We don't need to know what Data Provider provides it, we just need to retrieve it from the Data Manager: We can use another function in the Suggestion Provider Utility class to help us create the potential improvement. createPotentialImprovement takes the message we want to present as well as the potential reduction percentage. We'll also use functions in the Duration Utility class both to format the times we have and to calculate the reduction percentage. You can see this all come together, along with error handling and an additional suggestion about reducing memory usage, in the actual implementation. Running the profile analyzer on a Bazel profile with major garbage collection events will produce output similar to: If you still have questions, see potential improvements, or would like to provide feedback you can get in touch with us by:
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510924.74/warc/CC-MAIN-20231001173415-20231001203415-00032.warc.gz
CC-MAIN-2023-40
5,475
40
http://www.appszoom.com/android_applications/personalization/the-game-of-life-wallpaper_wyas.html?nav=related
code
The Game of Life Wallpaper by: Development Mill • 3 Live Wallpaper based on The Game of Life, also known simply as Life. The "game" is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.
- Any live cell with fewer than two live neighbours dies, as if caused by under-population;
- Any live cell with two or three live neighbours lives on to the next generation;
- Any live cell with more than three live neighbours dies, as if by overcrowding;
- Any dead cell with exactly three live neighbours becomes a live cell, as if by reproduction.
Features: set dead, living or grid color; load default patterns; create custom patterns; remove or rename custom patterns; set the update delay. Immerse yourself in the game of life!
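The four rules above translate almost line for line into code. A minimal sketch in Python (not the app's own implementation, just the standard algorithm), storing live cells as a set of (x, y) coordinates:

from itertools import product

def neighbours(cell):
    # The eight cells surrounding a given cell.
    x, y = cell
    return {(x + dx, y + dy)
            for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)}

def step(live):
    # Apply one generation of the four rules to a set of live cells.
    candidates = live | {n for cell in live for n in neighbours(cell)}
    return {cell for cell in candidates
            if len(neighbours(cell) & live) == 3                      # birth, or survival with 3
            or (cell in live and len(neighbours(cell) & live) == 2)}  # survival with 2

# A 'blinker' oscillates between a row and a column:
print(step({(0, 0), (1, 0), (2, 0)}))   # -> {(1, -1), (1, 0), (1, 1)}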
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607242.32/warc/CC-MAIN-20170522230356-20170523010356-00442.warc.gz
CC-MAIN-2017-22
870
14
http://www.shapeways.com/product/VRST5X4KK/geary-master-pentultimate-part-3?li=more-from-shop&optionId=6550443
code
About this Product This puzzle is the result of putting internal gears in a master pentultimate, making opposite sides turn in opposite directions for all turns. This puzzle measures approximately 40mm to an edge. To order this puzzle, you must order it in three parts. Part 1: http://www.shapeways.com/model/729226/geary_master_pentultimate_part_1.html Part 2: http://www.shapeways.com/model/755022/geary_master_pentultimate_part_2.html Part 3: http://www.shapeways.com/model/754976/geary_master_pentultimate_part_3.html Additionally, assembling this puzzle requires 72 #4-40 x 3/8” screws, and super glue if you want to glue the caps in place. M3 screws of a similar length are likely to work as well. I can supply you with screws if you are unable to find ones that fit. Contact me if you are interested in a fully assembled, stickered, and dyed version of this puzzle. I have a sticker template available if you’d like to cut stickers yourself, but I can also cut you stickers for a small fee. For more information, see the thread on the Twistypuzzle forum: http://twistypuzzles.com/forum/viewtopic.php?f=15&t=24659 What's in the Box Geary Master Pentultimate Part 3 in White Strong & Flexible This model is 3D Printed in White Strong & Flexible: White nylon plastic with a matte finish and slight grainy feel. Last updated on 10/14/2014
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163512.72/warc/CC-MAIN-20160205193923-00091-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
1,345
14
https://my.ccsinsight.com/members/anon/new.html?network_id=863
code
Instant Insight: Amazon re:MARS Event, 2019 - Concise analysis of Amazon's inaugural event dedicated to discussions about machine learning, automation, robotics and space. 06 Jun 2019
Instant Insight: Intel Data-Centric Innovation Day, 2019 - Concise analysis of Intel's event to unveil several data-centric products. 03 Apr 2019
Instant Insight: Nvidia GPU Technology Conference 2019 - Concise analysis of announcements at Nvidia's annual conference. 21 Mar 2019
Instant Insight: Intel Architecture Day, 2018 - Concise analysis of the strategy update and news offered at Intel's event in Santa Clara. 13 Dec 2018
Instant Insight: Samsung Developer Conference, 2018 - Concise analysis of news from Samsung Developer Conference 2018, including the unveiling of a foldable device. 08 Nov 2018
Instant Insight: Arm TechCon 2018 - Concise analysis of Arm's conference and its vision for the Internet of things. 22 Oct 2018
Instant Insight: Nvidia Pushes Artificial Intelligence at GTC Europe 2018 - Concise analysis of announcements from Nvidia's GTC event in Europe. 11 Oct 2018
Instant Insight: Google Cloud Next 2018 Sees Company Move Closer to the Edge - Concise analysis of Google's announcement of new initiatives for edge computing in the Internet of things. 26 Jul 2018
Instant Insight: Google I/O 2018 - Concise analysis of announcements from the first day of Google's developer event. 09 May 2018
Instant Insight: Dell Announces Internet of Things Division - Concise analysis of news from Dell's IQT Day 2017. 11 Oct 2017
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525587.2/warc/CC-MAIN-20190718083839-20190718105839-00050.warc.gz
CC-MAIN-2019-30
1,508
30
https://www.experts-exchange.com/questions/25094669/Windows-Server-2008-64Bit-activate-60-day-trial-using-OEM-key.html
code
When I got a new server I downloaded Microsoft’s 60 day trial of Windows 2008 Enterprise, as I originally ordered the standard version of Windows and not the enterprise version. I thought that when I got the key I would be able to activate it. But when I got the new disk, the key was an OEM version (as it was bought with the hardware), which is a different key version to the download version. I did attempt to ring Microsoft about what to do but they just put the phone down on me. Is there a way to activate the 60 day trial version of Windows using an OEM key? Otherwise I will have to reinstall Windows. I’ve tried pressing the "change product key" option but this will not accept my key. I do have a genuine DVD copy of Windows Enterprise 64Bit (OEM). Current installed Version (60 day trial): Version 6.0 (Build 6002): SP2
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540508599.52/warc/CC-MAIN-20191208095535-20191208123535-00061.warc.gz
CC-MAIN-2019-51
832
9
https://joejoomla.joesonne.com/how-to-find-the-absolute-path
code
Have you ever been stumped in a configuration file or some other installation for a web based program when asked for an absolute path? The thing that you are installing will not work because the application insists on you entering the absolute file path yourself, which you simply do not happen to know. There's no way around it: you have to get the absolute path to the file or folder where items are going to be stored or used by the application. You are thinking to yourself 'why can't the stupid program figure it out on its own?' That's not going to be helpful though; you just have to know how to find absolute paths. Step 5 in the installation process when installing Joomla! is the FTP configuration. If you want to 'Enable FTP file system layer' in this step it requires that you input the FTP root path. There is a button labelled 'Autofind FTP Path' but it doesn't always work. It has never worked for me when setting up a local install environment on my Mac using MAMP. Fortunately you don't need this step in order to get Joomla! to install, but if you want the FTP file system layer set up you are going to have to type it in the field required for it. Once you type the absolute path in the FTP root path field there is a button to 'Verify FTP Settings' which does verify the settings when they are correctly input. You cannot get past this step with a relative path; it must be the absolute path. It can be really frustrating to find the absolute path. Web hosting services have all kinds of different set ups on how they serve domains on their servers. The path you need can literally be hidden from you. So what is the purpose of the absolute path? The absolute path is a path containing the root directory. The system can find all the other sub directories relative to the root directory once it knows its location. An example of an absolute path vs a relative path on a web server would be '/home/yourdomainname/public_html/images' vs just 'images'. So this absolute path is what helps the application or system find its way. Fortunately there is an easy way to figure out what the absolute path is. Joomla! is a PHP open source content management system. What we need is a PHP script that will help us determine where the absolute path is on the server. You can easily make this script yourself using the very handy free text editor TextWrangler application from Bare Bones Software. Don't use MS Word for this. You want a simple text editor that will not add, remove, or change characters you type in. <?php echo __FILE__; ?> Save the above file as findpath.php. It is important that you use '.php' (without the quotes) in the file name as you are making a PHP script file. You now need to upload this file to your web server. Put this file in the public_html folder of your web server. This will be the root directory of your domain. If you have sub-domains inside that root directory with additional Joomla! installs you will need to put it into the appropriate folder. For the most part, if you only have one domain being served by your web hosting service, it will go in the public_html folder. To invoke this script and find the absolute path for your Joomla! installation all you need to do is type in your browser window: http://www.yourdomainname.com/findpath.php. Be sure to change 'yourdomainname.com' to whatever the correct URL is for your domain.
When you type this in your web browser it will display the absolute path, similar to this: This Is Your Absolute Path: /home/yourdomainname/public_html/findpath.php. The part you want is '/home/yourdomainname/public_html'. If you were trying to find the absolute path for a MAMP local machine installation you would put the 'findpath.php' file inside your Joomla! installation folder, which resides in the htdocs folder of MAMP, and the absolute path would look something like this: /Applications/MAMP/htdocs/yourjoomlafolder/findpath.php. Of course you could put this file inside a folder in a sub-directory and get the path to that directory. This is very handy for sorting out path issues for bulletin board forums, photo albums, and other applications on your web server. When you are done it is VERY IMPORTANT that you DO NOT leave this file on your server. If you do, it can be a big security risk for you if someone else comes across this file. It could be used to exploit a weakness of an application or program on your server that may be poorly written. Be sure to delete 'findpath.php' as soon as you have finished with it.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00691.warc.gz
CC-MAIN-2023-14
4,327
15
https://seostudio.tools/json-viewer
code
What is JSON?
JSON (JavaScript Object Notation) is a lightweight data-interchange format built on two structures:
- A collection of name/value pairs (often realized as an object, record, struct, dictionary, hash table, keyed list, or associative array).
- An ordered list of values (often realized as an array, vector, list, or sequence).
It's commonly used for transmitting data in web applications (e.g., sending data from the server to the client, so it can be displayed on a web page, or vice versa). JSON is often used for serializing and transmitting structured data over a network connection. It is primarily used to transmit data between a server and web application, serving as an alternative to XML.
What is a JSON Viewer Tool?
A JSON Viewer Tool is a free online tool that displays and interacts with JSON data in a readable and structured format. It typically provides a clear, tree-like representation of JSON data, making it easier to understand and navigate. The main purpose of this tool is to format and beautify your JSON data so that it is easier to read and debug. JSON is a popular data format used for data interchange on the web, but it can often appear dense and difficult to read, especially when it's not formatted or when it contains complex nested structures.
How Does the JSON Viewer Tool Work?
The JSON Viewer Tool works by parsing the JSON data provided by the user. Here's a step-by-step explanation of its functioning:
- Input: The user inputs a JSON string and clicks View.
- Parsing: The tool parses this JSON string to understand its structure, including objects, arrays, and key-value pairs.
- Display: It then displays this structured data in a more readable format, typically in a tree-like hierarchy.
- Interaction: Users can expand or collapse different nodes in the tree to better understand the relationships and hierarchy within the JSON data.
Benefits of the JSON Viewer Tool
- Improved Readability: By converting JSON data into a structured, tree-like format, the tool makes it more readable and understandable, especially for complex data.
- Data Navigation: Users can easily navigate through different levels of data, which is particularly useful for deeply nested JSON.
- Debugging Aid: It helps in debugging by allowing developers to quickly identify errors in JSON structure or syntax.
- Learning Tool: For those new to JSON, it provides an excellent way to learn JSON structure and syntax by visualizing how data is organized.
- Efficiency: It increases efficiency in working with JSON, especially for tasks like data analysis, formatting, or editing.
Example of What This Tool Does:
- Input: a compact JSON string such as {"city": "San Diego"}
- Output after clicking 'View': an expandable tree node, e.g. city: San Diego
This output demonstrates the transformation of a compact JSON string into a structured and easily navigable format.
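In code terms, the Parsing and Display steps above are essentially what a JSON library does with a parse call followed by a pretty-printing dump. A quick Python illustration (the sample data is made up around the 'San Diego' value from the example):

import json

raw = '{"city": "San Diego", "tags": ["json", "viewer"]}'
data = json.loads(raw)                # Input + Parsing steps
print(json.dumps(data, indent=4))     # Display step: structured and readable
# Prints:
# {
#     "city": "San Diego",
#     "tags": [
#         "json",
#         "viewer"
#     ]
# }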
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100016.39/warc/CC-MAIN-20231128214805-20231129004805-00776.warc.gz
CC-MAIN-2023-50
2,699
23
http://www.drdobbs.com/architecture-and-design/parasoft-releases-concerto-alm-tool/219200435
code
Parasoft has announced the release of Concerto, software that integrates and facilitates the software development lifecycle (SDLC) to ensure that software can be produced consistently and efficiently. Concerto complements an organization's existing technical infrastructure, connecting distributed components (e.g., requirements management, defect tracking, source control management, etc.) to better facilitate natural human workflow. Concerto manages "what" needs to be accomplished within the context of "how" management expects those tasks to be accomplished. More importantly, the system allows managers to establish their working expectations via a policy. Policies are monitored "behind the scenes" within an unobtrusive, invisible infrastructure that only interacts with (nudges) staff when policies are not being followed as expected. Policies, which are monitorable in real-time, increase efficiency by reducing the need for long meetings and removing rework. Concerto is an Application Lifecycle Management solution that provides a comprehensive and objective view of SDLC tasks as well as application quality and project risks. It is designed to help organizations set expectations, govern workflow, manage tasks, and monitor compliance. For example, Parasoft Concerto's Report Center, which is one of the solution's integrated components, helps developers verify whether a project is on budget, validate that the required quality is achieved based on the policies, and determine when additional resources are needed (for instance, because the work has become more complex than expected).
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860557.7/warc/CC-MAIN-20180618125242-20180618145242-00365.warc.gz
CC-MAIN-2018-26
1,600
3
http://forum.psicode.org/t/transition-metal-optimization/1028
code
My intuition is that you’re not landing on the same SCF state each time. In other words, when Psi tries to find the molecular orbitals, it’s finding an excited set of molecular orbitals instead of the ones you want. This is a common transition metal problem. Look at your Hartree-Fock/DFT solutions and check how similar they are to each other. If subsequent iterations agree to within the third decimal place, your SCF is fine. If they only agree to the second decimal place, you may have an SCF problem. If they only agree to the first decimal place (or not even that) you definitely have an SCF problem.
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743105.25/warc/CC-MAIN-20181116152954-20181116174954-00328.warc.gz
CC-MAIN-2018-47
610
2
https://oasisrose.garden/knowledge-base/install-oasis-remote-signer-binary/
code
Install Oasis Remote Signer Binary
You only need to install the Oasis Remote Signer binary if you intend to configure your Oasis node with a remote signer setup. An example of such a setup is described in Using Ledger-backed Consensus Key with a Remote Signer. The Oasis Remote Signer binary contains the logic for implementing various Oasis Core signers (i.e. a Ledger-based signer, a file-based signer, or a combination of both via the composite signer) and a gRPC service through which an Oasis node can connect to it and request signatures from it. The Oasis Remote Signer is currently only supported on x86_64 Linux systems.
Downloading a Binary Release
We suggest that you build the Oasis Remote Signer from source yourself for a production deployment of an Oasis node with a remote signer setup. For convenience, we provide binaries that have been built by the Oasis Protocol Foundation. Links to the binaries are provided in the Network Parameters page.
Building From Source
Although highly suggested, building from source is currently beyond the scope of this documentation. See Oasis Core's Build Environment Setup and Building documentation for more details. The code in the master branch might be incompatible with the code used by other nodes in the Mainnet. Make sure to use the version specified in the Network Parameters.
Installing the oasis-remote-signer Binary
To install the oasis-remote-signer binary for the current user, copy/symlink it to
To install the oasis-remote-signer binary for all users of the system, copy it to
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100518.73/warc/CC-MAIN-20231203225036-20231204015036-00125.warc.gz
CC-MAIN-2023-50
1,493
19
https://forums.theregister.com/post/reply/2253834
code
There are issues... ISPs should not really know what data is streaming into their portion of the network or where it originates. What is being communicated is none of the business of the carrier, be they carrying postal mail or transmitting video. In this respect there is a net neutrality issue, because in order to charge someone like NetFlix an ISP would have to be prying into private communications.

Ultimately, the users of the network should pay for the network. Both the customer downloading content and the provider (such as NetFlix) streaming the content are 'users'. They should both pay to send or receive whatever they are sending or receiving to the backbone. They are 'peers' in this sense. NetFlix pays their bandwidth providers to push data up to the backbone and their customers pay *their* bandwidth providers to pull the bandwidth down from the backbone.

It makes little sense for the same traffic to be constantly traversing the network. It *should* be cached as a matter of course. However, if NetFlix must encrypt streams for each individual recipient, it will not actually be the same data that is being sent. If a company like NetFlix is abusing the network by forcing the same enormous volumes across the network over and over, they should be made to bear some financial cost associated with any malfeasance such as refusing to allow caching. Until they bear the cost of some of the inefficiencies they introduce, they will have little incentive to correct their behavior. That being said, we should err on the side of maximizing neutrality with respect to sender/receiver and the nature of the data being moved.

It seems to me that we come to problems like this because bandwidth is limited and much of that limitation is an enforced artificial scarcity to support the old-fashioned revenue models of the communications cartels. Where I live, Bell Canada still charges the unwary as much as $0.91 per minute for a long distance call within the province. [http://www.bell.ca/Home_phone/Long_distance_rates]. Even with our semi-crippled network infrastructure, that represents better than a 100,000 per cent mark up; pretty good if you can get it.

We are still treating EM spectrum, cable and telephone lines as if what they are carrying is pinned to how it is being carried. This has carved up bandwidth inefficiently and resulted in a lack of competition among the different modes of transport. Both result in higher costs for bandwidth. We need to get everyone on board to create enormous transparent backbone networks that are essentially public assets and essentially free to use, and to remove regulations artificially propping up differences between modes of transport that no longer apply.

We also need to have a conversation guided by people we trust. There is much confusion about all this, and it is because the waters have been muddied by people who simply don't understand the network attempting to work against disinformation supplied by ones who do understand it but have a vested interest in the confusion, allowing them to stifle competition and charge more for things than they are worth. The confusion sown by both the genuinely confused and the network cartels means that we can never have a sensible conversation about prioritizing bandwidth.

Some things, such as real-time responses to timing signals, keystrokes, etc., require low latency. Some things, such as voice communications, require QOS so that there are no interruptions sufficiently long to interrupt communication.
Some things require lots of bandwidth, some require very little. Some traffic, such as text messages, requires very little in terms of quality. Email does not suffer much if there are longish delays in moving things about or constrained bandwidth. Real-time video conferencing across state lines requires fairly snappy response times and potentially lots of uninterrupted bandwidth.

The value of different qualities of bandwidth differs. In order to maximize the economic efficiency of network investment, we need to be able to set different tariffs for bandwidth of differing value. Unfortunately, we cannot trust any of the incumbent network providers not to abuse such a thing, and we cannot trust the system overall to protect the disadvantaged from being pushed out into a second class slow lane.

The ideal would be to have nothing but ultra-low latency and essentially unlimited, uninterrupted bandwidth. That is, the ideal would be if there was only one single quality of bandwidth that was adequate for all needs. That is not likely to happen on a real network for the foreseeable future, and hence we need to be realistic about how we charge for different types of bandwidth.

There is much that requires improvement on our global network. I don't think that the status quo of ridiculous confusion is ultimately helping anybody. It certainly is not maximizing the greatest good.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400202686.56/warc/CC-MAIN-20200922000730-20200922030730-00731.warc.gz
CC-MAIN-2020-40
4,890
11
http://freecode.com/tags/bsd-three-clause?page=1&sort=name&with=370&without=
code
OutputFilter is a PHP library that can be used to filter the values of scalars, arrays, and data objects recursively. It can be easily integrated into frameworks. A solution for Zend Framework and the Smarty template engine is already included. A program for monitoring JavaEE applications. Software that rips DVDs to MOV, MP4, M4V, and iTunes formats.
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00042-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
337
3
https://premium.wpmudev.org/forums/topic/bp-social-theme-is-not-showing-articles-from-the-web-section/
code
I already have a bp-social theme v 184.108.40.206. I subscribed again to buy the latest version, 1.5.3. Even in the most recent version, the home page doesn't show the "Articles from the Web" section. It displays the articles/posts in the Recent Posts widget, but Articles from the Web is not generated at all. Please help me ASAP; otherwise, kindly refund my money. I'll then prefer another, more stable theme.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347401260.16/warc/CC-MAIN-20200529023731-20200529053731-00457.warc.gz
CC-MAIN-2020-24
400
1
https://xplaind.com/104874/cyclical-unemployment
code
Cyclical unemployment refers to the increase in total unemployment that occurs when an economy is in recession. It is represented by the difference between the unemployment rate and the natural rate of unemployment. The unemployment rate is never zero, not even at the peak of economic booms. This is because some sources of unemployment, such as the mismatch between available jobs and workers, exist during all phases of the business cycle. The actual unemployment rate (ua) fluctuates around the natural rate, i.e. it increases when the economy enters recession and decreases when it makes a recovery. The actual rate of unemployment (ua) can be defined as the sum of the natural rate of unemployment (un) and the rate of cyclical unemployment (uc): $$ u_a=u_n\ +\ u_c $$ The rate of unemployment that prevails during all phases of a business cycle is called the natural rate of unemployment (un). It changes in response to non-cyclical factors such as demographic changes, changes in minimum wage, etc. $$ u_n=u_f\ +\ u_s $$ Frictional unemployment (uf) results from the time it takes to match suitable candidates to jobs. Structural unemployment (us) occurs when at the prevailing wage there is a surplus of workers and the market is not able to reach equilibrium due to wage rigidity. It follows that the actual unemployment rate is the sum of the rate of frictional unemployment, the rate of structural unemployment and the rate of cyclical unemployment: $$ u_a=u_f\ +\ u_s\ +\ u_c $$ Cyclical unemployment closely mimics the output gap, the difference between actual gross domestic product (GDP) and potential GDP: when cyclical unemployment is high, the output gap is high too, and vice versa. This relationship is expressed by Okun’s law. Cyclical unemployment also features in the Phillips curve, which shows that decreases in cyclical unemployment cause demand-pull inflation. When cyclical unemployment is low, more people are employed, there is more income to be spent on a given amount of goods, and hence inflation rises. The following graph shows the relationship between the actual unemployment rate and the natural rate of unemployment: FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/ It is clear from the graph above that the actual unemployment rate (represented by the red line) has oscillated around the natural rate of unemployment (blue line). During recessions (represented by the grey areas), the actual rate has shot up abruptly, which represents a steep surge in cyclical unemployment. During recoveries, on the other hand, the actual unemployment rate has gravitated towards the natural rate. In some instances the actual unemployment rate is even lower than the natural rate, which indicates that cyclical unemployment was negative in that period. Written by Obaidullah Jan, ACA, CFA and last modified on
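To make the decomposition concrete, here is a worked example with made-up numbers (they are not from the article): suppose frictional unemployment is 2%, structural unemployment is 3%, and the measured actual unemployment rate is 7%. Then: $$ u_n=u_f\ +\ u_s=2\%\ +\ 3\%=5\% $$ $$ u_c=u_a\ -\ u_n=7\%\ -\ 5\%=2\% $$ A positive cyclical component of 2% indicates an economy operating below potential; in a boom, ua can dip below un, and the same formula yields a negative uc, matching the episodes in the graph where the red line falls under the blue line.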
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258451.88/warc/CC-MAIN-20190525204936-20190525230936-00009.warc.gz
CC-MAIN-2019-22
2,824
15
https://community.octoprint.org/t/enhanced-terminal/36631
code
Hey guys. Do you know if there is a plugin that can enhance the terminal output? I am looking for something that could add, for instance:
- a dark background
- a different color for printer responses (maybe light gray), removing "Recv:"
- a different color for commands, removing "Send:"
- embedding the input field in the same window would be good too
- allowing multi-line input commands
- enter to send
Do you know anything with this capability?
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057371.69/warc/CC-MAIN-20210922163121-20210922193121-00666.warc.gz
CC-MAIN-2021-39
456
9
http://std.dkuug.dk/jtc1/sc2/wg2/docs/n2134.htm
code
ISO/IEC JTC1/SC2/WG2 N2134

Pawel Wolf of the Grabung Musawwarat at the Humboldt-Universität in Berlin responded to my exploratory proposal to encode Meroitic in the UCS in SC2/WG2 N2098 (http://www.dkuug.dk/jtc1/sc2/wg2/docs/n2098.pdf).

1. Latin transliteration is generally satisfactory for Meroitic scholars.
That's fine. We hope that the 23 transliteration characters and word divider mentioned can be represented in the UCS, but if they cannot we need more input.

2A. Palaeographic characteristics of texts are not considered.
Palaeographic concerns could be handled by glyph variants in the fonts; this would still be advantageous to scholars wanting to use such codes for vocabulary lists which could be sorted and searched. Academics prepare documents for publication and teaching which may not be strictly palaeographical in nature. These may include grammars, dictionaries, teaching materials, examination papers, etc.

2B. Transliterated texts are written from left to right.
That is perfectly normal for Latin transliteration. The question remaining to be answered is, if scholars were to use actual Meroitic characters in text, would they prefer to represent them left-to-right or right-to-left? Egyptologists usually prefer left-to-right presentation. Etruscologists have informed us explicitly that they prefer left-to-right presentation for Etruscan script, even though many Etruscan texts are right-to-left. (I think this means they even reverse photographs from time to time.)

2C. Some Latin transliteration characters are not generally available.
Please be more specific.

3. Scholars prefer photographs to fonts.
See 2A above.

4. Encoding Meroitic would only be useful for popular science.
One must recognize that there are indeed users of the Universal Character Set other than academic users. Nevertheless, we do have a strong commitment to supporting the best scholarship (as we did for Ogham and Runic, already encoded in the UCS). We are very interested to learn if there are Latin transliteration characters which cannot be represented with the UCS. The UCS has to take into consideration many different user requirements. Runic and Ogham have both "serious" academic users and "popular" amateur users. Whether these latter can be served without compromising the academics is something we have to take into account. There is, however, no urgency to encode any immature proposal without the blessing of the academics.

5. If a standard were created it should "use the same internal coding like the one used for the transliteration fonts".
Does this mean one-to-one mapping or font shifting? If the latter, it would mean that Meroitic characters were to be considered glyph variants of Latin characters, which would not be normal for UCS treatment of scripts.

6 & 7. A number of the characters do not have the correct shape.
I would very much like to receive clear photocopies of the relevant parts of the three articles in question (Priese 1973, Hochfield & Riefstahl 1978, Hainsworth & Leclant 1978). These are not available to me in Dublin. I can make a font for these available.

8. Four additional characters should be added to the repertoire.
More information on these would be welcome.

9. Cross references to Egyptian hieroglyphs in the character names are undesirable and they should be removed.
Easily done. The parentheticals will be removed.
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141674082.61/warc/CC-MAIN-20201201104718-20201201134718-00336.warc.gz
CC-MAIN-2020-50
3,377
24
https://pubs.lenovo.com/se450/install_rot_module
code
Install the Firmware and Root of Trust/TPM 2.0 Security Module
See this topic to learn how to install the Firmware and Root of Trust/TPM 2.0 Security Module.
About this task
- Lower the security module until it is firmly seated on the system board.
- Secure the security module to the system board with two screws. (Figure 1. Installing the security module)
- Install the system board if necessary (see Install the system board assembly).
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473347.0/warc/CC-MAIN-20240220211055-20240221001055-00063.warc.gz
CC-MAIN-2024-10
435
6
https://latin.stackexchange.com/tags/predicate/hot
code
Bennett's New Latin Grammar (this link will take you to the appropriate section) offers several helpful rules of thumb for the agreement of an adjective with multiple nouns. Although I recommend reading the above entry, which is fairly short, the basic principles are: Attributive adjectives agree with the nearest noun in both gender and number, e.g. "Filius ...

Pinkster 2015 mentions the following observable trends regarding the omission of esse: it is more frequent with the 3rd person than in the 1st or 2nd; it is more frequent with present indicative forms; it is more frequent in simple nominal sentences; etc. (see pp. 201-204 for more details). Stolz and Schmalz add that the omission of esse is regular in ...

What you're calling a "predicate noun" is, in fact, the subject. In the Latin construction, unlike the English translation, the thing possessed is the subject, so the verb has to agree with it. E.g. in Puellis est rosa, even though this can be translated as "The girls have a rose", a literal translation would be something like "A rose is to/for the girls". ...

If the adjective is plural and it refers to words of several genders, I seem to recall the masculine is used by default. But I believe a Roman author would indeed recast a sentence like this, especially because it also refers to a neuter word. If the adjective is singular, it should agree with the last noun mentioned.

Yes to the first, usually no to the second. In Latin, esse can almost always be dropped if the meaning is clear. This is even true when it's connected to another verb form, like in a perfect passive captus [est] or a passive periphrastic delenda [est]. Linguistically, this is called zero copula, and also appears in e.g. Russian. Consider also English ...

The thing being possessed is the subject in this construction. The verb agrees with the subject, but the subject in your example is not the girl. Do not confuse the plural nominative and singular dative, although they both end in -ae. Consider these examples (cases indicated in parentheses): Girl has a rose. Puellae (dat) est rosa (nom). Girls have a rose. ...

συμβαίνω in the sense "happen to" (section A.III.b of the LSJ entry) takes the dative for the person something happens to. In this sentence, the dative it takes is the relative pronoun ᾧ: "to whom it happens". The copula εἶναι requires that its predicate should appear in the same case as whatever word it's being equated with; most often ...
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00618.warc.gz
CC-MAIN-2021-49
2,478
21
https://brownspace.org/
code
We Are Brown Space Engineering We are an undergraduate student group at Brown University. Our main focus is PVDX, a 3U CubeSat. PVDX's mission is 1) to test novel perovskite solar cell technology and 2) to create a way for anyone to interact with space! Our first satellite was EQUiSat, a 1U CubeSat which has a payload of 4 LEDs to create a beacon visible from Earth and LiFePO4 batteries that have never flown in space before. Our primary mission is to prove the accessibility of space to people of all backgrounds. To accomplish this we are approaching the project with a DIY attitude; if we can make a part ourselves, we do. As a result we have an extremely low budget. Additionally, we are open sourcing EVERYTHING. Our latest information can be found on our resources page!
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500365.52/warc/CC-MAIN-20230206212647-20230207002647-00665.warc.gz
CC-MAIN-2023-06
781
2
https://experienceleaguecommunities.adobe.com/t5/adobe-livecycle-questions/first-instance-in-a-subform-to-repeat-on-each-page/qaq-p/123817
code
I'm not very advanced in this software and I'm having a difficult time understanding the process. Where can I find the XML that shows the tree that you outlined? Also, where should I place the script? When I keep the binding as Global it makes all the names the same. Thank you for your assistance, but can you please tell me how to bind it only to the first row? When I go under Object > Binding and use global binding, the other "name fields" in my table become the same. Since you always wanted the first instance to repeat on every page, why don't you create a subform in the Master Page? This subform should have a table with one row which can be bound to the first instance of the repeating section in the XML. In this case, your hierarchy would be something like this:
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594341.91/warc/CC-MAIN-20210422160833-20210422190833-00435.warc.gz
CC-MAIN-2021-17
773
4
https://fyeomans.com/2012/01/06/give-your-employees-unlimited-vacation-days/
code
I found the article Give Your Employees Unlimited Vacation Days | Inc.com. interesting, as I have had this discussion with people in the past. While I pretty much agree with what is said in the post, I have a few things I would like to say (as I do about almost everything!), and a couple of counterpoints. Going back a bit, vacation time has never been something I looked at closely when I was younger, as I hardly ever took vacation. Through the 90s, I think I went 5 or 6 years without ever taking more than a day or two of vacation. In fact, back in 1999 when I joined Whitehill Technologies, I never even thought to discuss vacation when we were negotiating the employment agreement. Some time after I started, I thought to ask my boss (the CTO) about it, and he said “Fred, you have as much vacation as you can find time to take!”. Note, it turns out I took no vacation in the first few years! I believe vacation time is important, though. Even when I was not taking vacation, and running myself into the ground, I believed it was part of my responsibility as a leader to ensure members of my team took their vacation time. This all stems from my belief that it is a leader’s job to ensure their team stays healthy for the long haul. If you burn your team out during the first few games of the season, you will have nothing left for the playoffs. You have to protect your team, keep them healthy, protect them from the demands of the business, and in many cases (especially with young developer-types who think they are super-human) protect them from themselves. This is one of the problems I see with the “unlimited vacation days” model, which is often phrased as “take as much or as little vacation as you want”. Unless it is implemented very carefully, and managed by people who truly look out for their teams, there is a great risk of people not taking vacation and burning themselves out – not a good scenario for the staff or the business. The second issue I have with the “unlimited vacation days” model is that people may feel pressured to take fewer vacation days as they feel they will be viewed poorly for taking time off. This is especially true in a business where you are judged based (wholly or partly) on billable hours realization. There is pressure (real or perceived, implicit or explicit) to not take vacation in order to exceed your target – and you are frequently rewarded and cheered for doing so. Again, this is something that must be carefully managed if you want to ensure your employees maintain life balance. This is already a problem with “defined vacation allowances”, since many people in North America do not take the vacation allotted to them (see, for example, here). I think there is a risk of the situation becoming much worse if the amount of vacation time is undefined, especially for more junior staff. Overall, I think it is a great idea, for the reasons stated in the article. But it is not without risk, and needs to be managed, like anything else.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224653071.58/warc/CC-MAIN-20230606182640-20230606212640-00079.warc.gz
CC-MAIN-2023-23
3,031
6
https://circexplorer2.readthedocs.io/en/latest/tutorial/assembly/
code
De Novo Assembly for Circular RNA Transcripts
CIRCexplorer2 employs Cufflinks to carry out de novo assembly of circular RNA transcripts, and characterizes alternative splicing based on the assembled results. It is therefore the key step before analyzing the landscape of alternative back-splicing and alternative splicing of circular RNAs.
CIRCexplorer2 assemble -r hg19_ref_all.txt -m tophat -o assemble > CIRCexplorer2_assemble.log
- It will use Cufflinks to assemble circular RNA transcripts from the alignment result (tophat) of poly(A)−/ribo− RNA-seq (see Alignment). CIRCexplorer2 assemble will create a directory assemble by default. All the assembly information of circular RNA transcripts will be created under the directory assemble. You can also check the cufflinks.log file for detailed logs of the Cufflinks assembly.
- See Assemble for detailed information about
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224648850.88/warc/CC-MAIN-20230602172755-20230602202755-00041.warc.gz
CC-MAIN-2023-23
865
10
https://www.dynamicsuser.net/t/chage-company/5972
code
Is it possible, from one company, to create an order in another company and register it? Thank you.

Yes it is. You can access a company from another company using the instruction CHANGECOMPANY of a record variable. For example, suppose you are working in company X and you want to create an order in company Y; the code to write is:

SalesOrder.CHANGECOMPANY('Y');
SalesOrder.INIT;
SalesOrder."Document Type" := SalesOrder."Document Type"::Order;
SalesOrder."No." := '';
SalesOrder.INSERT(TRUE);
SalesOrder.VALIDATE("Sell-to Customer No.",'10000');
SalesOrder.MODIFY;

where SalesOrder is a record variable that "points" to table 36. Those lines just create an order header using the numbering defined in Sales Setup. After creating the lines (using CHANGECOMPANY), you can post the order in the usual way (send the header to codeunit 80 after setting the fields Ship and Invoice). Hope this helps you. Best regards, Marco Ferrari

And in codeunit 80, do I have to use CHANGECOMPANY on all the tables?

Yes, I think so, because there's not a way to change the company for the entire application area. In this case you have to change the company name for local variables too (and it's not a good way of working). Best regards, Marco

Yes, I agree. In Marco's codeunit, when any validation is done, it is done against the company that is currently open. company1 → the company you currently have open. company2 → the external company in which you would like to insert some records with validation. If you were to do any kind of validation on company2 from company1 by using CHANGECOMPANY(), it will be validated against the tables in company1. The workaround here is to create a batch job. This batch job has to be run from company2. When run from company2, it will pick up all the data from a temp table and start its processing. Any kind of processing here will be validated against company2.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00057.warc.gz
CC-MAIN-2022-05
1,967
5
https://support.binance.com/hc/en-us/articles/360000543371-Nano-NANO-
code
Nano (formerly known as RaiBlocks-XRB) is a cryptocurrency that aims to deliver near instantaneous transaction speed and unlimited scalability. Under a Block Lattice infrastructure, each user has their own blockchain, allowing them to update it asynchronously to the rest of the network, resulting in fast transactions with minimal overhead. Transactions keep track of account balances rather than transaction amounts, allowing aggressive database pruning without compromising security. Nano’s feeless, split-second transactions make it the premier cryptocurrency for consumer transactions. Total Supply: 133,248,289 Circulating Supply: 133,248,289
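The balance-tracking design described above lends itself to a compact illustration. Here is a toy sketch in C++ (this is not Nano's actual block format; every type and field name is an illustrative assumption) showing why blocks that store resulting balances, rather than transfer amounts, let each per-account chain be pruned down to its latest block:

#include <cstdint>
#include <string>
#include <vector>

// Toy model of a block-lattice account chain (illustrative only).
// Each block records the account balance AFTER it is applied, so the
// transferred amount is implied by consecutive balances and older
// blocks can be discarded without losing the current state.
struct Block {
    std::string previous;  // hash of this account's previous block
    uint64_t    balance;   // resulting balance, not a transfer amount
    std::string link;      // counterpart account/block for send/receive
};

struct AccountChain {
    std::string        account;
    std::vector<Block> blocks;  // this account's own, asynchronously updated chain

    // The current state is fully described by the latest block.
    uint64_t currentBalance() const {
        return blocks.empty() ? 0 : blocks.back().balance;
    }

    // A send is simply a new block with a LOWER resulting balance.
    void send(uint64_t amount, const std::string& receiver) {
        Block b;
        b.previous = blocks.empty() ? "genesis" : "hash-of-previous";  // placeholder hash
        b.balance  = currentBalance() - amount;
        b.link     = receiver;
        blocks.push_back(b);
    }
};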
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814249.56/warc/CC-MAIN-20180222180516-20180222200516-00011.warc.gz
CC-MAIN-2018-09
650
3
http://service.scs.carleton.ca/node/3446
code
Carleton University - School of Computer Science Technical Report TR-17-01, February 13, 2017
No passwords needed: The iterative design of a parent-child authentication mechanism
Kalpana Hundlani, Sonia Chiasson, Larry Hamid
Despite the fact that the vast majority of children are online, our exploration of the user authentication literature and available tools revealed few alternatives specifically for authenticating children. We create an authentication mechanism that reduces the password burden for children and adds customizable parental oversight to increase security. With Bluink, our industry partner, we iteratively designed and user tested three parent-child prototypes, with each iteration addressing issues raised in the previous iteration. Our final design is a parent-child authentication mechanism based on OpenID and FIDO U2F which allows children to log in to websites without requiring a password and enables parents using their mobile device to remotely determine whether a login request should be granted.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510501.83/warc/CC-MAIN-20230929090526-20230929120526-00258.warc.gz
CC-MAIN-2023-40
1,062
7
https://andrewducker.livejournal.com/2009/10/14/
code
I was at a party a few years ago where the people split into two groups - the ones who were happily singing around a piano and the ones who were hiding in the kitchen, aghast that people would sing, in public, for fun. The split was clearly generational in nature - the older folks had clearly grown up singing together; the younger ones considered singing to be something that was done by musicians. And despite theoretically belonging to the second group, I've generally felt that this was a bad thing. My parents used to sing on long car journeys, entertaining us when we were little, and it always seemed like a lot of fun. I can trace the point where I lost any interest in it to my first choir lesson in school, where we all lined up in rows and sang through something vaguely religious - and then afterwards the choirmaster told me that I should just mime along. This would have been twenty-six years ago, but the memory still sticks with me. The idea that a pupil who wasn't good at something should be told to just _stop_ is something that shocks me in retrospect - it's a massive failure on the part of any teacher. And the idea that singing is something that should be done only by the trained - rather than a natural expression of our humanity - is also something that bothers me deeply. There does seem to have been a resurgence recently - things like YouTube and Singstar/Rock Band seem to have encouraged people to put their own voices out there in the same way that blogs encouraged people to write. But I doubt very much that we're going to end up back at the point where sing-songs around the piano are commonplace again. Mind you - a lot of this is probably down to the fact that playing Grand Theft Auto is a more distracting and, dare I say it, fun way of spending the evening :-> All of this was triggered by a quote here in an article on the long history of articles decrying technical progress in the "content industry" - starting with Sousa (the composer) worrying about the player piano and the gramophone: "Under such conditions, the tide of amateurism cannot but recede until there will be left only the mechanical device and the professional executant."
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00504.warc.gz
CC-MAIN-2021-43
2,176
8
http://mycomsats.com/fibonacci-sequence-program-in-assembly/
code
Fibonacci Sequence Program in Assembly

First we load all the necessary header files. This program works on PC SPIM, but if you want, you can change the header files to whatever software you're using. The main logic remains the same.

Fibonacci Sequence Program in Assembly Code

.data
spc: .asciiz " "
.text
.globl main

Now we initialize the constants that are required for the loops.

main:
li $t0,1              # first term of the sequence
li $t1,1              # second term of the sequence
li $t2,9              # number of further terms to generate
li $t3,1              # loop counter

We move the required constants into the syscall argument register. If you're using AVR Studio for burning a code, skip this and just display the data on the pins directly.

move $a0,$t0          # print the first term
li $v0,1              # syscall 1: print integer
syscall
la $a0,spc            # print a space
li $v0,4              # syscall 4: print string
syscall
move $a0,$t1          # print the second term
li $v0,1
syscall
la $a0,spc
li $v0,4
syscall

Now comes the main logic. We add the two ones, put the result in one register, and swap the place of the other. For example, suppose registers R10 and R11 contain the numbers 1 and 1. We add R10 and R11, replace R10 with the contents of R11, and put the result in R11. So we have 1 and 2 now. Adding them again gives us 3, and if we keep swapping like before we will get 2 and 3 in the registers. This starts to create the sequence 1, 1, 2, 3, 5, 8, 13 and so on.

loop:
add $t4,$t0,$t1       # next term = sum of the previous two
move $a0,$t4          # print the new term
li $v0,1
syscall
la $a0,spc
li $v0,4
syscall
move $t0,$t1          # shift: previous term <- current term
move $t1,$t4          # current term <- new term
addi $t3,$t3,1        # increment the loop counter
bne $t3,$t2,loop      # repeat until the counter reaches the limit

exit:
li $v0,10             # syscall 10: exit
syscall
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211167.1/warc/CC-MAIN-20180816191550-20180816211550-00228.warc.gz
CC-MAIN-2018-34
1,389
10
https://arduino.stackexchange.com/questions/26911/how-can-i-convert-pid-correction-values-to-an-pwm-brushless-command
code
I'm working on a quadcopter and I have to make my own flight controller based on an Arduino Uno. First of all, how can I get filtered angular velocity and linear acceleration from the MPU6050? Does the following call give the angular velocity in deg/sec and the linear acceleration in m/s²?

accelgyro.getMotion6(&ax, &ay, &az, &gx, &gy, &gz);

And are the PID values in the following code computed correctly?

accel_reading = ax;
accel_corrected = accel_reading - accel_offset;
accel_corrected = map(accel_corrected, -16800, 16800, -90, 90);
accel_corrected = constrain(accel_corrected, -90, 90);
accel_angle = (float)(accel_corrected * accel_scale);
err = angle_setpoint - accel_angle;
P = Kp * err;
I += (Ki * err);
D = Kd * (err - errp);
errp = err;           // the previous error must be updated each cycle, or the D term is wrong
pid = P + I + D;
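The question in the title, converting the PID correction into a PWM command for the brushless ESCs, is not answered by the snippet itself. A common pattern is to add the correction to a base throttle on one motor of an axis and subtract it from the opposite motor, constraining the results to the ESC's pulse-width range. Below is a minimal sketch under assumed conventions: standard 1000-2000 µs ESCs driven with the Arduino Servo library, with hypothetical pin numbers, base throttle, and two-motor single-axis setup (none of these values come from the post).

#include <Servo.h>

// Hypothetical two-motor, one-axis example; pins and ranges are assumptions.
Servo esc1, esc2;
const int PWM_MIN = 1000;   // typical ESC minimum pulse width (microseconds)
const int PWM_MAX = 2000;   // typical ESC maximum pulse width (microseconds)

void setup() {
  esc1.attach(9);           // ESC signal pins (illustrative)
  esc2.attach(10);
}

// Map a base throttle plus the PID correction onto opposite motors:
// one speeds up while the other slows down, tilting the axis back
// toward the setpoint. All values are microseconds of pulse width.
void applyPid(int throttle, float pid) {
  int m1 = constrain(throttle + (int)pid, PWM_MIN, PWM_MAX);
  int m2 = constrain(throttle - (int)pid, PWM_MIN, PWM_MAX);
  esc1.writeMicroseconds(m1);
  esc2.writeMicroseconds(m2);
}

void loop() {
  float pid = 0;            // would come from the PID code in the post
  applyPid(1400, pid);      // 1400 µs base throttle (illustrative)
}

A full quadcopter mixer applies the same idea to all four motors across the roll, pitch and yaw loops simultaneously.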
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00707.warc.gz
CC-MAIN-2021-31
760
5
https://guides.library.barnard.edu/SOCI/pandemic
code
These resources may be helpful for students looking for previous scholarly work on pandemics/epidemics to help them contextualize our current crisis. This is a work in progress - please contact me if you'd like to suggest additions to this or any other part of this guide! Some links lead to e-books purchased by Barnard/Columbia Libraries or available through our online journal subscriptions. Other resources were freely available from the publisher at the time they were added to the guide. Note that these may become less freely available over time. You can use these COVID-19 data/stats together with other demographic and health data/stats available here.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663021405.92/warc/CC-MAIN-20220528220030-20220529010030-00525.warc.gz
CC-MAIN-2022-21
665
3
https://www.dkmp.wtf/git-gud-005-making-stuff-invisible
code
Git Gud 005 - Making stuff invisible! A very cool thing you can do in OpenRCT2 is making things invisible! Now why would you want to make things invisible? Wouldn't you want people to see the things you create? Well yes, but you can create some really nice effects with it! In every park I build, I will make hundreds of objects invisible! I often make my paths invisible. When you do that, it will look like your guests are walking on whatever is under the path! You could use this to make nice bridges, or to let your guests walk on the grass! You can also make parts of your ride tracks invisible, and overlay them with different tracks! Or you could make it look like your coaster is jumping over a gap! There are several ways to make things invisible, which I have explained in a tutorial video!
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710710.91/warc/CC-MAIN-20221129164449-20221129194449-00383.warc.gz
CC-MAIN-2022-49
800
3
https://academia.stackexchange.com/questions/159485/which-is-a-better-combination-for-phd-recommendation-letters
code
My field is language-related and I have been conducting research in it with two professors in the past couple of years. Is it better for me to request recommendation letters from these two professors for my PhD application or should I rely on a more diverse approach? For instance, combining letters from one of these professors with those coming from professors who taught me during the MA years (which, by the way, I graduated from several years ago) or maybe even my MA thesis advisor. I should also mention that it is not very likely for me to apply to universities in the US. Generally, doctoral programs are designed to produce researchers and academics in a given field. Given this, I suggest that you request letters of recommendation from the faculty members who can best speak to your ability to produce high-quality research. It sounds like you've been working with two professors who might be able to do that. There are some caveats, though. You want the best possible letters of recommendation you can get. If these two professors with whom you have conducted research are willing to provide excellent LORs, then I'd suggest that you ask them for LORs. But if your relationship with them is strained or if you under-performed while working with them, then perhaps you'll have to look elsewhere. I'm uncertain why a more "diverse approach" would be advantageous to you. Your LORs should speak to your ability to excel in a PhD program, your capacities for research and original thinking, the likelihood that you'll contribute to the department, and other qualifications.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100476.94/warc/CC-MAIN-20231202235258-20231203025258-00133.warc.gz
CC-MAIN-2023-50
1,582
4
https://www.techconnectworld.com/World2017/showcase/?action=viewtechnology&technology=Cyber%2C%20AI%2C%20Data%2C%20Software&maturity=Prototype
code
Enhanced Information Security Monitoring using Analog Signals
This technology serves as a measurement framework that uses micro-benchmarks to capture and catalog program-dependent signals at specific frequencies. The end goal of these inventions lies in preventing potential attackers from penetrating the computer and obtaining sensitive/secret information within it.
Plant Phenotyping and Modeling-based Crop Productivity Prediction Software
This technology includes an image-based plant phenotype analysis algorithm, a crop productivity prediction algorithm, and a plant phenotype-environment interaction analysis algorithm. It can be operated as a smartphone application connected to a cloud server. We are testing this technology on tomato, sweet pepper and strawberry.
A Scalable Personalized Thermal-Comfort Platform for Building Energy Conservation
Buildings commonly choose conservative temperature setpoints. This leads to huge energy waste. We have developed a scalable personalized thermal-comfort (SPET) platform that can quantitatively estimate the thermal comfort of any occupant in daily operations. Consequently, our platform can provide building operators a proactive temperature-setting mechanism that optimizes both thermal comfort and energy conservation.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402128649.98/warc/CC-MAIN-20200930204041-20200930234041-00505.warc.gz
CC-MAIN-2020-40
1,279
6
http://androgenre.blogspot.com/2005/11/outside-bookstore-with-sign-announcing.html
code
Outside the bookstore, with the sign announcing me. Jennifer met the man who made the sign, and it looks very nice. We'll have to work on the spelling of "Halloween," but otherwise he seems to have done a good job. There are no foreign lands. It is the traveler only who is foreign. - Robert Louis Stevenson
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00288.warc.gz
CC-MAIN-2019-43
307
3
https://luxurygaragesale.com/products/grey-python-tassel-lucrecia-handbag
code
Diego Rochas Grey Python Tassel "Lucrecia" Handbag
Bag has top zipper closure. Comes in dust bag and retails for $2,680.
Made in: Unknown
Color: Grey
Fabric Content: Python
Condition: Good. Light scratching.
Measurements & Sizing
- Total Height: 11"
- Total Length: 17.5"
- Width: 5.5"
- Handle Drop: 6"
Returns & Shipping
Luxury Garage Sale guarantees that all items sold on our site are 100% authentic. All products we accept are inspected for brand-specific guidelines and are put through a rigorous authenticity test process. LGS is not affiliated with Diego Rochas. We guarantee this Diego Rochas item to be authentic. Diego Rochas® is a registered trademark of Diego Rochas. Item #: 54426
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105961.34/warc/CC-MAIN-20170820015021-20170820035021-00326.warc.gz
CC-MAIN-2017-34
694
12
http://www.gog.com/forum/general/new_wasteland
code
OK, I'm likely being very stupid, but I've been awake for 17 hours and had 3 hours' sleep in the last 48, so I'm probably not doing "too" bad. Where is the address to send faults to? I had a crash and it asked to send the logs, but I didn't see where to send them.

"CenterCode and Documentation
Though I continue to tell the programmers not to put bugs in their code since they will just have to remove them later, we know there will be many a pesky bug. With that in mind we’re launching our CenterCode bug reporting site now as well. Each eligible backer will be receiving an invitation email to your Ranger Center primary email. Follow the link to register on CenterCode, providing detailed information on your PC as you do so. The site itself is pretty straightforward; you can report bugs and input suggestions which go straight into our internal bug reporting system, where we deal with duplicates and assign them to the appropriate developer. Simply describe your problem or suggestion, go into as much detail as possible (especially on how to reproduce bugs) and hit submit. The other important function of CenterCode is the “Resources” tab, which we’ll use to provide technical FAQs, system requirements, troubleshooting info and more as we progress in the beta. I must also give thanks and praise to the hard working team here at inXile. I am fortunate to work with such a bright and passionate group of people who are hyper focused on making a game we can all be proud of. Thanks, everyone! We’ve all been pouring our heart and soul into Wasteland 2, and I hope it shows. I’ll never forget the elation from those first two days of our Kickstarter campaign and how happy I was to get this chance. Now, after the long journey, I am filled with excitement and"
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164903523/warc/CC-MAIN-20131204134823-00016-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,770
7
https://www.cbib.u-bordeaux.fr/prestations-en/
code
The CBiB offers a broad range of services:
- analysis and integration of "omics" data,
- development and implementation of specialized analyses,
- access to its information management system,
- implementation of bioinformatics methods and pipelines.
Our expertise is particularly acknowledged in algorithms, data integration, and the analysis and visualization of high-throughput biological data. Whether for large-scale projects or for specific analyses, our team has the expertise, the equipment and the ability to study the complexity of biological systems. The following services are offered:
- Standard and custom biological data analyses.
- Deployment of "big data" approaches for data analysis.
- Development and maintenance of biological databases.
- Access to and development of high-performance bioinformatics software infrastructure, in particular for NGS data.
- Custom data analysis and consulting services for bioinformatics and cheminformatics projects.
- Establishment of research collaborations with bioinformaticians and experimental scientists from different departments.
- Hands-on tutorials and workshops on a wide variety of bioinformatics topics.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100184.3/warc/CC-MAIN-20231130094531-20231130124531-00325.warc.gz
CC-MAIN-2023-50
1,164
14
https://fodok.jku.at/fodok/publikation.xsql?PUB_ID=62787
code
S. McCallum, Franz Winkler, "Resultants: Algebraic and Differential", RISC Report Series, Johannes Kepler University Linz, Austria, Number 18-08, RISC, JKU, Hagenberg, Linz, 8-2018
Resultants: Algebraic and Differential
This report summarises ongoing discussions of the authors on the topic of differential resultants which have three goals in mind. First, we aim to try to understand existing literature on the topic. Second, we wish to formulate some interesting questions and research goals based on our understanding of the literature. Third, we would like to advance the subject in one or more directions, by pursuing some of these questions and research goals. Both authors have somewhat more background in nondifferential, as distinct from differential, computational algebra. For this reason, our approach to learning about differential resultants has started with a careful review of the corresponding theory of resultants in the purely algebraic (polynomial) case. We try, as far as possible, to adapt and extend our knowledge of purely algebraic resultants to the differential case. Overall, we have the hope of helping to clarify, unify and further develop the computational theory of differential resultants.
RISC Report Series, Johannes Kepler University Linz, Austria
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710931.81/warc/CC-MAIN-20221203111902-20221203141902-00646.warc.gz
CC-MAIN-2022-49
1,334
8
https://dev.to/williammatz/comment/leca
code
In college, I started building a website for people to showcase the cool projects they build. While this is my coolest project by far, I have a portfolio of other projects I've built that's hosted on the site! (inception?) 😂 Here's the link to the projects I've built which is hosted on the coolest project I've built: joinhelm.com/portfolio/5d6c7f9c19e... Cool, thanks for sharing! 😃
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00188.warc.gz
CC-MAIN-2020-29
522
5
https://docs.infrascale.com/dr/cfa/management-console/boot/active
code
The Active tab lists VMs currently booted on the CFA. Information on the Active subtab is presented in table format with the following default columns:
|Column name||Column description|
|Status||Current status of the VM|
|Name||Name of the VM|
|VNC Address||VNC address to access the booted VM|
|VNC Password||VNC password to access the booted VM|
|Boot Group||Name of the boot group to which the VM belongs|
|Preview||Preview thumbnail of the booted VM|
All special actions on the Active subtab are available via the toolbar on the top left, or the context menu of a VM.
|Action name||Action description|
|Power On||Boot the VM|
|Power Off||Shut down the VM|
|Reboot||Restart the VM|
|Reset||Force restart without the guest OS warning (can result in data loss). Useful if the guest operating system isn’t responding|
|Pull Plug||Force shutdown without the guest OS warning (can result in data loss). Useful if the guest operating system isn’t responding|
|Settings||View boot settings of the VM|
|Create Backup Job||Create a backup job from the current state of the VM 1. The job can then be restored or archived as usual|
|Delete||Delete the current state of the VM and remove it from the tab|
Interact with a booted client
You have two options to interact with a booted VM.
Using the browser-based VNC viewer
To connect to a booted VM using the browser-based VNC viewer: click the boot detection screenshot in the Preview column, or click the address link in the VNC Address column.
Using VNC client software
Run TightVNC Viewer. In the New TightVNC Connection window:
In the Remote Host box, enter the IP address (or the host name) of the CFA where the VM is booted, and the port to use for connection, in the format ip::port (for example, 203.0.113.5::5901). If you connect to a VM booted on the primary (local) CFA, you can find both the IP address (or the host name) and the port on the Active subtab (the VNC Address column). If you connect to a VM booted on the secondary (cloud) CFA, only the port is shown on the Active subtab (the VNC Address column). To get the address of the CFA (in the format ep-XXX.inf-YYY.myinfrascale.net), please contact Infrascale Support.
In the VNC Authentication window, enter the password for VNC connection. You can find the password for VNC connection on the Active subtab (the VNC Password column).
Backup jobs created from the Boot tab can’t be archived with the dehydrated archive option. ↩
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00020.warc.gz
CC-MAIN-2020-16
2,402
36
https://viralnews.literationclub.com/cirkus-trailer-review-is-the-comedy-really-that-funny/
code
The biggest release at the end of Bollywood's fiscal year will be #cirkus, starring #ranveersingh in the lead role, and this extravaganza will be directed by #rohitshetty. A full-on commercial potboiler from #bollywood that also stars #poojahegde #jacquelinefernandez #johnnylever #sanjaymishra #deepikapadukone and ties into the #golmaal franchise. I hope you like the Cirkus Trailer Review.
Songs provided by: NoCopyrightSounds
Watch: https://www.youtube.com/watch?v=q1ULJ …
Free Download & Stream: http://ncs.io/feelgood
What did you think of the video? Don't forget to smash the like button and subscribe to the channel for weekly content! Stay tuned for more ahead!
Follow me on instagram:
Anmol Jamwal: https://www.instagram.com/jammypants4/
Like our Facebook Page:
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499819.32/warc/CC-MAIN-20230130133622-20230130163622-00397.warc.gz
CC-MAIN-2023-06
844
10
http://sdn.sys-con.com/node/3169141
code
By Business Wire | September 3, 2014 07:00 AM EDT

F5 Networks (NASDAQ:FFIV) today introduced new Software Defined Application Services™ (SDAS™) for its physical and virtual BIG-IP® solutions, extending the company’s access, security, and acceleration technologies for the highest levels of scale and performance. Aligned with the F5 Synthesis™ architectural vision to provide customers with a comprehensive application delivery fabric to orchestrate IT capabilities, BIG-IP® version 11.6 software enables adaptable services that assure resources are deployed efficiently. This announcement extends the reach of F5’s Application Delivery Controller (ADC) portfolio and SDAS, augmenting capabilities across BIG-IP® product modules and providing further enhancement opportunities via F5’s growing technology partner ecosystem. With F5’s flexible application delivery platform, customers are empowered to advance their deployments in a number of compelling new ways, including:

- Bolstering Defenses Against Sophisticated Threats
Today’s organizations face a variety of attacks targeting multiple areas of the OSI stack (e.g., DDoS, zero-day, application-level attacks). Due in part to the vast number of applications in production environments, the range of specific deployment scenarios, and the vital role apps play for enterprises and service providers, application-related vulnerabilities continue to be a popular target for hackers. F5 delivers the industry’s strongest zero-day threat protection and the most complete bot defense capabilities, providing organizations with a programmable platform that can be tailored to specific infrastructure and business priorities. F5’s WebSafe™ module is now available, offering advanced, real-time protection against online fraud for every user, every device, and every browser—preventing attackers from spoofing, disabling, or otherwise bypassing security checks.

- Future-Proofing Application Delivery
To capitalize on the increasing performance, productivity, and efficiency advantages of next-generation applications, organizations must scale their infrastructures while identifying cost-effective ways to support new protocols. With the latest version of BIG-IP®, customers achieve simpler, faster, and more secure application performance—while delivering “hyperscale” DNS capabilities (up to 200x the scalability of competitive offerings) to extend and secure infrastructures by mitigating outages and keeping DNS services available. F5 is the only ADC vendor to support HTTP 2.0, providing up to 50% faster page load times than competitive solutions. F5 also offers 10x the scalability of other access solutions, empowering businesses with access capabilities to meet the IT challenges associated with the exponential growth of Internet-connected devices.

- Accelerating Cloud Migration
F5 is committed to helping customers take advantage of the efficiency and agility afforded by cloud solutions. With F5 solutions offered in both physical and virtual form-factors, SSL processing duties can be offloaded from F5’s virtual editions to hardware products. BIG-IP® now provides maximum SSL performance for hybrid deployments with the first and only hybrid crypto offload solution, removing a potential roadblock for organizations migrating to the cloud.
Improved protection for multi-tenant ADC environments is also provided with highly secure Virtual Clustered Multiprocessing™ (vCMP®) instances through enhanced resource isolation methods, guarding against tenant-to-host and tenant-to-tenant threats. In addition, F5 promotes accelerated, secure cloud migration with industry-leading REST API capabilities for web application firewall solutions.

“The move to F5 Synthesis architecture has meant we are able to offer our customers a higher quality of experience,” said Barry Kezik, General Manager Network Planning and Engineering, Vodafone Australia. “The DNS services in the BIG-IP platform allowed us to offer an improved service with a reduction in latency, due to F5 DNS’s caching capabilities.”

“Enterprises are feeling the need to adapt quickly to new protocols, applications, and cloud technologies in the aim of streamlining operations, adding efficiencies, and gaining competitive advantage,” said Karl Triebes, CTO and EVP of Product Development at F5. “Simultaneously, they’re being asked to scale infrastructures to support mobile devices and all types of content delivery—even in the face of more sophisticated security threats. These demands combined illustrate the need for a holistic approach to application delivery. With BIG-IP v11.6 supporting a comprehensive suite of services across data center, cloud, and hybrid environments, customers maintain maximum flexibility without sacrificing performance or security.”

BIG-IP® version 11.6 is available now, with features that enhance performance and security across F5’s product module portfolio. F5’s fraud protection solutions are currently available in the Americas and EMEA (Europe, Middle East, and Africa) regions, with availability in APJ (Asia Pacific and Japan) to follow. Please contact a local F5 sales office for additional product availability information pertaining to specific countries.

- BIG-IP Product Portfolio Details
- F5 Synthesis: Hybrid to the Core – DevCentral Blog Post
- Accelerating the Transition to Cloud – DevCentral Blog Post

F5 (NASDAQ: FFIV) provides solutions for an application world. F5 helps organizations seamlessly scale cloud, data center, and software defined networking (SDN) deployments to successfully deliver applications to anyone, anywhere, at any time. F5 solutions broaden the reach of IT through an open, extensible framework and a rich partner ecosystem of leading technology and data center orchestration vendors. This approach lets customers pursue the infrastructure model that best fits their needs over time. The world’s largest businesses, service providers, government entities, and consumer brands rely on F5 to stay ahead of cloud, security, and mobility trends. For more information, go to f5.com.

F5, F5 Synthesis, BIG-IP, WebSafe, Software Defined Application Services, SDAS, Virtual Clustered Multiprocessing, vCMP, and DevCentral are trademarks or service marks of F5 Networks, Inc., in the U.S. and other countries. This press release may contain forward-looking statements relating to future events or future financial performance that involve risks and uncertainties. Such statements can be identified by terminology such as "may," "will," "should," "expects," "plans," "anticipates," "believes," "estimates," "predicts," "potential," or "continue," or the negative of such terms or comparable terms.
These statements are only predictions and actual results could differ materially from those anticipated in these statements based upon a number of factors including those identified in the company's filings with the SEC.
s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276564.72/warc/CC-MAIN-20160524002116-00065-ip-10-185-217-139.ec2.internal.warc.gz
CC-MAIN-2016-22
17,538
63
https://www.24x7servermanagement.com/clients/index.php?rp=/knowledgebase/55/How-to-get-backup-data-recursively-from-the-Amazon-server.html
code
To fetch a backup recursively from the Amazon server, use this command:

    s3cmd get --recursive s3://Bucket/Backup/Server-x.x.x.x/Backup/account_name/ /Amazon

1. s3cmd is a tool for managing objects in Amazon S3 storage.
2. Bucket/Backup/Server-x.x.x.x/Backup/account_name/ is the backup directory.
3. /Amazon is the folder on your server into which the backup is downloaded.
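Roughly the same recursive fetch can be scripted with the AWS SDK. The boto3 sketch below is our illustration, not part of the original article; it reuses the placeholder bucket, server IP and paths from the command above, and assumes credentials are already configured in the environment:

    import os
    import boto3

    s3 = boto3.client("s3")
    bucket = "Bucket"
    prefix = "Backup/Server-x.x.x.x/Backup/account_name/"
    dest = "/Amazon"

    # List every object under the prefix and download it, mirroring
    # the directory structure locally.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith("/"):  # skip folder placeholder keys
                continue
            local = os.path.join(dest, os.path.relpath(key, prefix))
            os.makedirs(os.path.dirname(local), exist_ok=True)
            s3.download_file(bucket, key, local)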
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738950.31/warc/CC-MAIN-20200812225607-20200813015607-00433.warc.gz
CC-MAIN-2020-34
371
5
https://community.mcafee.com/thread/18871
code
Try going to your internet connection in Windows Explorer or on the Taskbar, right-clicking it and selecting Repair. Or try this quick fix: http://www.snapfiles.com/get/winsockxpfix.html There may have been some minor infection or a corrupted key present when McAfee was installed, which it may have removed as an infection during the installation process. There is also a Microsoft article here: http://support.microsoft.com/?kbid=299357 Another fix here: http://www.cexx.org/lspfix.htm Sorry to take so long to answer you, but we are rather busy.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512121.15/warc/CC-MAIN-20171211033436-20171211053436-00306.warc.gz
CC-MAIN-2017-51
542
6
http://inherittheearth.net/
code
August 21, 2014 - Joe Pearce As you may know, the Kickstarter for Inherit the Earth: Sand and Shadows did not succeed. I am evaluating possible next plans on that front. The comic will be going on a short hiatus, but should return in September. Update (Sept. 16): I am working on the script for the next "tale", which will be a one-shot. Plus, I need to modify the site code to integrate the Inherit the Earth: Quest for the Orb recap strips into the archives. This is non-trivial.
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657123274.33/warc/CC-MAIN-20140914011203-00245-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
CC-MAIN-2014-41
537
6
https://www.my.freelancer.com/projects/website-design-graphic-design/need-logo-put-website/
code
I need help in creating a logo for a page in my website. Also, I would like to own the rights to this design. The design is a piggy bank, but instead of a pig I need it to look like a cat. Thanks

26 freelancers are bidding an average of $76 for this job

Hello, I'll make you the perfect logo for your website. Please refer to the details from the private message and take a look at my portfolio. Regards, logodoc
s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647584.56/warc/CC-MAIN-20180321063114-20180321083114-00067.warc.gz
CC-MAIN-2018-13
418
4
https://numaflow.numaproj.io/user-guide/user-defined-functions/reduce/windowing/fixed/
code
Fixed windows (sometimes called tumbling windows) are defined by a static window size, e.g. 30-second windows, one-minute windows, etc. They are generally aligned, i.e. every window applies across all the data for the corresponding period of time. A fixed window has a fixed size measured in time and does not overlap; an element which belongs to one window will not belong to any other tumbling window. For example, a window size of 20 seconds will include all entities of the stream which came in a certain 20-second interval.

To enable a fixed window, we use:

    vertices:
      - name: my-udf
        udf:
          groupBy:
            window:
              fixed:
                length: duration

NOTE: A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

length is the window size of the fixed window. A 60-second window size can be defined as follows:

    vertices:
      - name: my-udf
        udf:
          groupBy:
            window:
              fixed:
                length: 60s

The YAML snippet above contains an example spec of a reduce vertex that uses fixed-window aggregation. As we can see, the length of the window is 60s. This means only one window will be active at any point in time. It is also possible to have multiple inactive and non-empty windows (based on out-of-order arrival of elements).

The window boundaries for the first window (post bootstrap) are determined by rounding down from time.now() to the nearest multiple of the length of the window. So, considering the above example, if time.now() corresponds to 2031-09-29T18:46:30Z, then the start time of the window will be adjusted to 2031-09-29T18:46:00Z and the end time is set accordingly to 2031-09-29T18:47:00Z. Windows are left-inclusive and right-exclusive, which means an element with an event time (considering the event time characteristic) of 2031-09-29T18:47:00Z will belong to the window with boundaries [2031-09-29T18:47:00Z, 2031-09-29T18:48:00Z). It is important to note that, because of this property, for a constant throughput the first window may contain fewer elements than other windows.
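Since the boundary rule is just modular arithmetic, it is easy to sanity-check outside Numaflow; this small Python sketch (our illustration, not Numaflow code) reproduces the example boundaries:

    from datetime import datetime, timezone

    def fixed_window(ts_seconds: int, length_seconds: int):
        # Round down to the nearest multiple of the window length;
        # windows are left-inclusive and right-exclusive.
        start = ts_seconds - (ts_seconds % length_seconds)
        return start, start + length_seconds

    now = int(datetime(2031, 9, 29, 18, 46, 30, tzinfo=timezone.utc).timestamp())
    start, end = fixed_window(now, 60)
    print(datetime.fromtimestamp(start, timezone.utc))  # 2031-09-29 18:46:00+00:00
    print(datetime.fromtimestamp(end, timezone.utc))    # 2031-09-29 18:47:00+00:00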
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506676.95/warc/CC-MAIN-20230925015430-20230925045430-00376.warc.gz
CC-MAIN-2023-40
2,026
19
http://forums.informationbuilders.com/eve/forums/a/tpc/f/7971057331/m/2097052396/xsl/print_topic
code
I'm aware of "this.window.name" to get the name of the panel. But I have a component that is in a banner so having the name of the panel won't work.This message has been edited. Last edited by: FP Mod Chuck, April 28, 2020, 06:12 PM Using the browsers Developer Tools, inspect the tab and see if there is a unique class name. HTML, PDF, Excel, PPT In Focus since 1984 Pity the lost knowledge of an old programmer! April 28, 2020, 06:25 PM ok thank you waz. I'll look into this. April 29, 2020, 04:39 AM You might(?) need to have a reasonable understanding of DOM structure and its navigation. Look into "parent." In FOCUS since 1986 WebFOCUS Server 8.2.01M, thru 8.2.07 on Windows Svr 2008 R2 WebFOCUS App Studio 8.2.06 standalone on Windows 10
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147917.99/warc/CC-MAIN-20200714020904-20200714050904-00599.warc.gz
CC-MAIN-2020-29
744
13
https://homelandsecurity.ideascale.com/a/ideas/popular_tags/campaign-filter/active/stage/unspecified/tags/cybersecurity
code
Need infiltration of BLM to find evidence of criminal collusion, inciting riots and domestic terrorism. Need to go after supporters' funding, as well as social media criminal collusion which provides meeting places and instruction. And also find out the payment methods used to pay violent protesters. Audit money flow and check for tax crimes.

Here's my proposal for a much more secure voting system!!! Here's my proposal of the Blockchain Voting System for moving our voting system closer to true Democracy & IT CAN BE DONE FROM HOME! Between Republican Gerrymandering, Voter Suppression, Voter IDs & Voter Registration suspensions along with their move to...

Imagine a hacker who installed a RAT (Remote Access Trojan / Remote Administration Tool) on Alice's PC and so he can see everything (screen captures and keylogging) that happens...

I have a number of intellectual properties (www.danmimis.com) including...

With all due respect, Eric Goldstein's priorities are out of order. The first priority is the defense of our network with zero trust principles....
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621519.32/warc/CC-MAIN-20210615180356-20210615210356-00137.warc.gz
CC-MAIN-2021-25
1,075
9
https://www.sparrho.com/item/banach-spaces-from-a-construction-scheme/8f0085/
code
Indexed on: 03 Feb '16. Published on: 03 Feb '16. Published in: Mathematics - Logic. We construct a Banach space $\mathcal X_\varepsilon$ with an uncountable $\varepsilon$-biorthogonal system but no uncountable $\tau$-biorthogonal system for $\tau<\varepsilon$. In particular the space has no uncountable biorthogonal system. We also construct a Banach space $\mathcal X_K$ with an uncountable $K$-basic sequence but no uncountable $K'$-basic sequence, for $1\leq K'<K$. A common feature of these examples is that they are both constructed by recursive amalgamations using a single construction scheme.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107865665.7/warc/CC-MAIN-20201023204939-20201023234939-00469.warc.gz
CC-MAIN-2020-45
598
2
https://www.coderanch.com/u/219843/rushikesh-sawant
code
ujjawal, your method definition says that it can accept the Number type or its supertypes only, but the list you are passing to that method says that it can accept the Number type or any of its subtypes as well. Since generics are a compile-time protection, the compiler cannot make sure that you will pass only the Number type to that method; hence the compiler error.

Programming using OOP concepts makes life easier for software people. It is easy to debug and test, and most importantly it provides component reuse, which is essential for faster development and for coping with changing requirements in software development.

Thanks for the reply. So does it mean that elements in a PriorityQueue are not stored in a sorted manner of any kind, and it's just the order in which they get removed that is determined by either the natural order or by a comparator?
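The answer to that last question is yes on both counts. Java's PriorityQueue is a binary heap, and the same behaviour is easy to demonstrate with Python's heapq (used here only to illustrate the shared heap idea, since this is a Java thread): the backing storage satisfies just the heap property, so it is not sorted, yet repeated removal yields elements in priority order.

    import heapq

    items = [7, 1, 5, 3, 9, 2]
    heap = []
    for x in items:
        heapq.heappush(heap, x)

    # The raw storage is only heap-ordered, not sorted:
    print(heap)  # [1, 3, 2, 7, 9, 5]
    # ...but popping drains it in priority order:
    print([heapq.heappop(heap) for _ in range(len(heap))])  # [1, 2, 3, 5, 7, 9]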
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100769.54/warc/CC-MAIN-20231208180539-20231208210539-00575.warc.gz
CC-MAIN-2023-50
1,090
6
http://geekaustin.org/news/dan-tecuci-ibm-speaking-nlp-community-day
code
Dan Tecuci of IBM speaking at NLP Community Day

Dan graduated from UT Austin with a PhD in Artificial Intelligence. During that time he contributed to the development of several large-scale AI projects: Project Halo - knowledge acquisition and question answering in scientific domains (funded by Paul Allen), RKF, and CALO (precursor of Siri). He then moved to Siemens Corporate Research, where he led the development and deployment of a natural language QA system for Siemens Energy Service. Also at Siemens, he developed a prototype system for accurately diagnosing heart disease from patient data. Dan joined IBM Watson in 2014, where he led the development of question answering from tables and then moved on to fixing recipes for IBM Chef Watson. His main areas of expertise are knowledge representation and reasoning, question answering, NLP, and complex knowledge indexing and retrieval. He now works for Watson Health, where he is applying learning and reasoning techniques to problems in the Life Sciences domain. For more details, check out the NLP Community Day page.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124371.40/warc/CC-MAIN-20170423031204-00513-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,076
3
http://www.tomshardware.com/forum/244704-33-playing-games
code
As I started to read this post, I chuckled to myself. When I first got started in home computers, it was with a Commodore 64, and like my brother's Apple IIe, we used a TV screen at first. It seemed like a very happy day when computer monitors finally got around to delivering good color, that is, more than 8 or 16 colors. Now things are starting to swing back to the idea of using a TV screen instead of a dedicated monitor. What's that old saying, "The more things change, the more they stay the same"? Anyway, as Cleeve and others have said, yes, it's possible to use your TV if the TV is new enough and you have the proper outputs on your video card. I've done it myself a couple of times, but I decided I liked the image displayed on my LCD computer monitor better. If anything, I tend to use my computer and its widescreen monitor to watch movies rather than use my TV to play games. Just my opinion, and thanks for the nostalgic ride back to the past. Things sometimes seemed so much simpler then.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948511435.4/warc/CC-MAIN-20171210235516-20171211015516-00320.warc.gz
CC-MAIN-2017-51
1,002
2
https://www.wowebook.org/machine-learning-with-pyspark/
code
Machine Learning with PySpark

- Paperback: 223 pages
- Publisher: WOW! eBook; 1st edition (December 15, 2018)
- Language: English
- ISBN-10: 1484241304
- ISBN-13: 978-1484241301

Machine Learning with PySpark: With Natural Language Processing and Recommender Systems

Build machine learning models, natural language processing applications, and recommender systems with PySpark to solve various business challenges. This book starts with the fundamentals of Spark and its evolution and then covers the entire spectrum of traditional machine learning algorithms along with natural language processing and recommender systems using PySpark. This book shows you how to build supervised machine learning models such as linear regression, logistic regression, decision trees, and random forest. You'll also see unsupervised machine learning models such as K-means and hierarchical clustering. A major portion of the book focuses on feature engineering to create useful features with PySpark to train the machine learning models. The natural language processing section covers text processing, text mining, and embedding for classification.

What You Will Learn
- Build a spectrum of supervised and unsupervised machine learning algorithms
- Implement machine learning algorithms with Spark MLlib libraries
- Develop a recommender system with Spark MLlib libraries
- Handle issues related to feature engineering, class balance, bias and variance, and cross validation for building an optimal fit model

After reading this book, you will understand how to use PySpark's machine learning library to build and train various machine learning models. Additionally you'll become comfortable with related PySpark components, such as data ingestion, data processing, and data analysis, that you can use to develop data-driven intelligent applications.
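As a taste of the workflow the book covers (this sketch is ours, not an excerpt from the book), a typical pyspark.ml pipeline assembles feature columns into a vector and fits an estimator on the result; the column names and data below are made up:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("sketch").getOrCreate()

    # Hypothetical two-feature dataset; real data would come from a file.
    df = spark.createDataFrame(
        [(1.0, 2.0, 0), (2.0, 1.0, 1), (3.0, 4.0, 0), (4.0, 3.0, 1)],
        ["f1", "f2", "label"],
    )

    assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
    train = assembler.transform(df)
    model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
    model.transform(train).select("label", "prediction").show()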
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816879.72/warc/CC-MAIN-20240414130604-20240414160604-00836.warc.gz
CC-MAIN-2024-18
1,839
15
https://mrsteel.wordpress.com/2006/12/14/simple-paging-in-php/
code
My first tutorial for PHP is simple page breaking of data. Basically, I've set up a text file which contains information about images. The data file looks like:

img1.jpg, Picture from sea
img2.jpg, Picture from cafe

and so on... The PHP page loads the data and shows info for only 10 pictures per page, with the page chosen via a GET variable in the URL. It's just basic stuff and you can improve it to suit your needs; this is a simple example so we don't get too far from the point, and so you can learn other stuff while exploring this example. EDIT: Links are updated! Sorry to keep you waiting...
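The tutorial's code is PHP, but the arithmetic at its heart is language-neutral: page N (1-based, read from the GET variable) maps to one fixed-size slice of the data. A Python sketch of the same idea, with made-up data standing in for the text file:

    PER_PAGE = 10

    def page_slice(items, page):
        # Page 1 covers items[0:10], page 2 covers items[10:20], and so on.
        start = (page - 1) * PER_PAGE
        return items[start:start + PER_PAGE]

    lines = [f"img{i}.jpg, Picture {i}" for i in range(1, 35)]
    print(page_slice(lines, 2))  # the second page: items 11-20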
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590559.95/warc/CC-MAIN-20180719051224-20180719071224-00043.warc.gz
CC-MAIN-2018-30
587
7
https://depotcatalog.com/ssh-add-error-connecting-to-agent-windows-10/
code
How do I start an ssh-agent in Windows 10? To start the service for the first time without rebooting the system, run the following command: start-ssh-agent.cmd. Configure each SSH key pair to access a specific remote Git provider. You can add a passphrase or leave the field blank; the key's randomart image will be displayed to confirm creation.

How do I enable ssh-agent in Windows? Open Settings, select Apps > Apps & features, then navigate to Optional features. Look through the list to see if OpenSSH is installed. If not, at the top of the page select "Add a feature", then: locate OpenSSH Client and click "Install"; find OpenSSH Server and click "Install".

How do I enable the OpenSSH Authentication Agent? First, open Services (Start Menu -> type "Services"). Then find the OpenSSH Authentication Agent service. Finally, set its Startup type to Automatic.

How do I use ssh-agent and ssh-add? On Unix, type: eval `ssh-agent`. Make sure you use backticks ( ` ) and not single quotes ( ' ). Enter the command: ssh-add. Enter your private key's passphrase. When exiting, enter the command: kill $SSH_AGENT_PID.

How do I get ssh-agent on Windows 10? PowerShell users previously installed OpenSSHUtils to help manage keys for ssh-agent. Windows 10 already ships an OpenSSH Authentication Agent service, which is disabled by default.

Why is my ssh agent not working? The SSH agent is not running, or the environment variables it sets are not available in the current environment (including SSH_AUTH_SOCK), or they are set incorrectly (pointing to a dead agent). (Replace bash with whatever shell you use.)

Why can't ssh-add open a connection to my authentication agent? $ ssh-add: Could not open a connection to your authentication agent. This appears to be caused by running /usr/bin/ssh-add instead of C:\Windows\System32\OpenSSH\ssh-add. To fix that, I tried using the full path, which gives me exactly the same error as before.

How to add a new SSH key to the ssh-agent? Make sure ssh-agent is running, then add your SSH private key to the ssh-agent. If you used an existing SSH key instead of creating a new SSH key, pass the name of your existing private key file to ssh-add instead of id_rsa. Then add the SSH key to your GitHub account.

How do I use the SSH agent and ssh-add? To use ssh-agent and ssh-add, follow these steps:
- At the Unix prompt, type: eval `ssh-agent`. Make sure you use backticks ( ` ) and not single quotes ( ' ).
- At the command prompt, enter: ssh-add.
- Enter the passphrase for your private key.
- When you log out, enter the command: kill $SSH_AGENT_PID.

How to add your SSH key to the ssh-agent? Adding an SSH key to the SSH agent: 1. Make sure the SSH agent is running. You can follow the "ssh-agent autostart" notes under "Working with SSH key passphrases" or start it manually: eval `ssh-agent`. 2. Add your SSH private key to the ssh-agent. 3. Add the SSH key to your GitHub account.

Charles Howell is a freelance writer and editor. He has been writing about consumer electronics, how-to guides, and the latest news in the tech world for over 10 years. His work has been featured on a variety of websites, including techcrunch.com, where he is a contributor. When he's not writing or spending time with his family, he enjoys playing tennis and exploring new restaurants in the area.
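When the agent connection keeps failing, a small script can narrow down which of the situations above applies. This Python sketch is our illustration, not from the article; it checks SSH_AUTH_SOCK and interprets ssh-add's documented exit codes (0: keys listed, 1: agent reachable but holding no identities, 2: cannot contact the agent). On Windows the agent listens on the named pipe \\.\pipe\openssh-ssh-agent rather than a Unix socket, so only the exit-code check is meaningful there.

    import os
    import subprocess

    sock = os.environ.get("SSH_AUTH_SOCK")
    if not sock:
        print("SSH_AUTH_SOCK is not set: the agent is not running, or its "
              "environment variables were never exported into this shell.")
    result = subprocess.run(["ssh-add", "-l"], capture_output=True, text=True)
    if result.returncode == 2:
        print("ssh-add cannot contact the agent (dead or wrong socket/pipe).")
    elif result.returncode == 1:
        print("Agent is reachable but holds no identities; run ssh-add.")
    else:
        print(result.stdout, end="")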
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00308.warc.gz
CC-MAIN-2022-49
3,516
34
https://plus.maths.org/content/comment/710
code
Over the millennia, many mathematicians have hoped that mathematics would one day produce a Theory of Everything (TOE); a finite set of axioms and rules from which every mathematical truth could be derived. But in 1931 this hope received a serious blow: Kurt Gödel published his famous Incompleteness Theorem, which states that in every mathematical theory, no matter how extensive, there will always be statements which can't be proven to be true or false. Gregory Chaitin has been fascinated by this theorem ever since he was a child, and now, in time for the centenary of Gödel's birth in 2006, he has published his own book, called Meta Math! on the subject (you can read a review in this issue of Plus). It describes his journey, which, from the work of Gödel via that of Leibniz and Turing, led him to the number Omega, which is so complex that no mathematical theory can ever describe it. In this article he explains what Omega is all about, why maths can have no Theory of Everything, and what this means for mathematicians. My story begins with Leibniz in 1686, the year before Newton published his Principia. Due to a snow storm, Leibniz is forced to take a break in his attempts to improve the water pumps for some important German silver mines, and writes down an outline of some of his ideas, now known to us as the Discours de métaphysique. Leibniz then sends a summary of the major points through a mutual friend to the famous fugitive French philosophe Arnauld, who is so horrified at what he reads that Leibniz never sends him, nor anyone else, the entire manuscript. It languishes among Leibniz's voluminous personal papers and is only discovered and published many years after Leibniz's death. In sections V and VI of the Discours de métaphysique, Leibniz discusses the crucial question of how we can distinguish a world which can be explained by science from one that cannot. How do we tell whether something we observe in the world around us is subject to some scientific law or just patternless and random? Imagine, Leibniz says, that someone has splattered a piece of paper with ink spots, determining in this manner a finite set of points on the page. Leibniz observes that, even though the points were splattered randomly, there will always be a mathematical curve that passes through this finite set of points. Indeed, many good ways to do this are now known. For example, what is called "Lagrangian interpolation" will do. So the existence of a mathematical curve passing through a set of points cannot enable us to distinguish between points that are chosen at random and those that obey some kind of a scientific law. How, then, can we tell the difference? Well, says Leibniz, if the curve that contains the points must be extremely complex ("fort composée"), then it's not much use in explaining the pattern of the ink splashes. It doesn't really help to simplify matters and therefore isn't valid as a scientific law — the points are random ("irrégulier"). The important insight here is that something is random if any description of it is extremely complex — randomness is complexity. Leibniz had a million other interests and earned a living as a consultant to princes, and as far as I know after having this idea he never returned to this subject. Indeed, he was always tossing out good ideas, but rarely, with the notable exception of the infinitesimal calculus, had the time to develop them in depth. 
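Leibniz's observation, that some curve always passes through any finite set of points, is easy to check numerically. A short sketch (our illustration, using NumPy's polynomial fit rather than the Lagrange form explicitly) fits an exact polynomial through random "ink spots":

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(0, 10, 6))  # six random spots with distinct x values
    y = rng.uniform(0, 10, 6)

    # A degree n-1 polynomial always interpolates n such points exactly.
    coeffs = np.polyfit(x, y, len(x) - 1)
    print(np.allclose(np.polyval(coeffs, x), y))  # True: the curve hits every spot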
The next person to take up this subject, as far as I know, is Hermann Weyl in his 1932 book The Open World, consisting of three lectures on metaphysics that Weyl gave at Yale University. In fact, I discovered Leibniz's work on complexity and randomness by reading this little book by Weyl. And Weyl points out that Leibniz's way of distinguishing between points that are random and those that follow a law by invoking the complexity of a mathematical formula is unfortunately not too well defined: it depends on what functions you are allowed to use in writing that formula. What is complex to one person at one particular time may not appear to be complex to another person a few years later — defined in this way, complexity is in the eye of the beholder.

What is complexity?

Well, the field that I invented in 1965, and which I call algorithmic information theory, provides a possible solution for the problem of how to measure complexity. The main idea is that any scientific law, which explains or describes mathematical objects or sets of data, can be turned into a computer program that can compute the original object or data set.

[Image: Gottfried von Leibniz]

Say, for example, that you haven't splattered the ink on the page randomly, but that you've carefully placed the spots on a straight line which runs through the page, each spot exactly one centimetre away from the previous one. The theory describing your set of points would consist of four pieces of information: the equation for the straight line, the total number of spots, the precise location of the first spot, and the fact that the spots are one centimetre apart. You can now easily write a computer program, based on this information, which computes the precise location of each spot. In algorithmic information theory, we don't just say that such a program is based on this underlying theory, we say it is the theory.

This gives a way of measuring the complexity of the underlying object (in this case our ink stains): it is simply the size of the smallest computer program that can compute the object. The size of a computer program is the number of "bits" it contains: as you will know, computers store their information in strings of 0s and 1s, and each 0 or 1 is called a "bit". The more complicated the program, the longer it is and the more bits it contains. If something we observe is subject to a scientific law, then this law can be encoded as a program. What we desire from a scientific law is that it be simple — the simpler it is, the better our understanding, and the more useful it is. And its simplicity — or lack of it — is reflected in the length of the program. In our example, the complexity of the ink stains is precisely the length in bits of the smallest computer program which comprises our four pieces of information and can compute the location of the spots. In fact, the ink spots in this case are not very complex at all.

We have added two ideas to Leibniz's 1686 proposal. First, we measure complexity in terms of bits of information, i.e. 0s and 1s. Second, instead of mathematical equations, we use binary computer programs. Crucially, this enables us to compare the complexity of a scientific theory (the computer program) with the complexity of the data that it explains (the output of the computer program, the location of our ink stains). As Leibniz observed, for any data there is always a complicated theory, which is a computer program that is the same size as the data. But that doesn't count.
It is only a real theory if there is compression, if the program is much smaller than its output, both measured in 0/1 bits. And if there can be no proper theory, then the bit string is called algorithmically random or irreducible. That's how you define a random string in algorithmic information theory. Let's look at our ink stains again. To know where each spot is, rather than writing down its precise location, you're much better off remembering the four pieces of information. They give a very efficient theory which explains the data. But what if you place the ink spots in a truly random fashion, by looking away and flicking your pen? Then a computer program which can compute the location of each spot for you has no choice but to store the co-ordinates that give you each location. It is just as long as its output and doesn't simplify your data set at all. In this case, there is no good theory, the data set is irreducible, or algorithmically random. I should point out that Leibniz had the two key ideas that you need to get this modern definition of randomness, he just never made the connection. For Leibniz produced one of the first calculating machines, which he displayed at the Royal Society in London, and he was also one of the first people to appreciate base-two binary arithmetic and the fact that everything can be represented using only 0s and 1s. So, as Martin Davis argues in his book The Universal Computer: The Road from Leibniz to Turing, Leibniz was the first computer scientist, and he was also the first information theorist. I am sure that Leibniz would have instantly understood and appreciated the modern definition of randomness. I should also mention that A. N. Kolmogorov also proposed this definition of randomness. He and I did this independently in 1965. Kolmogorov was at the end of his career, and I was a teenager at the beginning of my own career as a mathematician. As far as I know, neither of us was aware of the Leibniz Discours. But Kolmogorov never realized, as I did, that the really important application of these ideas was the new light that they shed on Gödel's incompleteness theorem and on Alan Turing's famous halting problem. So let me tell you about that now. I'll tell you how my Omega number possesses infinite complexity and therefore cannot be explained by any finite mathematical theory. This shows that in a sense there is randomness in pure mathematics, and that there cannot be any TOE. Omega is so complex because its definition is based on an unsolvable problem — Turing's halting problem. Let's have a look at this now. Turing's halting problem In 1936, Alan Turing stunned the mathematical world by presenting a model for the first digital computer, which is today known as the Turing Machine. And as soon as you start thinking about computer programs, you are faced with the following, very basic question: given any program, is there an algorithm, a sure-fire recipe, which decides whether the program will eventually stop, or whether it'll keep on running forever? Let's look at a couple of examples. Suppose your program consists of the instruction "take every number between 1 and 10, add 2 to it and then output the result". It's obvious that this program halts after 10 steps. If, however, the instructions are "take a number x, which is not negative, and keep multiplying it by 2 until the result is bigger than 1", then the program will stop as long as the input x is not 0. If it is 0, it will keep going forever. 
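The two toy programs just described, written out as Python sketches (ours, not Turing's formulation):

    def program_one():
        # Takes every number between 1 and 10, adds 2, outputs the result.
        # Obviously halts after ten steps.
        for n in range(1, 11):
            print(n + 2)

    def program_two(x):
        # Keeps doubling a non-negative x until the result exceeds 1.
        # Halts for any x > 0, but loops forever when x == 0,
        # because 0 * 2 is still 0.
        while x <= 1:
            x = x * 2
        return x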
In these two examples it is easy to see whether the program stops or not. But what if the program is much more complicated? Of course you can simply run it and see if it stops, but how long should you wait before you decide that it doesn't? A week, a month, a year? The basic question is whether there is a test which in a finite amount of time decides whether or not any given program ever halts. And, as Turing proved, the answer is no.

What is Omega?

Now, instead of looking at individual instances of Turing's famous halting problem, you just put all possible computer programs into a bag, shake it well, pick out a program, and ask: "what is the probability that it will eventually halt?". This probability is the number Omega. An example will make this clearer: suppose that in the whole wide world there are only two programs that eventually halt, and that these programs, when translated into bit strings, are 11001 and 101. Picking one of these at random is the same as randomly generating these two bit strings. You can do this by tossing a coin and writing down a 1 if heads comes up, and a 0 if tails comes up, so the probability of getting a particular bit is 1/2. This means that the probability of getting 11001 is $1/2 \times 1/2 \times 1/2 \times 1/2 \times 1/2 = 1/2^5$, and the probability of getting 101 is $1/2 \times 1/2 \times 1/2 = 1/2^3$. So the probability of randomly choosing one of these two programs is $1/2^5 + 1/2^3 = 5/32$. Of course, in reality there are a lot more programs that halt, and Omega is the sum of lots of terms of the form $1/2^N$, one for each halting program that is $N$ bits long. Also, when defining Omega, you have to make certain restrictions on which types of programs are valid, to avoid counting things twice, and to make sure that Omega does not become infinitely large. Anyway, once you do things properly you can define a halting probability Omega between zero and one. Omega is a perfectly decent number, defined in a mathematically rigorous way. The particular value of Omega that you get depends on your choice of computer programming language, but its surprising properties don't depend on that choice.

Why is Omega irreducible?

And what is the most surprising property of Omega? It's the fact that it is irreducible, or algorithmically random, and that it is infinitely complex. I'll try to explain why this is so: like any number we can, theoretically at least, write Omega in binary notation, as a string of 0s and 1s. In fact, Omega has an infinite binary expansion, just as the square root of two has an infinite decimal expansion, $\sqrt{2} = 1.4142135...$. Now the square root of two can be approximated to any desired degree of accuracy by one of many algorithms. Newton's iteration, for example, uses the formula $x_{n+1} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right)$, which, starting from $x_1 = 1$, converges rapidly to $\sqrt{2}$. Is there a similar finite program that can compute all the bits in the binary expansion of Omega? Well, it turns out that knowing the first $N$ bits of Omega gives you a way of solving the halting problem for all programs up to $N$ bits in size. So, if you had a finite program that could work out all the bits of Omega, you would also have a finite program that could solve the halting problem for all programs, no matter what size. But this, as we know, is impossible. So such a program cannot exist. According to our definition above, Omega is irreducible, or algorithmically random. It cannot be compressed into a smaller, finite theory. Even though Omega has a very precise mathematical definition, its infinitely many bits cannot be captured in a finite program — they are just as "bad" as a string of infinitely many bits chosen at random. In fact, Omega is maximally unknowable. Even though it is precisely defined once you specify the programming language, its individual bits are maximally unknowable, maximally irreducible.
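The arithmetic of the two-program example can be checked with exact fractions. This is a toy computation only; the real Omega is uncomputable, and here the set of halting programs is simply assumed:

    from fractions import Fraction

    halting_programs = ["11001", "101"]  # the two hypothetical halting programs
    omega_toy = sum(Fraction(1, 2 ** len(p)) for p in halting_programs)
    print(omega_toy)  # 5/32, i.e. 1/2**5 + 1/2**3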
Why does maths have no TOEs? This question is now easy to answer. A mathematical theory consists of a set of "axioms" — basic facts which we perceive to be self-evident and which need no further justification — and a set of rules about how to draw logical conclusions. So a Theory of Everything would be a set of axioms from which we can deduce all mathematical truths and derive all mathematical objects. It would also have to have finite complexity, otherwise it wouldn't be a theory. Since it's a TOE it would have to be able to compute Omega, a perfectly decent mathematical object. The theory would have to provide us with a finite program which contains enough information to compute any one of the bits in Omega's binary expansion. But this is impossible because Omega, as we've just seen, is infinitely complex — no finite program can compute it. There is no theory of finite complexity that can deduce Omega. So this is an area in which mathematical truth has absolutely no structure, no structure that we will ever be able to appreciate in detail, only statistically. The best way of thinking about the bits of Omega is to say that each bit has probability 1/2 of being zero and probability 1/2 of being one, even though each bit is mathematically determined. That's where Turing's halting problem has led us, to the discovery of pure randomness in a part of mathematics. I think that Turing and Leibniz would be delighted at this remarkable turn of events. Gödel's incompleteness theorem tells us that within mathematics there are statements that are unknowable, or undecidable. Omega tells us that there are in fact infinitely many such statements: whether any one of the infinitely many bits of Omega is a 0 or a 1 is something we cannot deduce from any mathematical theory. More precisely, any maths theory enables us to determine at most finitely many bits of Omega. Where does this leave us? Now I'd like to make a few comments about what I see as the philosophical implications of all of this. These are just my views, and they are quite controversial. For example, even though a recent critical review of two of my books in the Notices of the American Mathematical Society does not claim that there are any technical mistakes in my work, the reviewer strongly disagrees with my philosophical conclusions, and in fact he claims that my work has no philosophical implications whatsoever. So these are just my views, they are certainly not a community consensus, not at all. Is maths an experimental science? My view is that Omega is a much more disagreeable instance of mathematical incompleteness than the one found by Gödel in 1931, and that it therefore forces our hand philosophically. In what way? Well, in my opinion, in a quasi-empirical direction, which is a phrase coined by Imre Lakatos when he was doing philosophy in England after leaving Hungary in 1956. In my opinion, Omega suggests that even though maths and physics are different, perhaps they are not as different as most people think. To put it bluntly, if the incompleteness phenomenon discovered by Gödel in 1931 is really serious — and I believe that Turing's work and my own work suggest that incompleteness is much more serious than people think — then perhaps mathematics should be pursued somewhat more in the spirit of experimental science rather than always demanding proofs for everything. 
Maybe, rather than attempting to prove results such as the celebrated Riemann hypothesis, mathematicians should accept that they may not be provable and simply accept them as an axiom. At any rate, that's the way things seem to me. Perhaps by the time we reach the centenary of Turing's death in 2054, this quasi-empirical view will have made some headway, or perhaps instead these foreign ideas will be utterly rejected by the immune system of the maths community. For now they certainly are rejected. But the past fifty years have brought us many surprises, and I expect that the next fifty years will too, a great many indeed.

About the author

Gregory Chaitin is at the IBM Thomas J. Watson Research Center in Yorktown Heights, New York, and is an honorary professor at the University of Buenos Aires and a visiting professor at the University of Auckland. The author of nine books, he is also a member of the International Academy of the Philosophy of Science, as well as the Honorary President of the Scientific Committee of the Institute of Complex Systems in Valparaiso, Chile. His latest book, Meta Math!, published by Pantheon in New York, is intended as popular science.

You have got an error in the sentence: "This means that the probability of getting 101 is $1/2 \times 1/2 \times 1/2 \times 1/2 \times 1/2 = 1/2^5.$" The right sentence is: This means that the probability of getting 101 is $1/2 \times 1/2 \times 1/2 = 1/2^3.$

Thanks for pointing out the error, it's been corrected!

"Second, instead of mathematical equations, we use binary computer programs." That substitution is extremely domain-narrowing. Modern mathematics has happily secluded itself within the borders it drew for itself - Godel's incompleteness, etc... After that, it has even more happily narrowed these borders through the substitution mentioned above. The purpose of borders is to be transcended. Leibniz didn't happily seclude himself into counting tortoise steps ahead of Achilles' steps. No. Instead he transcended the borders of the sequential counting process. In the same way, modern mathematics should strive to transcend the borders of the sequentiality imposed by the natural numbers (i.e. Godel's incompleteness), the Turing machine and the like... With best regards, an MS in Mathematics with high GPA.

... because we can show that they are mathematically equivalent.

At last -kins and -king post-named mans starts with antireligious TOE-production...

While I don't necessarily think there is (or should be) a TOE, Omega does not serve as proof for the non-existence of a TOE. This is simply because a TOE is not at all required to allow deriving Omega from it. Omega is defined in domain terms, broadly taking from computer science. We can come up with other mathematically rigorous definitions of numbers standing for philosophical issues that cannot be computed. These are applications. Mathematics has no interest in that.

I agree that Chaitin has not provided a proof that there is no TOE. Furthermore, almost none of what he says is well-defined. Omega itself is not a number, but rather a function of an arbitrary universal Turing Machine. Chaitin says that the Godel Sentence is true but for no reason, since Mathematics is actually random, so there is no proof of it. But the Godel Sentence is true because of how it is constructed, and we can in fact prove it true - Godel proves it true or else his article would be worthless, a theorem without a proof - we simply can't prove it using Godel's formal system.
Godel himself says that what his formal system cannot prove can be proven using metamathematics. Chaitin says that he has a better proof of incompleteness than Godel, but Rosser already did that by proving a stronger theorem. Godel's proof requires w-consistency, but Rosser's proof works with any consistent system, which includes all w-consistent systems and also others. It is a stronger result. So it makes no sense to offer more proofs of a weaker theorem. Rosser's theorem is stronger. Chaitin says that Omega is the chance that a random Turing Machine will halt. Whatever way he defines a number, it cannot be the probability that a random Turing Machine will halt because there is no such probability. The notion of that being a probability is not well-defined. We can easily construct a Turing Machine (program) that halts for the first few inputs, loops on the next inputs for a lot more inputs, halts on the next inputs for even more inputs etc. so the chance that it halts fluctuates between 1/3 and 2/3, depending on how many inputs you consider. It diverges rather than converges. Chaitin says that he learned Godel's proof as a child, but he has never discussed the actual proof based on w-consistency, or even mentioned Rosser's proof. Furthermore, even when he talks about the far simpler Turing proof of the Unsolvability of the Halting Problem, he gets it wrong. He says that a program that would tell if another program halts could be run on itself. But that program has an input, while the input of that program is a single program with no input. What Turing actually defined was a program that halts if its input does not halt on itself, and loops if its input does halt on itself. The input is a single program because that program's input is itself. "We can easily construct a Turing Machine (program) that halts for the first few inputs, loops on the next inputs for a lot more inputs, halts on the next inputs for even more inputs etc. so the chance that it halts fluctuates between 1/3 and 2/3, depending on how many inputs you consider. It diverges rather than converges." You are supposing an (impossible) uniform distribution in a countable set. Longer inputs have smaller weights. At each point we have considered only a finite number of strings. Then it is always possible to add many times that many to tilt the probability back and forth. Note "halts for the first few inputs, loops on the next inputs for a lot more inputs". What is the probability that the program that I describe will halt? There is none. "Probability of Halting" is as ill-defined as almost everything Chaitin says. His first version of omega was >1 (depending on how the Turing Machine is encoded!) which took him 20 years to realize and add a kludge rule about programs being inside of other programs. Now how does he know what THAT will produce? His Invariance Theorem (that is said to be the foundation of his theory) is false. There is no bound between the lengths of the shortest program to perform a given function in two different programming languages (to justify using "length of the shortest program") because one language could require each character to be repeated 2 or more times due to its use over unreliable communications lines, and so the length can differ by any factor or absolute difference. "Length of the shortest program" is simple-minded nonsense - just like his "This is unprovable." 
use, the extent of his understanding of Godel - which is only the weaker Godel theorem based on Soundness and still weaker than Rosser's based on consistency - which is the maximum possible, because in an inconsistent system every sentence is provable, so there is no sentence that is undecidable.

The fault probably lies with me, but for some strange reason I keep getting the answer (for X3) 1.1666... rather than 1.4166... I've just tried it again - 1.166666667. Why is the '4' not appearing? For the previous iteration (X2) I came up with the correct value that you have there (i.e. 1.5), so...

To comment on your philosophical proposal, Greg. I don't know very much about the Riemann Hypothesis, (say), but is it important enough to be an axiom? For example, Fermat's Last Theorem was of little significance to number theory (so I've read), by comparison with the mathematical discoveries made in the attempt to prove it. How should a mathematician decide when enough is enough, and consign an otherwise useless hypothesis to the axiomatic waste bin?

I only wish that I had an opportunity when young to have studied Physics and Maths and sciences. It is only in my later years, and with grateful thanks to all those wondrous Internet sites, that I can read with enthusiasm and try to understand most of it. My closest was Physics O level at TAFE College and discussing these things with fellow students around a pint. An offer for my PhD came in my 50s... way too late for me. So I wish to thank you and other scholars for sharing the stories, the knowledge and the discussions for people like me.
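On the X3 question above: the standard Newton step for the square root of two is $x_{n+1} = (x_n + 2/x_n)/2$ with $x_1 = 1$, which gives $x_3 = 1.4166...$; the value 1.166666667 is exactly what comes out if the first term is left at $x_1 = 1$, since $(1 + 2/1.5)/2 = 1.1666...$. A quick check in Python:

    x = 1.0
    for n in range(2, 6):
        x = (x + 2 / x) / 2
        print(f"x{n} = {x}")
    # x2 = 1.5
    # x3 = 1.4166666666666665
    # x4 = 1.4142156862745097
    # x5 = 1.4142135623746899  (converging to sqrt(2) = 1.41421356...)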
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476409.38/warc/CC-MAIN-20240304002142-20240304032142-00390.warc.gz
CC-MAIN-2024-10
25,957
84
http://mathhelpforum.com/algebra/153073-contradictions-when-dividing-zero.html
code
Consider the following mathematical demonstration (which is also my signature):

Okay, I know that the issue here is a division by zero in the 3rd line down, since $a - b = 0$. This problem was shown to me in a textbook, and I found it really interesting how something illogical (division by zero) forces another illogical statement (2 = 1). I was wondering if anybody knew of some other clever, but simple, demonstrations of this type, because these are very interesting to me. Plus, I kinda wanted as many people to post them on here as possible, and then for the most clever yet simple example to be chosen as the winner. So, let's begin. (Also, it can involve calculus, it doesn't have to be super simple. I'm just hoping to not have to start talking about Rings and Fields, or Topological spaces and Isomorphisms in these examples; that's all I mean by "simple")

Post your favorite little algebraic/calculus demonstration of how a seemingly proper mathematical process can end up in a logical contradiction here, please. Also, please don't point out the "catch" (the 'catch' in the demonstration I supplied is the division by zero in line three), I'd like to see how many of them I can figure out on my own, and I'm sure others would like a chance to try and figure it out on their own also. Thanks in advance for any posts.
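The demonstration itself was posted as a forum-signature image and did not survive extraction. For readers who cannot see it, a standard version of this classic fallacy (the original may differ in layout) runs:

    \begin{align*}
      a &= b \\
      a^2 &= ab \\
      a^2 - b^2 &= ab - b^2 \\
      (a+b)(a-b) &= b(a-b) \\
      a + b &= b \\
      2b &= b \\
      2 &= 1
    \end{align*}

The illegal step is the passage from the fourth line to the fifth, where both sides are divided by $a - b$, which is zero.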
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122174.32/warc/CC-MAIN-20170423031202-00339-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,334
3
https://forums.puri.sm/t/will-librem-5-will-gnu-linux-libre-5-0-gnu/5241
code
I just read the Alexandre Oliva mail about the GNU Linux-libre 5.0-gnu release, and I wonder if the PureOS kernel version running on the Librem 5 will be GNU Linux-libre 5.0-gnu. In short, is it a kernel without any binary blobs/firmware? Could anyone tell me what kernel version is running on PureOS 8.0 Prometheus Beta 1?

If I'm not mistaken, Debian / PureOS have used a Linux-libre kernel for years, but not yet in v5.0.

To get the sweet AMD open source drivers for APUs and dedicated gaming cards you only need kernel 4.18 from Ubuntu 18.10, but PureOS has a lot of stuff removed or disabled that doesn't concern the Librem-specific hardware. What is so great about 5.0 other than it's newer?

I would like PureOS to have Linux-libre as its kernel. Binary blobs are not just removed; remaining obfuscated/proprietary code and proprietary software/firmware would be gone as well. That would reduce the potential for code exploits and any of their traces.

PureOS is using a GNU kernel, so I'm not sure what you mean? So the Librem 5 is unique in using a tweaked GNU kernel, which is amazing... IMHO

According to the Wikipedia page as quoted: Distributions that compile a free Linux kernel. These distros do not use the packaged Linux-libre but instead completely remove binary blobs from the mainline Linux kernel. The source is then compiled and the resulting free Linux kernel is used by default in these systems.

By the assertions of the Wikipedia page, PureOS is using a Linux kernel that only removes all binary blobs. I believe PureOS can use the Linux-libre kernel. Of course, testing is required before implementation. Note that Uruk GNU/Linux currently uses the Linux-libre kernel.

There is no assertion, because it doesn't say "is used by default in only these distributions." The fact that PureOS isn't on that list doesn't mean anything.

Wikipedia is not perfect, and worse when open-source people are typing up GNU things. Either way, if Purism is washing out the blobs, it is still a GNU kernel (Linux-libre). But I am really nervous about PureOS being managed by Purism's open source programmers.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506669.30/warc/CC-MAIN-20230924191454-20230924221454-00669.warc.gz
CC-MAIN-2023-40
2,088
19
https://processwire.com/talk/tags/s3/
code
module FieldtypeFileS3 - store files on AWS S3. fbg13 posted a topic in Modules/Plugins: FieldtypeFileS3 https://github.com/f-b-g-m/FieldtypeFileS3 The module extends the default FieldtypeFile and InputfieldFile modules and adds a few extra methods. For the most part it behaves just like the default files modules; the biggest difference is how you get the file's url. Instead of using $page->fieldname->eq(0)->url you use $page->fieldname->eq(0)->s3url(). Files are not stored locally; they are deleted when the page is saved. If page saving is omitted, the file remains on the local server until the page is saved. Another difference is the file size: the default module gets the file size directly from the local file, while here it's stored in the database. There is an option to store the files locally, intended in case one wants to stop using S3 and change back to local storage. What it does is change the s3url() method to serve files from the local server instead of S3, disable uploading to S3 and disable local file deletion on page save. It does not transfer files from S3 to the local server; that can be done with the aws-cli's sync function. Files stored on S3 have the same structure as they would have on the local server. Been struggling with this for quite a while, but I think I finally managed to make it work/behave the way I wanted. All feedback is welcome!

Processwire with AWS S3 and s3fs. Karl_T posted a topic in General Support: Short question: is it possible for ProcessWire to use AWS S3 with s3fs as a remote file system (mounted to the asset folder)? Please advise on anything that has to be taken care of. Background: I am currently trying to make ProcessWire run inside AWS Beanstalk, as I want to take advantage of the auto-scaling function my client wanted. I have found a discussion here: By reading this and the link inside, I realized that to use auto scaling I need to configure my web server to be stateless. So, I was looking for a method that can serve the purpose, and then I found s3fs. s3fs is a FUSE filesystem that allows you to mount an Amazon S3 bucket as a local filesystem, quoted from their GitHub. I guess that mounting an S3 bucket using s3fs onto the asset folder should be the right thing to do. My site needs many image uploads as it is an e-commerce site using Padloper, while the admin always uses the local file system for images. I have thought of using modules like AmazonS3Cloudfront or FieldtypeFileS3, but it seems those modules do not support my use case; s3fs suits better and is simpler. One of my concerns is that I am not sure whether the URLs of the images can be generated correctly by the default API like $image->url. Before implementing this, I would like to ask for advice from anyone who has implemented this with ProcessWire, as I am new to AWS. Is it possible? Any better alternatives? I think even without auto scaling, it is also good to separate assets away in some cases, e.g. to reduce requests. Thank you for your reading.

Getting Amazon AWS SDK to work with PW. artaylor posted a topic in General Support: Hi all, I am trying to use Amazon S3 to store video files for a client. I am having trouble getting the SDK to work. I am sure it is a stupid error on my part, but my head is sore from banging it against my desk and I thought I would finally ask for some help.
I am running PW 3.0.11 on NGINX.

1. Amazon recommends using Composer to install the SDK. I was not sure where in the path to install the SDK, so I put Composer in the /site folder and installed the SDK there (putting vendor at the same level as modules), then I put the require and uses statements in _init.php. I always got an error saying it could not load the aws or s3 classes from the library.

2. So, then I tried to use aws.phar. I put that in the /site directory but, once again, no matter what I do, it will not load, with the following error: require(): Failed opening required '/site/aws.phar'

The file is there with proper permissions and the code for loading is:

    // --- amazon S3 stuff
    require $config->urls->site . 'aws.phar';
    $s3 = new Aws\S3\S3Client([
        'version' => 'latest',
        'region' => 'us-standard',
    ]);

So, here are my questions:

1. In general, where is the correct place to put a PHP library? It is not a PW module, so I assume it should not go in the modules folder.
2. Should I use Composer to install the SDK? If so, where do I put the files? Should I add the AWS SDK to the main composer.json file or put it somewhere else?
3. If I don't use Composer, where do I put the aws.phar file so that PW can load it?
4. Should I not put the 'require' in the _init.php file and move it to another file (_func.php)?

I am sure there is a massive face-palm in my future when this gets sorted out. Thanks
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950373.88/warc/CC-MAIN-20230402012805-20230402042805-00516.warc.gz
CC-MAIN-2023-14
4,943
9
https://dougdefrank.wordpress.com/tag/microsoft/
code
In my previous post, I discussed how to install both Git and Microsoft Visual Studio Code on MacOS. This is the third and final part of my three-part blog series on integrating Git with VS Code for MacOS. In this post, I’ll cover how to configure Git and Microsoft Visual Studio Code to work together to synchronize with GitHub. Git with VS Code for MacOS blog series: Continue reading “Git with VS Code for Mac: Part 3 – Configuring Git and VS Code”

In my previous post, I discussed how to install Microsoft PowerShell and VMware PowerCLI on MacOS. This is the second part of my three-part blog series on configuring Git with VS Code for MacOS. In this post, I’ll cover how to download and install both Git and Microsoft Visual Studio Code. Git with VS Code for MacOS: Continue reading “Git with VS Code for Mac: Part 2 – Installing Git and VS Code”

Ever since I wrote my blog series Git Integration with VS Code, I’ve been wanting to do a similar series of posts for those of us who primarily run MacOS. While a lot of the same concepts from that series apply, I still wanted to go through the process step-by-step for those who may be completely new to this concept. As a VMware administrator, I want the ability to write or update my PowerCLI scripts on GitHub from whatever system I have with me. Sometimes it may be my corporate-issued Windows device, and other times it might be my personal MacBook Pro. Regardless, I want to be able to synchronize my work on both systems and platforms. Now that both Microsoft PowerShell and Visual Studio Code are available on both platforms, I can work on either platform at any time and pick right up where I may have left off. Continue reading “Git with VS Code for Mac: Part 1 – Installing PowerShell and VMware PowerCLI”

This script is an idea that spun off of my previous post, PowerCLI: Find UEFI-Enabled VMs. If you’re preparing to enable Secure Boot in a VMware environment, it may be helpful to identify the VMs that cannot be upgraded. As you might recall, enabling Secure Boot requires the following (a minimal PowerCLI sketch of this check appears after these excerpts):
- VMware vSphere 6.5 or higher
- Virtual hardware version 13 or higher
- VMs need to be configured with EFI boot firmware
Continue reading “PowerCLI: Find BIOS-Enabled VMs”

With all the news regarding the Spectre and Meltdown CPU vulnerabilities over the past several months, there’s been a greater focus on getting VMware virtual machines to virtual hardware version 9 or higher, as noted by Andrea Mauro’s post regarding these vulnerabilities. In addition to that, several companies and organizations may be looking to enable Secure Boot, a feature first introduced with vSphere 6.5. However, in order to enable Secure Boot, the virtual machine needs to be configured with both EFI boot firmware AND be on virtual hardware version 13 or higher. Continue reading “PowerCLI: Find UEFI-Enabled VMs”

For the fifth and final portion of my Git Integration with VS Code blog series, this post focuses on Synchronizing Content with GitHub. Previously in Part 4, we configured Visual Studio Code to establish a connection and download content from GitHub. In this post, I wanted to focus on staging, committing, and pushing content back up to GitHub. Continue reading “Git Integration with VS Code: Part 5 – Syncing with GitHub”

Now that PowerShell has been upgraded, and we installed both Git and VS Code, let’s go ahead and configure our environment for synchronization with GitHub. For me, this part was really the meat and potatoes of getting VS Code to integrate with Git and GitHub.
Installing the PowerShell Module: Now that VS Code is installed, let’s install the PowerShell Module so that it can properly understand PowerShell scripts and *.ps1 files. Continue reading “Git Integration with VS Code: Part 4 – Configuring Visual Studio Code”

This blog post picks up where Part 2 – Installing PowerCLI and Git left off. Now that we have some momentum going with this Git Integration with VS Code blog series, let’s keep it going with Part 3 – Installing Visual Studio Code! Continue reading “Git Integration with VS Code: Part 3 – Installing Visual Studio Code”

In case you missed it, this blog post picks up where Part 1 – Upgrading PowerShell left off. In continuing on with the Git Integration with VS Code blog series, I now present Part 2 – Installing PowerCLI and Git! NOTE: This process assumes a Windows-based installation, and for the Git install, most of the options were left at defaults unless otherwise noted. Continue reading “Git Integration with VS Code: Part 2 – Installing PowerCLI and Git”

So, I’ve been wanting to do this blog series for quite some time, and I’ve been working to put all of the various bits together. When I first started writing scripts for PowerCLI, I would simply write them using either the native Windows PowerShell ISE or some other text editor like Notepad++. It was fine for a while, but I soon began running into issues with version control. Before I knew it, I quickly ended up with a multitude of files in a folder. Things like script-draft.ps1, script-edited.ps1, script-edit2.ps1, script-working.ps1, script-final.ps1, script-FINAL-20180311.ps1, etc. It quickly got to the point where I didn’t know which files had the latest changes to them, or which ones had the newest feature I implemented (or was trying to implement). Does any of this sound familiar? At a recent Western PA VMUG meeting, I was introduced to this new product (to me, at least) called Visual Studio Code. Sure, it was another place to work on developing and even running PowerShell and PowerCLI scripts, but I had no idea about the concepts of version control or Git integration that lay within. All of that stuff was completely foreign to me, but sounded interesting. And, with the help of the #vCommunity and some of my own research, I finally got to a point where I understood how I could integrate my VS Code editor with my online GitHub account, and keep them in sync across multiple devices. Continue reading “Git Integration with VS Code: Part 1 – Upgrading PowerShell”
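Circling back to the two PowerCLI excerpts above: a minimal sketch of that Secure Boot readiness check might look like the following. It leans on the standard vSphere API properties (Config.Firmware and Config.Version) rather than the posts' actual scripts, and assumes an existing Connect-VIServer session.

# Minimal sketch: flag VMs not yet ready for Secure Boot, i.e. still on BIOS
# firmware or below virtual hardware version 13.
Get-VM | Where-Object {
    $_.ExtensionData.Config.Firmware -ne 'efi' -or
    [int]($_.ExtensionData.Config.Version -replace 'vmx-') -lt 13
} | Select-Object Name,
    @{N='HWVersion'; E={$_.ExtensionData.Config.Version}},
    @{N='Firmware';  E={$_.ExtensionData.Config.Firmware}}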
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141745780.85/warc/CC-MAIN-20201204223450-20201205013450-00707.warc.gz
CC-MAIN-2020-50
6,143
27
https://thegreattexascandleco.com/collections/rancher-edition
code
CHRISTMAS IN JULY SALE!!! ORDER ANY CANDLE FROM OUR CHRISTMAS IN JULY COLLECTION AND RECEIVE 25% OFF WITH CODE "CHRISTMAS". FREE SHIPPING on orders over $75!! (some restrictions apply) These handmade containers are more fluid and earthy than our other collections, but just as beautiful. On the face of these containers we’ve replaced our usual medallion with our brand, giving them a more “rancher” look and feel.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154158.4/warc/CC-MAIN-20210801030158-20210801060158-00610.warc.gz
CC-MAIN-2021-31
430
2
https://bugzilla.xamarin.com/96/9615/bug.html
code
Notice (2018-05-24): bugzilla.xamarin.com is now in read-only mode. Please join us on Visual Studio Developer Community and in the Mono organizations on GitHub to continue tracking issues. Bugzilla will remain available for reference in read-only mode. We will continue to work on open Bugzilla bugs, copy them to the new locations as needed for follow-up, and add the new items under Related Links. Our sincere thanks to everyone who has contributed on this bug tracker over the years. Thanks also for your understanding as we make these adjustments and improvements for the future. Please create a new report on Developer Community or GitHub with your current version information, steps to reproduce, and relevant error messages or log files if you are hitting an issue that looks similar to this resolved bug and you do not yet see a matching new report.

It would be nice to have "just my code" support for catchpoints - so catchpoints are ignored if there are no user code frames between the throw and the catch. This would prevent users getting confused about internally handled exceptions in Mono. Zoltan, how difficult would this be? It's doable. Wouldn't it be easier for md to just check whether the exception was caught in user code, and continue otherwise? How do we check if it is caught in user code? md has functionality to check whether an assembly is 'just my code' or not; that needs to be applied to the method on top of the stack. Cross-referencing what looks to be a related bug: Yes please! This would be great! Fixed in master. *** Bug 9476 has been marked as a duplicate of this bug. *** *** Bug 21028 has been marked as a duplicate of this bug. ***
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057589.14/warc/CC-MAIN-20210925021713-20210925051713-00531.warc.gz
CC-MAIN-2021-39
1,674
26
https://hollisterstaff.com/mix-mingle-navigating-your-boston-job-search/
code
Navigating Your Boston Job Search

Hollister Staffing and General Assembly recently partnered to host a panel discussion on navigating the job search process. Our panelists, Andressa Martins (Recruiting Manager, Harvard Business School), Connor Shaw (Technical Recruiter, Car Gurus), and Megan Wandishin (Team Builder, Drift), in a discussion moderated by Hollister's own Mike Raimondi, shared advice on how to avoid common mistakes in your job search. They gave tips to increase your chances of being selected to interview and improve your resume, and also provided advice on getting through the interview process. We were so excited to host this event and to share advice with everyone who attended!
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00404.warc.gz
CC-MAIN-2020-05
690
4
https://gamerzee.com/stick-trampoline/
code
Collect sticks with your stick figure by jumping up and down on the trampolines. Reach the max height requirements to unlock the next levels. Use the swinging rope to give you an extra advantage. You can also do tricks for extra points, all before the time runs out! Arrow Keys: Movement. Space: Grab and Release Swing. Q: Grab foot. P: In-game menu.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362891.54/warc/CC-MAIN-20211203151849-20211203181849-00547.warc.gz
CC-MAIN-2021-49
476
3
https://blogs.technet.microsoft.com/mikep/2008/02/20/new-microsoft-action-pack-details-here/
code
24,000 of our 34,000 Registered Partners currently subscribe to Action Pack, our quarterly toolkits of software and sales and marketing materials. These are created to help our partners stay competitive, meet their sales goals and grow their business. So what is in the new Action Pack?

New rules and how you get it: To enhance the value of Action Pack, from March 1st all of those renewing their subscription must take a course and pass an assessment. Partners must take one specific Microsoft online course from the required Partner Learning Centre course list and complete its associated assessment with a minimum score of 70 per cent. To continue receiving Action Pack, partners must pass an assessment every two years. Courses are all 100% subsidised and take no more than 60 minutes. The assessments will take no more than 30 minutes to complete.

If this might be of interest to your readers, in your blog posting would it be possible to highlight the following:
1. All information regarding the Microsoft Action Pack resides at: www.microsoft.com/uk/partner/actionpack
2. We have made available to partners an enrolment and re-enrolment guide at: www.microsoft.com/uk/partner/actionpack
3. If partners have a query regarding their Action Pack subscription, they should call: 0870 60 70 700 (press option 1)
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822668.19/warc/CC-MAIN-20171018013719-20171018033719-00249.warc.gz
CC-MAIN-2017-43
1,324
7
https://www.phoenixstyle.co.uk/products/marni-patchwork-snakeskin-cross-body-bag
code
Marni Patchwork Snakeskin cross body bag

Marni snakeskin cross body bag in burgundy, black and tan patchwork, with a small handle and cross-body strap. Has two compartments with a concealed zip inside pocket. This is a very rare item and a perfect addition to a winter wardrobe.
- Fabric: 100% leather and snakeskin
- Colour: Dark tan leather with patchwork-effect exterior in burgundy, black and tan
- Condition: Preloved, in excellent condition with dustbag
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823710.44/warc/CC-MAIN-20181212000955-20181212022455-00177.warc.gz
CC-MAIN-2018-51
474
5
https://community.articulate.com/discussions/articulate-storyline/uploading-scorm-file-to-storyline-module
code
Uploading SCORM file to Storyline module

I've never been asked to do this, so I'm unsure if it's even possible. My manager sent me a file (see attached) and explained it was a video that he wanted added to a Storyline file so we could upload it to Articulate Online. I have tried uploading almost every part of what he sent with no luck. If anyone has a spare moment, would they be able to take a look and see if it's even possible? Cheers and thanks in advance :)
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570871.10/warc/CC-MAIN-20220808183040-20220808213040-00152.warc.gz
CC-MAIN-2022-33
456
6
https://postgrespro.com/docs/postgrespro/9.5/release-pro-9-5-19-1.html
code
E.3. Postgres Pro 9.5.19.1

Release date: 2019-08-12

This release is based on PostgreSQL 9.5.19 and the previous Postgres Pro Standard 9.5 release. All improvements inherited from PostgreSQL 9.5.19 are listed in the PostgreSQL 9.5.19 Release Notes. Major enhancements over the previous Postgres Pro Standard release include:
- Improved planning accuracy for queries with OR clauses. Now sorting is performed correctly for such queries.
- Fixed implementation of the greater than (>) and not equal to (<>) operators for the

E.3.2. Migration to Version 9.5.19.1

Depending on your current installation, the upgrade procedure will differ. If you are running a recent Postgres Pro Standard 9.5 release, it is enough to install the 9.5.19.1 version into the same directory. However, if you are upgrading from PostgreSQL 9.5.x or lower versions of Postgres Pro Standard, some catalog changes should be applied, so the pgpro_upgrade script is required to complete the upgrade:
- If your database is in the default location, pgpro_upgrade is run automatically, unless you are prompted to run it manually.
- If you created your database in a non-default location, you must run pgpro_upgrade manually; before running the script, you must stop the postgres service. The script must be run on behalf of the user owning the database (typically postgres). Running pgpro_upgrade as root will result in an error. For details, see pgpro_upgrade.

To migrate to this version from vanilla PostgreSQL 9.5.4 or lower, perform a dump/restore using pg_dumpall.
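A minimal sketch of the manual pgpro_upgrade path, assuming a systemd unit named postgrespro (the actual service name varies by package and platform):

# Sketch only; adjust the unit name to your installation.
sudo systemctl stop postgrespro        # stop the postgres service first
sudo -u postgres pgpro_upgrade         # run as the database owner, never as root
sudo systemctl start postgrespro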
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358118.13/warc/CC-MAIN-20211127043716-20211127073716-00533.warc.gz
CC-MAIN-2021-49
1,495
17
https://www.dariah.eu/event/dariah-annual-event-2023-cultural-heritage-data-as-humanities-research-data/
code
DARIAH Annual Event 2023: Cultural Heritage Data as Humanities Research Data?

June 6, 2023 - June 9, 2023

Collections in libraries, archives and museums have been at the heart of humanities research for centuries. However, with the current focus on data-driven research, data management plans and the research data lifecycle, in what ways do we need to think differently about cultural heritage collections as data? Inspired by the proclamation “cultural heritage data is humanities research data”, this year’s DARIAH Annual Event will seek to explore what this means in practice. What does it mean for cultural heritage institutions to provide access to their ‘collections as data’? Do we need to think of different workflows for digitised and born-digital datasets? Can we think of a humanities research data continuum? These are only some of the questions we aim to explore at the 2023 DARIAH Annual Event.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100686.78/warc/CC-MAIN-20231207185656-20231207215656-00463.warc.gz
CC-MAIN-2023-50
944
5
https://ckeditor.com/old/forums/FCKeditor-2.x/How-can-I-set-Upload-directory
code
How can I set Upload directory? I integrated java-fckeditor version 2.5 into my project, but I have a little problem. When I use the connector and upload a file, it is stored in my deploy directory. If I redeploy the application, the directory is deleted. I want to set an external directory. Is that possible? Thanks, and sorry for my English.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100781.60/warc/CC-MAIN-20231209004202-20231209034202-00405.warc.gz
CC-MAIN-2023-50
315
6
http://www.meshmeld.com/python/2015/02/18/threatmap.html
code
One of the fun projects I got to design and write for work is a map showing current threats around the world as seen by the firewalls we make. Visualizing all of this is pretty demanding, as well as interesting. We see about 100 gigs of data a day, which we then distil down to a small subset that we can show. So we look at the IPS hits we see in the field that we get fed information on. We clean it up a bit, as well as anonymize it a bit. And then we display a small subset selected by a few criteria. And it looks something like this
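Behind a display like that, the distill-and-anonymize step might look roughly like this in Python (the field names, the last-octet anonymization, and the top-100 cutoff are all hypothetical, not from the actual project):

from collections import Counter

def anonymize(ip: str) -> str:
    """Zero the last octet so individual source hosts can't be identified."""
    a, b, c, _ = ip.split(".")
    return f"{a}.{b}.{c}.0"

def distill(events):
    """Reduce raw IPS hits to counts per (anonymized source, signature) pair."""
    counts = Counter((anonymize(e["src_ip"]), e["signature"]) for e in events)
    # Keep only the noisiest pairs so the map stays readable.
    return counts.most_common(100)

# Hypothetical usage with a tiny sample of events.
hits = [{"src_ip": "203.0.113.7", "signature": "SQL-Injection"}] * 3
print(distill(hits))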
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992721.31/warc/CC-MAIN-20210513014954-20210513044954-00003.warc.gz
CC-MAIN-2021-21
537
4
http://www.benzboost.com/livewall.php?u=990
code
Originally Posted by Terry@BMS Yep got it -- not really used to reading these logs so it didn't click at first. (Let's keep in mind I don't know crap about the N55 or tuning, so I'm kind of thinking out loud; I guess you can say I like attempting to think through problems.) The turbo, while not hitting the targets, seems to still make more boost than stock in the later rpm range. The key is to find out where that slack is. Why isn't it hitting targets? Now the hardware guy says tuning, the tuning guy says hardware -- understandable; this is new territory for both of you, while both have been successful in your respective fields. I'm unfamiliar with the MAP sensor on these cars -- does it limit how much boost you can run? Could you set boost targets for 23psi at 5500? Or maybe set boost targets at 17psi? Doing the former would be interesting to see how much boost is being made -- what if it hits more than 17psi that way? And the latter -- what if it hits lower? That could lead me to believe that maybe the DME is limiting something. What about trying to target 20psi at 5500 on stock turbos? (I recognize the harm in this; I'm only suggesting it for a single run.) I mean, if it hits 17 there, maybe the DME is limiting it? However, if it meets 20psi (or more than 17), maybe something hardware-related is being finicky?
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164004946/warc/CC-MAIN-20131204133324-00029-ip-10-33-133-15.ec2.internal.warc.gz
CC-MAIN-2013-48
1,318
6
https://www.efinancialcareers.fr/emploi-UK-Londres-Senior_Product_Engineer.id05669045
code
Senior Product Engineer

Borrowing heavily from Spotify's model of organising teams, the team is split into autonomous squads. These cross-functional groups focus on solving problems in a specific business area:
- The Acquisition team aim to optimise for growth by understanding the needs of customers. This squad supports the commercial and marketing side of the business, building tools to help them react faster and reach a wider audience. They also develop, run and analyse A/B tests to optimise our website, partnerships and marketing.
- The Scalability team aim to make our operations more efficient. In this squad, your customers are your colleagues: you are constantly improving internal systems used by our mortgage advisers and customer success associates every day to ensure our technology scales with our growth ambitions.

As a Senior Product Engineer, we expect you to integrate with or lead one of these teams. As well as this, you'll be focusing on ways to improve the entire Engineering department, through coding standards, projects and process changes. We're open to applications from specialists in front-end and back-end engineering, as well as full-stack engineers.

About you

Senior Product Engineers will design and write software, but will also represent Engineering when collaborating with product managers and the rest of the business. As a senior member of the Engineering team, your code quality, business understanding and professionalism will be exemplary. You are an expert at turning business requirements into technical ones. You are able to articulate technical concepts both to your fellow engineers and to the wider business. You have strong academic credentials, and excellent written and verbal communication skills. Having experience building online, consumer-facing products would be a bonus, as well as any experience in messaging systems, domain-driven design or building evolving data contracts.

About our working culture

We are truly cross-functional: as well as designers and product managers, our teams include data analysts, mortgage advisers and marketing specialists. We're focused on results: meetings are banned two days a week, and if you need to work from home to finish a project or leave early to manage your energy, we encourage you to take it. You feel comfortable that our values - being brave, investing in each other, making it simple, and owning it - reflect aspects of your personality and approach to work.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571027.62/warc/CC-MAIN-20190915093509-20190915115509-00021.warc.gz
CC-MAIN-2019-39
2,460
9
http://www.nvnews.net/vbulletin/showpost.php?p=2275468&postcount=3
code
Re: Always Tearing on 8600 GTS From what I can tell, unfortunately the Xfce compositor doesn't have any option for VSyncing either, so I suppose I can rule out tear-free X. So what can I do about my tear line in OpenGL? And is tear-free video out the window as well? (edit) P.S. There is quite a lot of tearing in X. Is this normal? If I drag a window across the screen there will always be one or more tears flickering in the window.
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375097757.36/warc/CC-MAIN-20150627031817-00275-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
431
4
https://help.viewpoint.com/en/vista-field-service/vista-field-service/getting-started/getting-started/modify-portal-security-and-login/link-pr-co-and-employee-number-to-a-va-user-profile
code
Link PR Co and Employee Number to a VA User Profile

Security setup determines the information that users are able to view in the portal. This setup relies on the user's VA User Profile being connected to the payroll company and employee number that they use to log in to the portal.
- In Vista, open the VA User Profile form ( ). Review and update the PR Co and Employee fields:
- PR Co: The payroll company that the user belongs to. Press F4 for a list of valid companies.
- Employee: The employee number to associate with the user. Press F4 for a list of employees.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817819.93/warc/CC-MAIN-20240421194551-20240421224551-00316.warc.gz
CC-MAIN-2024-18
566
6
https://www.colintemple.com/About
code
Colin Temple Photographs My name is Colin, and I am a person who has eyes, a brain and a camera. So, here's what happens. First, some light source emits photons, and those photons bounce off of matter in the world. Some of those photons enter my eyes, and information related to their arrangement is sent to my brain. In my brain, I develop some interpretation of what I am seeing. From time to time, that interpretation will create physiological responses -- feelings. It may also cause me to obtain new ideas. When either happens, and I have a camera at hand, I will try to capture some of the photons that are being reflected towards me with it. This allows me to record information about their arrangement in a way that allows me to then create objects or representations of that arrangement that will cause similar arrangements to enter the eyes of others. It is my hope, typically, that similar ideas or feelings will be inspired within them, or at least, that they will have an experience that is worthwhile. When afforded the time, I set out on expeditions specifically to find animals and objects that will reflect photons towards me in interesting patterns.
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496227.25/warc/CC-MAIN-20200329201741-20200329231741-00412.warc.gz
CC-MAIN-2020-16
1,167
8
http://www.exherbo.org/docs/gettingstarted.html
code
Currently there are stages available for the following platforms: These are stages targeting fairly new targets, or ones that do not have a lot of users. Users should be able to fix issues on these targets intelligently before complaining of issues. Easiest solution: SystemRescueCD

It is possible to get all of our shiny stuff via anonymous git. You can browse our repositories via cgit.

If you don’t already have Git set up:
git config --global user.name "<your real name>"
git config --global user.email "<your email>"

Register an account on Gerrit. You need a GitHub or Google account. The email address you wish to use in Gerrit must be configured in your GitHub account.

Clone the repository you want to change things in from Gerrit and set up the Change-Id hook. See ‘Cloning repositories’.

Set up cave to sync from this local clone. Edit /etc/paludis/repositories/<repo>.conf and change the line:
sync = git+https://git.exherbo.org/<repo>.git
to:
sync = git+https://git.exherbo.org/<repo>.git local: git+file:///home/<user>/<path to local repo>
Then, you can sync from the local clone with cave sync -s local <repo>. For more info on this see our workflow docs.

Make your changes in the local clone and commit the changes to the repository. Remember to actually commit the changes; otherwise, cave will not see them since it is using git pull to pull from the local clone. For more info on using Git see the Git book.

Testing out your changes:
cave sync -s local <repo>
cave resolve -x1 <package modified>
If they are not to your liking, go back to step 5, using git commit --amend and git rebase -i as much as you want.

Submit your changes to Gerrit:
git push origin HEAD:refs/for/master
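Putting those steps together, one pass through the edit-test-submit loop might look like this (the clone path and package file are hypothetical; the commands themselves are the ones above):

cd ~/exherbo/<repo>
$EDITOR packages/<category>/<pkg>/<pkg>.exheres-0   # edit the package you care about
git commit -a                                       # cave pulls from the clone, so commit first
cave sync -s local <repo>                           # sync the repository from the local clone
cave resolve -x1 <package modified>                 # build and install the modified package
git push origin HEAD:refs/for/master                # submit the change to Gerrit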
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813322.19/warc/CC-MAIN-20180221024420-20180221044420-00474.warc.gz
CC-MAIN-2018-09
1,686
26
https://sourceforge.net/directory/language%3Ashell/os%3Alinux/license%3Aapache/?sort=update
code
The Arc-live (X86-i386) is a live DVD and pendrive distribution with automatic hardware detection, and now includes the kernel-3.2.0-4-486/3.2.0-4-686-pae (arc-live-1.0.1-wheezy stable with live-installer).

GDCBox helps to acquire values in your network from measuring devices and transfers these values to a server. GDCBox is a package distributed for embedded Linux systems such as OpenWRT.

Portable Ubuntu Linux for Scientific Computing Released August 22, 2013 Lubuntu Blends: Biochemistry 13.04 (Raring) v5.44 Linux Kernel Image 3.8.0-29 Lubuntu Blends are pre-installed Wubi disk image remixes of Ubuntu and Debian Science meta packages. A custom boot loader allows installations to be copied and automatically booted from most external or USB flash drives. Once up and running, use the earlier Lubuntu Remix README instructions here until documentation is updated. https://sourceforge.net/projects/portable-linux/files/ Installation: 1. Download the Wubi installer http://releases.ubuntu.com/saucy/wubi.exe 2. Install any flavor of Ubuntu. 3. Swap out the root.disk with the ones provided here. Overview: LAMP stack running on localhost (127.0.0.1). Scientific, productivity & media packages include R (Rattle Data Miner), GridEngine, Condor, cooperative computing tools, WINE, LibreOffice, Evolution, Clinica, Neuro Debian Desktop, PsycoPy, OpenVibe, 3DSlicer, Paraview, Openshot. Cheers, Gregory Remington

Replaced by project buildaix: This project has been replaced by buildaix. Please refer to http://sourceforge.net/projects/buildaix/ for updates. This project area will not be updated! And the files here are way behind those at buildaix!

Full-featured free PACS based on dcm4chee and MySQL, with remote web accession, available for Linux in Debian packaging format for x86 32- and 64-bit processors. (KEYWORDS: PACS, DICOM, HL7, WORK LIST)

Just bring Linux for a better world. U M I, pronounced "ou" "ème" "aie" to approximate the expression "you & I", is meant to be a derivative of Ubuntu, a Linux distribution. U M I is a system that aims to be generalist, simple and tailored to your needs. M I perhaps as "Maths Infos", "Mission Impossible", "Micro Imagination", "Museum Incarnation", ..., "Mandela Ideologie", ..., "Magne Isapèt" :), ...; but in reality M I for "Me Inside": inside Linux, inside Debian, inside Ubuntu. This project designates all the associated logistics developments. The goal is to promote the development of secure and robust software combining beauty and elegance for Linux platforms.

DRBL-hadoop is a plugin for Diskless Remote Boot in Linux (DRBL). It will help you to set up and deploy a Hadoop cluster in a few steps. You can also use this Live CD as a teaching environment for Hadoop. It already has Cloudera CDH2 installed.

A collection of Python scripts which maintain a small Linux distribution for a web-managed VPN endpoint providing distributed authentication, roaming profiles, and PKI services. All management is done via encrypted HTTP. Uses LDAP, Kerberos, Apache, Python.

Open source model for Window and Door Co. It maintains leads, jobs, services, transactions, part detail, addresses and customers. This is a useful project and very easy to discuss with owner, operations, receiving etc. Serious people needed.

[Now hosted on GitHub] Linux parallel-shell-scripting tools for multi-processor and multi-core systems.

Ozganizator is a desktop organizer with some similarities to devilspie or the way AfterStep works, but which will work on almost any window manager.
The WSGW (Web Security GateWay) is a security-centric HTTP/S proxy, based on the Apache web server and some bundled third-party modules. The goal of the WSGW project is to provide a web application and XML "firewall" for the masses.

Open Source Application Server Appliance based on the open source GlassFish.

ShoutDepot is a tool that will allow full administration of a server farm aimed at the sale of ShoutCast-compatible servers. It will allow central administration from a Master server and an unlimited number of Slave or Relay servers.

A collection of add-on tools for working with the Lawson ERP package. PLEASE NOTE: This project is independent of Lawson Software, and the opinions/tools presented in this project do not necessarily reflect those of Lawson Software or the project owner.

Fully packaged Linux distribution to provide internet access and resource management to small and medium companies.

Run Virtual Machines On a LiveCD | Management GUI: This project provides LiveCDs that can boot a fully working virtual machine hypervisor without the need to install/configure. The CDs boot a graphical (X GUI) environment that is easy to use. VirtLive can be used to run multiple operating systems on a single machine. The first release is built with Debian Squeeze for amd64 with libvirt and Qemu/KVM. A management console is provided by Virt Manager.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524127095762.40/warc/CC-MAIN-20180427075937-20180427095937-00418.warc.gz
CC-MAIN-2018-17
4,865
21
http://forums.gardenweb.com/discussions/1489005/still-learning-to-prune
code
Still learning to prune ... I used to know a lot more about pruning than I do now, but I like to think that I'm starting to gain on it a little. This year what really sank in for me (for pomes, at least) is the need to remove certain competing branches. I've always had a hard time removing wood. The bigger the branch, the more difficult it is for me to remove it. So once a branch gets any size I'm inclined to leave it. Then I learned (from reading here, thank you all!) that leaders have to be allowed to dominate if they are ever going to support productive wood. Pretenders to the throne cannot be tolerated; this tree growing stuff is not about democracy. Suddenly I found myself enthusiastically removing any branches that were more than 1/3 the size of the branch they emerged from. Then I realized I don't want to necessarily remove the whole branch - if it's a spur-producing, well-placed, pencil-sized branch I can just cut it back hard to a bud and in a couple of years it'll be producing fruit again. I think I'm starting to see both my pear and my apple more clearly now. I've quit being afraid of big cuts - amazing what a tree will come back from. And it's amazing what some of the new growth will do if competition is removed. Anyhow, that's where I am now. This is a thank you note to those many here who have helped.
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463460.96/warc/CC-MAIN-20150226074103-00238-ip-10-28-5-156.ec2.internal.warc.gz
CC-MAIN-2015-11
1,334
7
https://forums.tomsguide.com/threads/emachines-netbook-opens-to-black-screen.256215/
code
Nothing appears on screen. I have not plugged it into an external monitor, but when I adjust the brightness the tones of grey change from light to dark, so I'm thinking something is keeping Windows from getting started. Mine does the same thing. I connected it to an external monitor and it works fine, but nothing from the "inbuilt" monitor. Someone mentioned something about removing the battery and doing something with the power button. Tones of grey suggest a damaged LCD, but you should really hook it up to an external monitor to test whether your computer is functioning normally or not. I have a similar problem with my emachines laptop. It does work when I hook up the external monitor. However, there is still nothing but an extremely faint display on the built-in monitor. How do I fix it?
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151531.67/warc/CC-MAIN-20210724223025-20210725013025-00000.warc.gz
CC-MAIN-2021-31
782
4